<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>Category:</title>
  <link rel="alternate" href="http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/83231" />
  <subtitle />
  <id>http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/83231</id>
  <updated>2026-03-09T08:53:48Z</updated>
  <dc:date>2026-03-09T08:53:48Z</dc:date>
  <entry>
    <title>A Comparative Study of Spike-and-Slab and Horseshoe Prior for Robustness in High Dimension</title>
    <link rel="alternate" href="http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93452" />
    <author>
      <name>練政楷</name>
    </author>
    <author>
      <name>Cheng-Kai Lien</name>
    </author>
    <id>http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93452</id>
    <updated>2024-08-01T16:12:05Z</updated>
    <published>2024-01-01T00:00:00Z</published>
    <summary type="text">Title: A Comparative Study of Spike-and-Slab and Horseshoe Prior for Robustness in High Dimension
Author: 練政楷; Cheng-Kai Lien
Abstract: When the number of observations (n) exceeds the number of variables (p), classical frequentist approaches to variable selection are commonly used and perform well in most situations. However, when the data are sparse and high-dimensional, that is, when the number of variables (p) greatly exceeds the number of observations (n), traditional frequentist approaches can face challenges. This research employs Bayesian approaches as an alternative for variable selection. By incorporating prior distributions, Bayesian approaches provide a flexible way of dealing with sparse, high-dimensional data. This study first compares two Bayesian priors, the Spike-and-Slab prior and the Horseshoe prior, each of which has pros and cons when working with sparse, high-dimensional data, and investigates their performance differences under a variety of settings. Furthermore, we select the more robust of the two priors and use the "posterior winsorized mean" in place of the "posterior mean," comparing the differences in variable selection between the two. The "posterior winsorized mean" refers to applying a recognized robust method to the previously selected robust prior, so that the winsorization is justified rather than arbitrary. The thesis compares the performance of the methods across different datasets and parameter settings, determining which provides greater resistance against outliers. This study aims to provide a more effective and robust Bayesian approach for variable selection in sparse, high-dimensional data, as well as practical guidance for real-data applications.</summary>
    <dc:date>2024-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>A comparative study of age-specific mortality forecast</title>
    <link rel="alternate" href="http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93770" />
    <author>
      <name>鄭君淑</name>
    </author>
    <author>
      <name>JYUN-SHU JHENG</name>
    </author>
    <id>http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93770</id>
    <updated>2024-08-07T17:14:34Z</updated>
    <published>2024-01-01T00:00:00Z</published>
    <summary type="text">Title: A comparative study of age-specific mortality forecast
Author: 鄭君淑; JYUN-SHU JHENG
Abstract: With demographic change and societal aging, the modeling and forecasting of mortality rates have garnered increasing attention, especially in the insurance sector, where accurate mortality forecasts help insurance companies formulate policies effectively. In this study, we begin with the Lee-Carter model and extend the original singular value decomposition to a functional version, yielding a pair of time-specific and age-specific singular functions. We then conduct local linear extrapolation based on the year-specific singular functions for forecasting and compare the performance with that of the Lee-Carter model. In our real data analysis, we use mortality data from Taiwan to demonstrate that our forecasts are more accurate than those of the Lee-Carter model, a conclusion further supported by our simulation study.</summary>
    <dc:date>2024-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Adapted Greedy Segmentation for Functional Change Detection</title>
    <link rel="alternate" href="http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98090" />
    <author>
      <name>李昀樵</name>
    </author>
    <author>
      <name>Yun-Chiao Lee</name>
    </author>
    <id>http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98090</id>
    <updated>2025-07-24T16:09:24Z</updated>
    <published>2025-01-01T00:00:00Z</published>
    <summary type="text">Title: Adapted Greedy Segmentation for Functional Change Detection
Author: 李昀樵; Yun-Chiao Lee
Abstract: Change-point detection is a critical topic in statistical analysis, especially for functional data, where traditional projection-based methods may fail to capture structural shifts accurately. To address this challenge, we propose an Adapted Greedy Segmentation (AGS) framework that integrates discrepancy-enhanced covariance operators with an iterative basis refinement procedure. This approach allows the basis functions to adaptively align with the directions of structural changes, thereby improving detection sensitivity and localization accuracy.
The proposed method offers several advantages: (1) it automatically selects basis directions that reflect segment-wise changes; (2) it does not rely on restrictive assumptions such as the "at most one change-point" condition, nor does it require the number of change-points to be specified in advance; (3) it demonstrates improved empirical performance over existing methods and maintains reliable detection accuracy under both high-noise and temporally dependent settings.
Extensive simulation studies and real data applications confirm the robustness and effectiveness of the AGS approach. The results suggest that our method achieves superior detection accuracy compared to conventional projection-based and fully functional techniques, and holds promise for broader applications in climate research, biomedical monitoring, and smart environments.</summary>
    <dc:date>2025-01-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Detection of Multiple Changepoints in a Functional Data Sequence with Constrained Optimal Partitioning</title>
    <link rel="alternate" href="http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/94229" />
    <author>
      <name>蔡知諺</name>
    </author>
    <author>
      <name>Chih-Yen Tsai</name>
    </author>
    <id>http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/94229</id>
    <updated>2024-08-15T16:20:24Z</updated>
    <published>2024-01-01T00:00:00Z</published>
    <summary type="text">Title: Detection of Multiple Changepoints in a Functional Data Sequence with Constrained Optimal Partitioning
Author: 蔡知諺; Chih-Yen Tsai
Abstract: In this thesis, we propose a method for penalized estimation of multiple changepoints in functional data sequences. We introduce a multiple changepoint detection method enhanced with a penalty function. This method eliminates the need to pre-specify the number of changepoints, enabling simultaneous estimation of both the locations and the number of changepoints. Additionally, to prevent any two consecutive changepoints from lying too close together, we introduce an additional constraint. We use two methods, the signal-to-noise ratio and sample splitting, to choose the optimal penalty parameter. Finally, we compare our method with others, including Multiple Changepoint Isolation (MCI) and binary segmentation, demonstrating superior performance under several common scenarios.</summary>
    <dc:date>2024-01-01T00:00:00Z</dc:date>
  </entry>
</feed>

