NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/6375
Full metadata record (DC field: value [language])

dc.contributor.advisor: 許永真
dc.contributor.author: Yi-ting Chiang [en]
dc.contributor.author: 蔣益庭 [zh_TW]
dc.date.accessioned: 2021-05-16T16:27:31Z
dc.date.available: 2013-02-01
dc.date.available: 2021-05-16T16:27:31Z
dc.date.copyright: 2013-02-01
dc.date.issued: 2013
dc.date.submitted: 2013-01-28
dc.identifier.citation:
[1] J. Baxter. A model of inductive bias learning. Journal of Artificial Intelligence Research, 12:149–198, March 2000.
[2] S. Ben-David, J. Blitzer, K. Crammer, and F. Pereira. Analysis of representations for domain adaptation. In Advances in Neural Information Processing Systems 19, pages 137–144, Cambridge, MA, 2007. MIT Press.
[3] S. Ben-David and R. S. Borbely. A notion of task relatedness yielding provable multiple-task learning guarantees. Machine Learning, 73(3):273–287, Jan. 2008.
[4] S. Ben-David, T. Lu, T. Luu, and D. Pál. Impossibility theorems for domain adaptation. Journal of Machine Learning Research – Proceedings Track, 9:129–136, 2010.
[5] D. J. Berndt and J. Clifford. Using dynamic time warping to find patterns in time series. In KDD Workshop, pages 359–370, 1994.
[6] B. Cao, S. J. Pan, Y. Zhang, D.-Y. Yeung, and Q. Yang. Adaptive transfer learning. In AAAI, 2010.
[7] R. Caruana. Multitask learning: A knowledge-based source of inductive bias. In Proceedings of the 10th International Conference on Machine Learning, pages 41–48. Morgan Kaufmann, 1993.
[8] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27, 2011.
[9] Y.-t. Chiang, W.-C. Fang, and J. Y.-j. Hsu. Knowledge source selection by estimating distance between datasets. In Proceedings of the 2012 Conference on Technologies and Applications of Artificial Intelligence (TAAI 2012), November 2012.
[10] P. Comon. Independent component analysis, a new concept? Signal Processing, 36(3):287–314, Apr. 1994.
[11] D. Cook, K. D. Feuz, and N. C. Krishnan. Transfer learning for activity recognition: A survey. Knowledge and Information Systems, 2012.
[12] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley-Interscience, 1991.
[13] K. Crammer, M. Kearns, and J. Wortman. Learning from multiple sources. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 321–328. MIT Press, Cambridge, MA, 2007.
[14] J. Edmonds. Paths, trees, and flowers. Canadian Journal of Mathematics, 17:449–467, 1965.
[15] D. M. Endres and J. E. Schindelin. A new metric for probability distributions. IEEE Transactions on Information Theory, 49(7):1858–1860, Sept. 2006.
[16] K. D. Forbus, D. Gentner, and K. Law. MAC/FAC: A model of similarity-based retrieval. Cognitive Science, 19(2):141–205, 1995.
[17] A. Frank and A. Asuncion. UCI machine learning repository, 2010.
[18] D. Gale and L. S. Shapley. College admissions and the stability of marriage. The American Mathematical Monthly, 69(1):9–15, January 1962.
[19] D. Gentner. Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2):155–170, 1983.
[20] A. Gretton, O. Bousquet, A. Smola, and B. Schölkopf. Measuring statistical dependence with Hilbert-Schmidt norms. In Proceedings of the 16th International Conference on Algorithmic Learning Theory, ALT '05, pages 63–77, Berlin, Heidelberg, 2005. Springer-Verlag.
[21] P. Hall. On representatives of subsets. Journal of the London Mathematical Society, 10(1):26–30, 1935.
[22] D. H. Hu, V. W. Zheng, and Q. Yang. Cross-domain activity recognition via transfer learning. Pervasive and Mobile Computing, 7(3):344–358, June 2011.
[23] A. Hyvärinen, J. Karhunen, and E. Oja. Independent Component Analysis. John Wiley & Sons, 2001.
[24] I. T. Jolliffe. Principal Component Analysis. Springer Series in Statistics. Springer New York, second edition, 2002.
[25] E. Keogh. Data mining and machine learning in time series databases. Tutorial in SIGKDD 2004, 2004.
[26] E. Keogh, S. Lonardi, and C. A. Ratanamahatana. Towards parameter-free data mining. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 206–215, Seattle, WA, August 2004.
[27] M. Klenk and K. Forbus. Domain transfer via cross-domain analogy. Cognitive Systems Research, 10(3):240–250, Sept. 2009.
[28] S. Kullback and R. A. Leibler. On information and sufficiency. The Annals of Mathematical Statistics, 22(1):79–86, 1951.
[29] S.-I. Lee, V. Chatalbashev, D. Vickrey, and D. Koller. Learning a meta-level prior for feature relevance from multiple related tasks. In Proceedings of the 24th International Conference on Machine Learning, ICML '07, pages 489–496, New York, NY, USA, 2007. ACM.
[30] M. Li, X. Chen, X. Li, B. Ma, and P. M. Vitányi. The similarity metric. IEEE Transactions on Information Theory, 50(12):3250–3264, December 2004.
[31] J. Lin, R. Khade, and Y. Li. Rotation-invariant similarity in time series using bag-of-patterns representation. Journal of Intelligent Information Systems, 2012.
[32] M. M. H. Mahmud and S. Ray. Transfer learning using Kolmogorov complexity: Basic theory and empirical evaluations. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 985–992. MIT Press, Cambridge, MA, 2008.
[33] A. Martins. String kernels and similarity measures for information retrieval. Technical report, Institute for Systems and Robotics, CMU-Portugal, 2006.
[34] L. Mihalkova, T. Huynh, and R. J. Mooney. Mapping and revising Markov logic networks for transfer learning. In Proceedings of the 22nd National Conference on Artificial Intelligence – Volume 1, AAAI '07, pages 608–614. AAAI Press, 2007.
[35] L. Mihalkova and R. J. Mooney. Transfer learning by mapping with minimal target data. In Proceedings of the AAAI-08 Workshop on Transfer Learning for Complex Tasks, Chicago, IL, July 2008.
[36] S. J. Pan, I. W. Tsang, J. T. Kwok, and Q. Yang. Domain adaptation via transfer component analysis. In Proceedings of the 21st International Joint Conference on Artificial Intelligence, IJCAI '09, pages 1187–1192, San Francisco, CA, USA, 2009. Morgan Kaufmann Publishers Inc.
[37] S. J. Pan and Q. Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359, 2010.
[38] N. Quadrianto, A. Smola, T. Caetano, S. Vishwanathan, and J. Petterson. Multitask learning without label correspondences. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 1957–1965, 2010.
[39] L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. In Proceedings of the IEEE, pages 257–286, 1989.
[40] R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng. Self-taught learning: Transfer learning from unlabeled data. In Proceedings of the 24th International Conference on Machine Learning, pages 759–766, New York, NY, USA, 2007. ACM.
[41] P. Rashidi and D. J. Cook. Activity recognition based on home to home transfer learning. In Proceedings of the AAAI Plan, Activity, and Intent Recognition Workshop, pages 45–52, 2010.
[42] C. E. Shannon. A mathematical theory of communication. The Bell System Technical Journal, 27(3):379–423, July 1948.
[43] P. C. Shields. Two divergence-rate counterexamples. Journal of Theoretical Probability, 6(3):521–545, 1993.
[44] E. M. Tapia. Activity Recognition in the Home Setting Using Simple and Ubiquitous Sensors. PhD thesis, Massachusetts Institute of Technology, 2003.
[45] E. M. Tapia, S. S. Intille, and K. Larson. Activity recognition in the home setting using simple and ubiquitous sensors. In Proceedings of PERVASIVE 2004, 2004.
[46] S. Thrun. Is learning the n-th thing any easier than learning the first? In D. Touretzky and M. Mozer, editors, Advances in Neural Information Processing Systems (NIPS) 8, pages 640–646, Cambridge, MA, 1996. MIT Press.
[47] L. Torrey and J. Shavlik. Handbook of Research on Machine Learning Applications and Trends. IGI Global, Aug. 2009.
[48] V. Van Asch. Domain similarity measures: On the use of distance metrics in natural language processing. PhD dissertation, University of Antwerp, Antwerp, Belgium, 2012.
[49] T. van Kasteren, G. Englebienne, and B. Kröse. Recognizing activities in multiple contexts using transfer learning. In Proceedings of the AAAI Fall Symposium on AI in Eldercare: New Solutions to Old Problems. AAAI Press, 2008.
[50] T. van Kasteren, G. Englebienne, and B. J. A. Kröse. Transferring knowledge of activity recognition across sensor networks. In Pervasive Computing, pages 283–300, 2010.
[51] H. Wang and Q. Yang. Transfer learning by structural analogy. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, pages 513–518, 2011.
[52] B. Wei and C. Pal. Heterogeneous transfer learning with RBMs. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, pages 531–536, 2011.
[53] Y. Yoshida, T. Hirao, T. Iwata, M. Nagata, and Y. Matsumoto. Transfer learning for multiple-domain sentiment analysis – identifying domain-dependent/independent word polarity. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, 2011.
[54] Y. Zhang and D.-Y. Yeung. Multi-task learning in heterogeneous feature spaces. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, 2011.
[55] Y. Zhu, Y. Chen, Z. Lu, S. J. Pan, G.-R. Xue, Y. Yu, and Q. Yang. Heterogeneous transfer learning for image classification. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, August 2011.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/6375
dc.description.abstract [zh_TW; translated to English]: Most activity recognition research applies machine learning methods. Traditionally, machine learning builds a model under the assumption that the environment in which the model is built and the environment in which it is applied are the same. This assumption does not always hold. Building activity recognition models for a smart home becomes more practical if data can first be collected in a laboratory environment and transfer learning is used to reduce the cost of data collection.
Transfer learning removes the requirement that these environments be identical, allowing the training and testing datasets to differ, and has been applied successfully to many machine learning problems. This work introduces a knowledge transfer framework for activity recognition. Specifically, we define a feature-based framework that automatically computes new feature representations to transfer knowledge. We validate it by building activity recognition models under two scenarios: the first assumes that a labelled source-domain dataset and background knowledge about both the source and target domains are available, but no target-domain dataset; the second assumes that labelled source- and target-domain datasets are both available. Experiments show that the framework can successfully extract and transfer knowledge between two different environments.
dc.description.abstract [en]: Most activity recognition research makes use of machine learning methods. Traditional machine learning methods build a model under the assumption that it will be applied in the same environment where the training dataset was collected, which is not always realistic, and collecting a dataset in every environment where the model will be deployed is infeasible. Building activity recognition models for a smart home is more practical if we can collect the dataset in a laboratory environment and use transfer learning to reduce the effort of data collection.
Transfer learning relaxes this constraint, so that the training and testing datasets may differ, and it has had considerable success in many machine learning problems. In this work, we propose a feature-based knowledge transfer framework for activity recognition that automatically finds a new formulation of features to transfer knowledge between two domains. We apply this framework to train activity recognition models under two scenarios. In the first scenario, a labelled source dataset and background knowledge about the source and target domains are available, but no target-domain data samples. In the second scenario, labelled source- and target-domain datasets are both available. The experimental results show that this framework can successfully extract and transfer knowledge between two different domains.
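The cross-domain idea described in the abstract, matching each source-domain feature to a target-domain feature by comparing their distributions, can be sketched in a few lines. This is an illustrative sketch only, not the thesis's implementation: the sensor names and probability profiles below are invented, and a brute-force matcher over permutations stands in for the graph-matching algorithms listed in the table of contents. The divergence used is the Jensen-Shannon divergence named in the keywords.

```python
import math
from itertools import permutations

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two discrete distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        # Kullback-Leibler divergence; terms with a_i = 0 contribute nothing.
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def align_features(source, target):
    """Map each source feature to a target feature, minimising total JS divergence.

    Brute force over permutations; fine for small feature sets, whereas a real
    system would use a proper graph-matching / assignment algorithm.
    """
    names_s, names_t = list(source), list(target)
    best, best_cost = None, float("inf")
    for perm in permutations(names_t, len(names_s)):
        cost = sum(js_divergence(source[s], target[t])
                   for s, t in zip(names_s, perm))
        if cost < best_cost:
            best, best_cost = dict(zip(names_s, perm)), cost
    return best

# Hypothetical binary-sensor profiles: [P(on), P(off)] per feature.
source = {"kitchen_motion": [0.30, 0.70], "front_door": [0.05, 0.95]}
target = {"sensor_A": [0.06, 0.94], "sensor_B": [0.28, 0.72]}
print(align_features(source, target))
# → {'kitchen_motion': 'sensor_B', 'front_door': 'sensor_A'}
```

The matcher pairs features whose firing-rate distributions are closest, which is the intuition behind aligning unlabelled sensors across two homes before reusing a trained model.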
dc.description.provenance [en]: Made available in DSpace on 2021-05-16T16:27:31Z (GMT). No. of bitstreams: 1; ntu-102-D94922021-1.pdf: 1638810 bytes, checksum: eabff20a6653a820e04322227b5c3d67 (MD5). Previous issue date: 2013.
dc.description.tableofcontents:
Certification i
Acknowledgments iii
致謝 (Chinese Acknowledgements) v
中文摘要 (Chinese Abstract) vii
Abstract ix
1 Introduction 1
1.1 Problem Definition 2
1.1.1 Scenario Settings 3
1.2 Proposed Solution 4
1.3 Contribution 4
1.4 Overview 5
2 Related Work 7
2.1 Transfer Learning 7
2.2 Transfer Learning in Activity Recognition 8
2.3 Similarity Measures 9
2.3.1 For i.i.d. Random Variables 9
2.3.2 For Non-i.i.d. Random Variables 10
2.3.3 Similarity Measures for Strings 12
2.3.4 Similarity Measures Using Kernel Method 13
3 Feature-based Knowledge Transfer Framework 17
3.1 Feature Similarity 19
3.1.1 Issues on Measuring Feature Similarity 19
3.2 Feature Reformulation 20
3.2.1 Feature Reformulation by Using Data Samples 22
3.2.2 Feature Reformulation by Profiles 25
3.3 Feature Alignment 27
3.3.1 Graph Matching Algorithms 28
3.3.2 Feature Mapping by Graph Matching 29
3.3.3 Measuring Divergence of Datasets 30
4 Experiments 31
4.1 Datasets and Data Preprocessing 32
4.2 The Feature Reformulation Procedure 32
4.3 Estimate Feature Similarity 34
4.3.1 Feature Similarity Estimation by Using Data Samples 34
4.3.2 Feature Similarity Estimation by Profiles 35
4.4 The Feature Alignment Procedure 35
4.5 Experiments and Results 38
4.5.1 Knowledge Transfer by Using Data Samples 38
4.5.2 Knowledge Transfer by Profiles 39
4.5.3 Further Analysis 46
5 Conclusion 49
5.1 Discussion 49
5.1.1 About the Experiments 50
5.1.2 Limitations 50
5.1.3 Advantages 51
5.2 Future Work 51
Bibliography 53
dc.language.iso: en
dc.title: 知識轉移於智慧家庭環境之行為辨識應用 [zh_TW]
dc.title: Building Activity Recognition Models Using Transfer Learning in Smart Home Environment [en]
dc.type: Thesis
dc.date.schoolyear: 101-1
dc.description.degree: 博士 (doctoral)
dc.contributor.oralexamcommittee: 徐宏民, 林軒田, 徐讚昇, 王傑智, 傅立成
dc.subject.keyword: 傑森-向農偏差, 轉移學習, 行為辨識, 機器學習, 智慧家庭 [zh_TW]
dc.subject.keyword: Jensen-Shannon divergence, transfer learning, activity recognition, machine learning, smart home [en]
dc.relation.page: 57
dc.rights.note: 同意授權 (authorized for worldwide open access)
dc.date.accepted: 2013-01-28
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering) [zh_TW]
Appears in collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in this item: ntu-102-1.pdf (1.6 MB, Adobe PDF)

