Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/27677
Full metadata record
DC Field | Value | Language
dc.contributor.advisor: 洪士灝
dc.contributor.author: Jen-Hao Chen [en]
dc.contributor.author: 陳人豪 [zh_TW]
dc.date.accessioned: 2021-06-12T18:15:05Z
dc.date.available: 2009-09-03
dc.date.copyright: 2007-09-03
dc.date.issued: 2007
dc.date.submitted: 2007-08-30
dc.identifier.citation:
[1] Apple Inc., “iPhone.” http://www.apple.com/iphone/, 2007.
[2] S. Ghemawat, H. Gobioff, and S.-T. Leung, “The Google file system,” SOSP ’03: Proceedings of the 19th ACM Symposium on Operating Systems Principles, pp. 29–43, New York, NY, USA: ACM Press, 2003.
[3] K. Skadron, M. Martonosi, D. I. August, M. D. Hill, D. J. Lilja, and V. S. Pai, “Challenges in computer architecture evaluation,” Computer, vol. 36, no. 8, pp. 30–36, 2003.
[4] Standard Performance Evaluation Corporation, “The SPEC CPU2006 Benchmark.” http://www.spec.org/cpu2006, 2006.
[5] S. C. Woo, M. Ohara, E. Torrie, J. P. Singh, and A. Gupta, “The SPLASH-2 programs: Characterization and methodological considerations,” Proceedings of the 22nd International Symposium on Computer Architecture, pp. 24–36, Santa Margherita Ligure, Italy, 1995.
[6] M. Guthaus, J. Ringenberg, D. Ernst, T. Austin, T. Mudge, and R. Brown, “MiBench: A free, commercially representative embedded benchmark suite,” Proceedings of the 2001 IEEE International Workshop on Workload Characterization (WWC-4), pp. 3–14, 2001.
[7] C. Lee, M. Potkonjak, and W. H. Mangione-Smith, “MediaBench: A tool for evaluating and synthesizing multimedia and communications systems,” International Symposium on Microarchitecture, pp. 330–335, 1997.
[8] J. Yi and D. Lilja, “Simulation of computer architectures: Simulators, benchmarks, methodologies, and recommendations,” IEEE Transactions on Computers, vol. 55, no. 3, pp. 268–280, 2006.
[9] T. Austin, E. Larson, and D. Ernst, “SimpleScalar: An infrastructure for computer system modeling,” Computer, vol. 35, no. 2, pp. 59–67, 2002.
[10] D. Brooks, V. Tiwari, and M. Martonosi, “Wattch: A framework for architectural-level power analysis and optimizations,” ISCA, pp. 83–94, 2000.
[11] C. Hughes, V. Pai, P. Ranganathan, and S. Adve, “Rsim: Simulating shared-memory multiprocessors with ILP processors,” Computer, vol. 35, no. 2, pp. 40–49, 2002.
[12] P. S. Magnusson, M. Christensson, J. Eskilson, D. Forsgren, G. Hallberg, J. Hogberg, F. Larsson, A. Moestedt, and B. Werner, “Simics: A full system simulation platform,” Computer, vol. 35, no. 2, pp. 50–58, 2002.
[13] ARM Inc., “RealView SoC Designer.” http://www.arm.com/products/DevTools/SoCDesigner.html.
[14] M. Vachharajani, N. Vachharajani, D. A. Penry, J. A. Blome, and D. I. August, “The Liberty simulation environment, version 1.0,” Performance Evaluation Review: Special Issue on Tools for Architecture Research, vol. 31, no. 4, March 2004.
[15] S. L. Graham, P. B. Kessler, and M. K. McKusick, “gprof: A call graph execution profiler,” SIGPLAN Symposium on Compiler Construction, pp. 120–126, 1982.
[16] Intel Corporation, “The Intel VTune Performance Analyzer.” http://www.intel.com/software/products/vtune.
[17] V. Krishnan and J. Torrellas, “A chip-multiprocessor architecture with speculative multithreading,” IEEE Transactions on Computers, vol. 48, no. 9, pp. 866–880, 1999.
[18] L. Spracklen and S. G. Abraham, “Chip multithreading: Opportunities and challenges,” HPCA ’05: Proceedings of the 11th International Symposium on High-Performance Computer Architecture, pp. 248–252, Washington, DC, USA: IEEE Computer Society, 2005.
[19] P. Kongetira, K. Aingaran, and K. Olukotun, “Niagara: A 32-way multithreaded SPARC processor,” IEEE Micro, vol. 25, no. 2, pp. 21–29, 2005.
[20] Intel Corporation, “Intel Dual-Core Processors - The First in the Multi-core Revolution.” http://www.intel.com/technology/computing/dual-core/.
[21] C. Keltcher, K. McGrath, A. Ahmed, and P. Conway, “The AMD Opteron processor for multiprocessor servers,” IEEE Micro, vol. 23, no. 2, pp. 66–76, 2003.
[22] H. Hofstee, “Power efficient processor architecture and the Cell processor,” HPCA ’05: Proceedings of the 11th International Symposium on High-Performance Computer Architecture, pp. 258–262, 2005.
[23] B. M. Cantrill, M. W. Shapiro, and A. H. Leventhal, “Dynamic instrumentation of production systems,” Proceedings of the 2004 USENIX Annual Technical Conference, July 2004.
[24] K. Yaghmour and M. Dagenais, “The Linux Trace Toolkit,” Linux Journal, 2005.
[25] P. R. Panda, “SystemC: A modeling platform supporting multiple design abstractions,” ISSS ’01: Proceedings of the 14th International Symposium on Systems Synthesis, pp. 75–80, New York, NY, USA: ACM Press, 2001.
[26] The Open SystemC Initiative, “SystemC 2.1 v1.” http://www.systemc.org/, 2005.
[27] Sun Microsystems Inc., Solaris Dynamic Tracing Guide. iUniverse Inc., 2005.
[28] G. van Rossum and F. L. Drake, An Introduction to Python - The Python Tutorial. Network Theory Ltd, 2006.
[29] The Apache Software Foundation, “The Apache XML Project.” http://xml.apache.org/, 2006.
[30] J. Rosenberg, H. Schulzrinne, G. Camarillo, A. Johnston, J. Peterson, R. Sparks, M. Handley, and E. Schooler, “SIP: Session Initiation Protocol.” http://www.ietf.org/rfc/rfc3261.txt, 2002.
[31] H. Schulzrinne, S. Narayanan, J. Lennox, and M. Doyle, “SIPstone - Benchmarking SIP Server Performance.” http://www.sipstone.org/.
[32] HP Labs, “The SIPp performance test tool for SIP protocol.” http://sipp.sourceforge.net/, 2007.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/27677
dc.description.abstract: Server applications are an important application domain, and performance evaluation is key to the success of the whole system. In this thesis, we propose and implement a tool framework for evaluating the performance of server applications.
The framework consists of two parts: a profiling tool and a simulator. The profiling tool analyzes program behavior using DTrace; a new tool that exploits the CPU performance counters was also implemented and added to the framework. The simulator is built with SystemC and can simulate the behavior of programs and hardware platforms at different levels of abstraction. Its parameterized design allows new platforms to be tested without recompilation. Application and system models are described in portable XML, which makes integration between the tools much easier.
Our tools analyze runtime behavior not only at the application level but also at the operating-system level. They can be used to evaluate program scalability, find potential performance bottlenecks, and examine system resource utilization, even when the program's source code is unavailable. [zh_TW, translated]
dc.description.abstract: I/O-intensive server applications are among the most important classes of applications, and performance evaluation is key to their success. In this thesis, a performance profiling and simulation framework for I/O-intensive server applications is proposed and implemented.
We implemented this framework as a set of tools based on DTrace, including a new performance tool that we integrated to take advantage of the CPU performance counters. Our simulator is based on SystemC; it simulates application execution and reveals locking behavior on a given platform. The simulator is parameterized and requires no recompilation to simulate different applications and platforms: models are human-readable XML, which enables integration by linking models generated from different tools.
Our experimental results show that the proposed framework provides useful insights into I/O-intensive server applications, with information on the runtime behavior of both the application and kernel levels. The framework can be used to evaluate scalability, locate possible bottlenecks and contention, and measure resource utilization for any given application, even without its source code. [en]
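The portable XML model description mentioned in the abstract can be illustrated with a small sketch. The schema below (element and attribute names, task names, cycle counts) is hypothetical, invented for illustration; it does not reproduce the thesis's actual model format.

```python
# Hypothetical application model in the spirit of the XML-based descriptions
# the abstract mentions; the schema and all values are invented.
import xml.etree.ElementTree as ET

APP_MODEL = """\
<application name="sip-server">
  <task id="parse_request" cycles="1200"/>
  <task id="lookup_user" cycles="4500" syscall="read"/>
  <task id="send_response" cycles="800" syscall="write"/>
</application>
"""

def load_tasks(xml_text):
    """Parse an application model into (task id, cycle cost) pairs."""
    root = ET.fromstring(xml_text)
    return [(t.get("id"), int(t.get("cycles"))) for t in root.findall("task")]

tasks = load_tasks(APP_MODEL)
print(tasks)
```

Keeping the application description in a plain, portable text format like this is what allows a profiler to emit a model that a simulator can consume without recompiling either tool.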
dc.description.provenance: Made available in DSpace on 2021-06-12T18:15:05Z (GMT). No. of bitstreams: 1
ntu-96-R94922125-1.pdf: 10118316 bytes, checksum: c6fe7646b28447393c433e7b9e1baae9 (MD5)
Previous issue date: 2007 [en]
dc.description.tableofcontents:
1 Introduction
1.1 Performance Evaluation and Design of a Computer System
1.2 Performance Evaluation Methods
1.2.1 Benchmarks
1.2.2 Simulators
1.2.3 Profilers
1.3 I/O Intensive Server Systems
1.3.1 CMP and CMT Architectures
1.3.2 Application Characteristics
1.3.3 Challenges for Developing Server Applications
1.4 Research Motivation and Contribution
2 Related Performance Profiling and Simulation Tools
2.1 DTrace
2.2 CPU Performance Counters
2.3 SystemC
3 Proposed Profiling and Simulation Framework
3.1 Design Guidelines
3.1.1 Modeling server application behavior
3.1.2 Separation of application and platform models
3.1.3 Portable model description format
3.1.4 Simulation speed
3.1.5 Configurable granularity
3.1.6 Provide information for both application and kernel level
3.2 Modeling Methodology
3.2.1 Task blocks
3.2.2 Application Modeling
3.2.3 Platform Modeling
3.3 Profiling
3.3.1 Application and System Call Tracing
3.3.2 System Call Profiling
3.4 Simulation
3.4.1 Overview
3.4.2 Application Processes
3.4.3 Kernel Processes
3.4.4 Architecture
3.4.5 Output
4 Demonstration and Evaluation
4.1 Case study: SIP Server Registration
4.1.1 Performance Scalability
4.1.2 Execution Time Distribution
4.1.3 Lock Contention Analysis
4.1.4 Data Cache Access Histogram
5 Conclusion and Future Work
References
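The kind of analysis the framework targets — scalability limits from lock contention (Sections 3.2, 3.4, and 4.1.3 in the table of contents above) — can be sketched with a toy schedule. This is not the thesis's SystemC simulator: it is a minimal illustration in which each request does some parallelizable compute plus a short critical section behind one global lock, so adding cores stops helping once the lock saturates. All function names and numbers are invented for illustration.

```python
# Toy sketch of lock-contention-limited scaling (not the thesis's simulator):
# each request runs `compute` time units on any free core, then `locked`
# units that are serialized through a single global lock.

def simulate(n_cores, n_requests, compute=4.0, locked=1.0):
    """Greedy schedule; returns the makespan (time the last request finishes)."""
    cores = [0.0] * n_cores  # time each core next becomes free
    lock_free = 0.0          # time the global lock next becomes free
    makespan = 0.0
    for _ in range(n_requests):
        i = min(range(n_cores), key=cores.__getitem__)   # earliest-free core
        lock_start = max(cores[i] + compute, lock_free)  # wait for the lock
        done = lock_start + locked
        cores[i] = lock_free = done
        makespan = max(makespan, done)
    return makespan

for n in (1, 2, 4, 8, 16):
    print(n, simulate(n, 100))  # speedup flattens once the lock saturates
```

Running it shows the pattern the thesis's case study looks for: near-linear speedup at low core counts, then a plateau set by the serialized critical section, which is exactly the bottleneck a lock-contention analysis would flag.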
dc.language.iso: en
dc.subject: 模擬 [zh_TW]
dc.subject: 效能分析 [zh_TW]
dc.subject: 程式分析 [zh_TW]
dc.subject: performance evaluation [en]
dc.subject: simulation [en]
dc.subject: program profiling [en]
dc.title: 系統層級的效能量測與評估框架 [zh_TW]
dc.title: System-Level Performance Profiling and Simulation Framework for I/O-Intensive Applications [en]
dc.type: Thesis
dc.date.schoolyear: 95-2
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 郭大維, 廖世偉, 王勝德
dc.subject.keyword: 效能分析, 程式分析, 模擬 [zh_TW]
dc.subject.keyword: performance evaluation, program profiling, simulation [en]
dc.relation.page: 38
dc.rights.note: 有償授權 (licensed for a fee)
dc.date.accepted: 2007-08-30
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering) [zh_TW]
Appears in Collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in This Item:
File: ntu-96-1.pdf (Restricted Access), 9.88 MB, Adobe PDF