Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/27750

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 洪士灝(Shih-Hao Hung) | |
| dc.contributor.author | Yi-Di Lin | en |
| dc.contributor.author | 林以迪 | zh_TW |
| dc.date.accessioned | 2021-06-12T18:18:40Z | - |
| dc.date.available | 2007-09-03 | |
| dc.date.copyright | 2007-09-03 | |
| dc.date.issued | 2007 | |
| dc.date.submitted | 2007-08-28 | |
| dc.identifier.citation | [1] SPECweb2005, http://www.spec.org/web2005/ [2] TPC – Transaction Processing Performance Council, http://www.tpc.org/ [3] B. M. Cantrill, M. W. Shapiro, and A. H. Leventhal. Dynamic instrumentation of production systems. In Proceedings of the 6th Symposium on Operating Systems Design and Implementation, 2004. [4] S. L. Graham, P. B. Kessler, and M. K. McKusick. Gprof: a call graph execution profiler. In Proceedings of the 1982 SIGPLAN Symposium on Compiler Construction, pages 120-126, Boston, Massachusetts, June 1982. [5] C.-K. Luk, R. Cohn, R. Muth, H. Patil, A. Klauser, G. Lowney, S. Wallace, V. J. Reddi, and K. Hazelwood. Pin: building customized program analysis tools with dynamic instrumentation. In Proceedings of the 2005 ACM SIGPLAN Conference on Programming Language Design and Implementation, Chicago, IL, June 2005. [6] N. Nethercote and J. Seward. Valgrind: a program supervision framework. In Proceedings of the 3rd Workshop on Runtime Verification, 2003. http://valgrind.kde.org/ [7] N. Nethercote. Dynamic Binary Analysis and Instrumentation. PhD thesis, University of Cambridge, United Kingdom, November 2004. [8] Performance Analysis, http://en.wikipedia.org/wiki/Performance_analysis [9] Intel VTune Performance Analyzer, http://www.intel.com/software/product/vtune [10] The Sun Studio Performance Tools, http://developers.sun.com/sunstudio/articles/perftools.html [11] G. Ammons, T. Ball, and J. R. Larus. Exploiting hardware performance counters with flow and context sensitive profiling. In SIGPLAN Conference on Programming Language Design and Implementation, pages 85-96, 1997. [12] Solaris Dynamic Tracing Guide, http://docs.sun.com/app/docs/doc/817-6223 [13] Sun Studio Performance Analyzer, http://developers.sun.com/sunstudio/analyzer_index.html [14] Function-Level Metrics: Exclusive, Inclusive, and Attributed, http://docs.sun.com/source/816-2458/Data.html [15] SIP: Session Initiation Protocol, http://www.faqs.org/rfcs/rfc3261.html [16] Performance Tests, http://www.openser.org/docs/openser-performance-tests/ [17] OpenSER - the Open Source SIP Server, http://www.openser.org/ [18] Tutorials for understanding how to use Graphviz, http://www.graphviz.org [19] M. Arnold and P. F. Sweeney. Approximating the calling context tree via sampling. IBM Research Report, July 2000. [20] K. Yaghmour and M. R. Dagenais. Measuring and characterizing system behavior using kernel-level event logging. In Proceedings of the USENIX Annual Technical Conference, June 2000. [21] G. Ammons, T. Ball, and J. R. Larus. Exploiting hardware performance counters with flow and context sensitive profiling. In Proceedings of the ACM SIGPLAN 1997 Conference on Programming Language Design and Implementation, pages 85-96, Las Vegas, Nevada, June 1997. [22] L. Eeckhout, S. Nussbaum, J. E. Smith, and K. De Bosschere. Statistical simulation: adding efficiency to the computer designer's toolbox. IEEE Micro, Sept./Oct. 2003. [23] Valgrind, http://en.wikipedia.org/wiki/Valgrind [24] Call Graph Limitations, http://support.intel.com/support/performancetools/vtune/linux/v8/sb/CS-021427.htm | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/27750 | - |
| dc.description.abstract | 基準測試(Benchmarking)經常用來評估一個伺服器的效能。然而,基準測試所提供的資訊往往不夠詳盡, 也不足以用來幫助工程師了解並改進伺服系統上的軟硬體效能。在這篇論文中,我們發展了一套軟體剖析(profiling)工具幫助使用者評估伺服軟體的效能並建立其效能模型。這套方法由下列三個步驟組成:追蹤收集(Trace Collection)、追蹤分析(Trace Analysis)及模型建立(Model Generation)。利用 DTrace, 我們發展了一套追蹤收集工具,可以從應用軟體中收集我們感興趣的事件序列(event sequences, trace)。我們設計出一套程序(scripts),幫助使用者建立應用軟體的效能模型。為了改進我們分析方法的準確度,我們提出了減少 DTrace 造成的額外負擔的方法。實驗結果顯示,我們的分析工具比 gprof更詳盡,並且提供準確的時間資訊。 | zh_TW |
| dc.description.abstract | Benchmarking is commonly used to evaluate the performance of a server. However, without detailed information, benchmarking provides very little help for engineers to understand and improve the hardware and software on the server system. In this thesis, we develop a profiling toolkit that helps users evaluate and model the performance of the server applications. Our evaluation and modeling approach is composed of three phases: Trace Collection, Trace Analysis, and Model Generation. We build a trace collector based on the Solaris DTrace tool for collecting the interested event sequences (traces) from a target application. A set of scripts is then applied to analyze the traces and to help the user transform the traces into the application model. We also develop a method to improve the accuracy of this approach by estimating and reducing the instrumentation overhead caused by DTrace. Our experimental results show that our approach reveals more details than gprof and provides accurate timing information. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-12T18:18:40Z (GMT). No. of bitstreams: 1 ntu-96-R94922087-1.pdf: 935572 bytes, checksum: 28616b3231bbed59adbdbb99e5d522d8 (MD5) Previous issue date: 2007 | en |
| dc.description.tableofcontents | 中文摘要 i Abstract ii Table of Contents iii List of Figures v List of Tables vi Chapter 1 Introduction 1 1.1 Background 1 1.2 Problems 1 1.3 Objective and Contribution 2 1.4 Thesis Organization 2 Chapter 2 Related Works 4 2.1 Dynamic Binary Instrumentation Tools 4 2.2 Profiling Tools 5 Chapter 3 Modeling & Analysis Methodology 8 3.1 Overview 8 3.2 Trace Collection 9 3.3 Trace Analysis 13 3.3.1 Calling Context Analysis 13 3.3.2 Performance Metrics Analysis 18 3.4 Model Generation 20 3.4.1 Analysis for Modeling 21 3.4.2 Model Description 24 Chapter 4 Instrumentation Overhead 27 4.1 Overhead Estimation for a Probe 27 4.2 Overhead Elimination for Performance Metrics 30 Chapter 5 Experimental Evaluation 32 5.1 SIP and SIP Scenarios 32 5.2 Experiment Environment and Setup 33 5.3 Experiment Results 34 5.3.1 Registration Scenario Analysis 34 5.3.2 Proxying Scenario Analysis 37 5.3.3 Maximum Call Rate Estimation 42 Chapter 6 Conclusion and Future Work 44 References 46 | |
| dc.language.iso | en | |
| dc.subject | 追蹤分析 | zh_TW |
| dc.subject | 效能模型 | zh_TW |
| dc.subject | 剖析工具 | zh_TW |
| dc.subject | 效能評估 | zh_TW |
| dc.subject | trace analysis | en |
| dc.subject | performance model | en |
| dc.subject | DTrace | en |
| dc.subject | performance evaluation | en |
| dc.subject | profiling tool | en |
| dc.title | 利用 DTrace 在 Solaris 系統上以自動化方式建立應用軟體的效能模型與分析 | zh_TW |
| dc.title | Automating Server Application Performance Modeling Process on Solaris System via D-Trace and Trace-driven Analysis | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 95-2 | |
| dc.description.degree | 碩士 (Master's) | |
| dc.contributor.oralexamcommittee | 王勝德(Sheng-De Wang),郭大維(Tei-Wei Kuo),廖世偉(Shih-Wei Liao) | |
| dc.subject.keyword | 效能模型,效能評估,剖析工具,追蹤分析 | zh_TW |
| dc.subject.keyword | DTrace,performance model,performance evaluation,profiling tool,trace analysis | en |
| dc.relation.page | 47 | |
| dc.rights.note | 有償授權 (paid-license access) | |
| dc.date.accepted | 2007-08-28 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 資訊工程學研究所 | zh_TW |
| Appears in Collections: | 資訊工程學系 | |
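The abstract above describes a three-phase approach (Trace Collection, Trace Analysis, Model Generation) in which function-level event sequences are collected with DTrace and then reduced to timing metrics such as the inclusive and exclusive times cited in reference [14]. As an illustrative sketch only, not code from the thesis itself: assuming a trace collector emits records of the hypothetical form `(timestamp_ns, "entry"|"return", function_name)`, the per-function timing step of the Trace Analysis phase could be computed with a call stack like this:

```python
# Illustrative sketch of inclusive/exclusive time analysis over a
# function entry/return event trace. The trace format is an assumption,
# not the thesis's actual format.
from collections import defaultdict

def analyze(trace):
    inclusive = defaultdict(int)  # time in a function including its callees
    exclusive = defaultdict(int)  # time in a function excluding its callees
    stack = []                    # active calls: [function_name, entry_ts]
    last_ts = None
    for ts, event, fn in trace:
        if stack:
            # Time since the previous event belongs to the function
            # currently executing (top of the call stack).
            exclusive[stack[-1][0]] += ts - last_ts
        if event == "entry":
            stack.append([fn, ts])
        else:  # "return"
            top, entry_ts = stack.pop()
            inclusive[top] += ts - entry_ts
        last_ts = ts
    return dict(inclusive), dict(exclusive)
```

For example, a trace where `main` runs from t=0 to t=50 and calls `foo` from t=10 to t=30 yields an inclusive time of 50 for `main` but an exclusive time of 30, since 20 units are attributed to `foo`.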
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-96-1.pdf (Restricted Access) | 913.64 kB | Adobe PDF |
Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
