Table of Contents
OProfile is a profiling system for Linux 2.2/2.4 systems on most x86 processors. It is capable of profiling all parts of a running system, from the kernel (including modules and interrupt handlers) to shared libraries to binaries. It runs transparently in the background, collecting information at low overhead. These features make it ideal for profiling entire systems in production environments to determine bottlenecks in real-world systems.
You'll need to have a configured kernel source for the current kernel to build the module. It is also recommended that if you have a uniprocessor machine, you enable the local APIC / IO_APIC support for your kernel (this is automatically enabled for SMP kernels). On machines with power management, such as laptops, the power management must be turned off when using OProfile. The power management software in the BIOS cannot handle the non-maskable interrupts (NMIs) used by OProfile for data collection.
This section gives a brief description of the available oprofile utilities and their purpose.
You should stop the profiler using this script. The profiler will collect all the data remaining to be processed, and quit.
This causes the profiler to process all pending information.
This is the main tool for retrieving useful profile data, described in Chapter 4.
This utility is useful for examining the relative profile values for all images on the system to determine the applications with the largest impact on system performance.
This utility can be used to produce annotated source, assembly or mixed source/assembly. Source level annotation is available only if the application was compiled with debugging symbols. See Section 3.
This utility is useful for merging sample files which belong to the same application, especially when you have profiled with separate samples for shared libraries. See Section 4.
Table of Contents
Before getting into detail about usage, it's probably a good idea to have a quick stroll through an example session (this example is for Intel processors not AMD, but the process is the same).
First we need to start the profiler running in the background. We need to pass the correct vmlinux file to the daemon (to allow kernel profiling), and we need to specify what event to count and the counter value. Here I've started with :
op_start --vmlinux=/boot/2.4.0ac12/vmlinux --ctr0-event=CPU_CLK_UNHALTED --ctr0-count=600000
Here we've enabled counter 0 to count CPU_CLK_UNHALTED (the number of cycles for which the CPU is not halted) events, with a count value of 600,000. This event is useful because the resulting profiles generally correspond to time-spent profiles for functions etc.
A quick ps ax confirms that the daemon (oprofiled) has started. Data is now being collected in the kernel. Now we can do whatever we like ... although in this case I'm profiling the C++ application LyX. Note that unlike gprof, no instrumentation (-pg and -a options to gcc) is necessary. This is a major factor in achieving the low overhead of OProfile. Compiling with debug symbols (the -g option) is not necessary to get a basic function-based profile listing, but it must be used in order to retrieve line number information and create annotated source.
Rather than wait for the buffers to fill up, I now force the profiling data to be processed with :
op_dump
which will ask the kernel module to dump as much data as it can to the daemon.
Forcing a dump like this can cause the daemon to become very busy, especially the first time it is done. Don't worry: this is not the normal steady-state behaviour, so if you are profiling over a longer period of time such spikes won't appear.
I can now ask for a symbol-based summary of the sample profile :
oprofpp --demangle -l ./lyx >oprof.out
This can be quite slow on large binaries, so sit tight. As it's a C++ program, I asked for the symbols to be demangled to a readable form. Examining the output will give the symbols against which the most hits were registered. In this case I got :
...
Row::par(void)[0x0813ab54]: 5.4079% (472 samples)
LyXText::GetRow(LyXParagraph *, int, int &) const[0x08170a4c]: 5.5683% (486 samples)
LyXParagraph::GetFontSettings(BufferParams const &, int) const[0x08145420]: 5.7516% (502 samples)
Row::next(void) const[0x0813ac24]: 15.4904% (1352 samples)
at the top. Note that over a longer run (or with a lower ctr0-count value) the number of samples will be much more statistically reliable. Note that these sample counts do not necessarily reflect the relative amounts of time spent in each function - it depends on the event being counted. In this case we used CPU_CLK_UNHALTED which the command op_help tells us is "clocks processor is not halted", so in fact is likely to represent the relative time spent accurately (in fact, experiments have shown that using this event is far more accurate than the values produced by gprof).
If you're more used to gprof style profile output, you can use oprofpp -g gmon.out and then gprof -p binary to get flat profiles. OProfile does not (cannot) support the call graph feature of gprof.
In this section the configuration and startup of the profiler is discussed in more depth.
A shell script op_start is provided to set up the correct environment, insert the kernel module, and start up the profiler daemon. OProfile stores its relevant files in /var/lib/oprofile. Of most interest are the oprofiled.log log file and the samples/ directory. The samples directory contains the actual sample profile files created by the daemon. Despite their apparent size they take up much less actual disk space, as they are created sparsely (stat or du [-h] will tell you their real on-disk size). Note that this means we strongly discourage using a filesystem without proper support for sparse files, including vfat and many network filesystems. Each filename corresponds to the profiled binary image (with / characters replaced with } characters). In addition, each filename has a suffix indicating the counter number. The man page for op_start details all the options; only the interesting ones are listed here :
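The filename mangling can be reproduced in shell. The / to } replacement is as described above; the #<counter> suffix format used here is an assumption for illustration, since only "a suffix indicating the counter number" is specified:

```shell
# Reproduce the daemon's sample-file name mangling: '/' becomes '}',
# and (assumed format) the counter number is appended after a '#'.
mangle() {
    echo "$(echo "$1" | tr '/' '}')#$2"
}
mangle /usr/lib/ld-2.1.2.so 0
# prints: }usr}lib}ld-2.1.2.so#0
```

This matches the }usr}lib}ld-2.1.2.so style of name you will see in /var/lib/oprofile/samples/.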
This gives a short list of the hardware events that are countable (see Section 5.1). The meaning of options relating to the counters themselves is also detailed in that section.
This is the number of entries in the kernel-side profiling buffer. Generally the default value is fine: you might want to change this on low-memory machines, or if you are doing very detailed profiling. Each entry in the buffer takes 8 bytes.
This is the number of entries in the kernel-side profiling hash table. Generally the default value is fine: you might want to change this on low-memory machines, or if you are doing very detailed profiling. Each entry in the hash table takes 32 bytes (4 samples for each entry).
This is the number of entries in the kernel note table. Generally the default value is fine: you might want to change this on low-memory machines, or if you are doing very detailed profiling. Each entry in the buffer takes 20 bytes.
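As a rough sanity check on memory use, the per-entry costs quoted for the three tables (8, 32 and 20 bytes) can simply be multiplied out. The entry counts below are made-up illustrative values, not the actual defaults:

```shell
# Rough kernel-side memory footprint for illustrative (not default) sizes,
# using the per-entry costs quoted above: 8, 32 and 20 bytes.
BUFFER_ENTRIES=65536
HASH_ENTRIES=16384
NOTE_ENTRIES=8192
TOTAL=$(( BUFFER_ENTRIES * 8 + HASH_ENTRIES * 32 + NOTE_ENTRIES * 20 ))
echo "$(( TOTAL / 1024 )) KB"
# prints: 1184 KB
```

On low-memory machines you can scale the three --*-size options down with the same arithmetic in mind.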
Default is to profile both user-space and the kernel. You can profile only the kernel with this option; this does not prevent the occasional user-space sample due to hardware constraints, but it reduces the overhead considerably.
Specify the vmlinux file from the current kernel's compile. This must match the running kernel if you expect meaningful profiles of the kernel. Note that this is a separate file from your kernel image (vmlinuz); you must specify the vmlinux file created during the kernel build in order to profile the kernel.
Separate samples for each distinct application. With this option, samples in shared libraries are stored in a separate sample file specific to the primary binary image (e.g. /bin/cp) that the library is mapped into. This feature is not supported for process-context kernel samples, and incurs a performance penalty.
Only samples of this process id will be collected (including any kernel-side samples when this process is in the kernel). Note that threaded programs under Linux have a different process id for each thread.
Only samples of this process tty group id will be collected (including any kernel-side samples when this process is in the kernel).
This makes the daemon very verbose in its logfile. Don't use this unless you need it as the overhead of logging the data is significant. It is however useful for determining profiler bugs (believe me ;)
The runtime profiler system consists of two components: a kernel module (oprofile) and a user-space daemon process (oprofiled). The kernel module collects sample data into the hash table and buffer, and wakes up the daemon process when it is approaching full. The daemon will read this data, and process it into a non-volatile form. Any samples are recorded into the sample files at processing time.
The op_start shell script will insert the kernel module if needed. The profiling is activated when the daemon process initialises. Configuration of the kernel module parameters is done via sysctl; the available files are detailed in Section 5.3.
This section describes the oprofile Qt-based interface.
The oprof_start application provides a convenient way to start the profiler. Note that oprof_start is just a wrapper around the op_start script, so it does not provide more services than the script itself.
After oprof_start is started you can select the event type for each counter; the sampling rate and other related parameters are explained in Section 2. The "Configuration" section allows you to set general parameters such as the buffer size, kernel filename etc. The counter setup interface should be self-explanatory; Section 5.1 and related links contain information on using unit masks.
A status line shows the current status of the profiler: how long it has been running, and the average number of interrupts received per second and the total, over all processors. Note that quitting oprof_start does not stop the profiler.
Your configuration is saved when you quit the GUI in two files in the ~/.oprofile directory: oprof_start_config and oprof_start_event. These contain the general configuration and the event/counter setup, respectively.
It can often be useful to split up profiling data into several different time periods. For example, you may want to collect data on an application's startup separately from the normal runtime data. You can use the simple tool op_session to do this. For example :
op_session run1
will create a sub-directory containing the samples collected up to that point (the current session's sample files are moved into this directory). You can then pass this name as, for example, a parameter to op_time to get data only up to the point at which you named the session.
Your CPU type may not include the requisite support for hardware performance counters, in which case you must use OProfile in RTC mode: see Section 5.2.
The hardware performance counters are detailed in the Intel IA-32 Architecture Manual, Volume 3, available from http://developer.intel.com/. The AMD Athlon/Duron implementation is detailed in http://www.amd.com/products/cpg/athlon/techdocs/pdf/22007.pdf. These processors are capable of delivering an interrupt to the local APIC LVTPC vector when a counter overflows. This is the basic mechanism on which OProfile is based. The kernel module installs an interrupt handler for this vector. The delivery mode is set to NMI so that blocking interrupts in the kernel does not prevent profiling. When the interrupt handler is called, the current EIP value, process id, and counter number are recorded into the profiling structure. This allows the overflow event to be attached to a specific assembly instruction in a binary image. The daemon is necessary to transform these recorded values into a count against a file offset for a given binary image, in order to produce profile data off-line at a later time.
If we use an event such as CPU_CLK_UNHALTED or INST_RETIRED, we can use the overflow counts as an estimate of actual time spent in each part of code. Alternatively we can profile interesting data such as the cache behaviour of routines with the other available counters.
However there are several caveats. Firstly, there are the issues listed in the Intel manual. There is a delay between the counter overflow and the interrupt delivery that can skew results on a small scale - this means you cannot rely on the profiles at the instruction level, except as a binary was/wasn't-executed indicator. If you are using an "event-mode" counter such as the cache counters, a count registered against an instruction doesn't mean that instruction is responsible for the event; it implies only that the counter overflowed in the dynamic vicinity of that instruction, to within a few instructions. Further details on this problem can be found in Chapter 5 and also in the Digital paper "ProfileMe: Hardware Support for Instruction-Level Profiling on Out-of-Order Processors". Also note that a very high number of interrupts can have a large performance effect, and can even overflow the profiling data structures. This can lead to mapping information getting overwritten, and loss of respect from a boxing promoter (don't worry, an obscure reference). System stability will never be affected, but profiling may not be able to work properly. An error message from the kernel module will appear in your system log files if this situation occurs.
As described in the Intel manual, each counter, as well as being configured to count an event type, has several more configuration parameters. First, there is the unit mask: this simply further specifies what to count. Second, there is the counter value, discussed below. Third, there is a parameter whether to increment counts whilst in kernel or user space. You can configure these separately for each counter.
You must specify a counter value with the --ctrX-count option, where X is the logical counter number in the range 0-3 (0-1 for Intel processors, which have only two counters). Using multiple counters is useful for profiling several aspects of the same running program. After each overflow event, the counter is re-initialised such that another overflow will occur after this many events have been counted. Picking a good value for this parameter is, unfortunately, somewhat of a grey art (not quite black). It is of course dependent on the event you have chosen. For basic time-based profiling, you will probably use CPU_CLK_UNHALTED (on Intel). With this event, you can estimate how many interrupts a given value will generate per second by dividing your CPU clock rate by the chosen value. I have a 600MHz Celeron, so specifying an overflow value of 100,000 will generate around 6,000 interrupts per second. Specifying too large a value will mean not enough interrupts are generated to give a realistic profile (though this problem can be ameliorated by profiling for longer). Specifying too small a value can lead to the overflow problems discussed previously.
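The estimate is just a division; here it is as a quick shell check, using the 600MHz CPU and the --ctr0-count=600000 value from the walkthrough:

```shell
# Estimate the CPU_CLK_UNHALTED interrupt rate: CPU clock rate divided by
# the counter reset value, per the rule of thumb above.
CPU_HZ=600000000      # 600MHz, as in the walkthrough
CTR_COUNT=600000      # --ctr0-count=600000 from the walkthrough
echo "$(( CPU_HZ / CTR_COUNT )) interrupts/sec"
# prints: 1000 interrupts/sec
```

Conversely, to target a particular interrupt rate, divide the clock rate by the desired rate to obtain the count value.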
Some CPU types do not provide the hardware support needed to use the hardware performance counters. This includes some laptops, classic Pentium processors, and CPU types not yet supported by OProfile (such as Cyrix and the Pentium IV). On these machines, OProfile falls back to using the real-time clock interrupt to collect samples. This interrupt is also used by the rtc module: you cannot have both the oprofile and rtc modules loaded, nor can the RTC support be compiled into the kernel.
RTC mode is less capable than the hardware counters mode; in particular, it is unable to profile sections of the kernel where interrupts are disabled. There is just one available event, "RTC interrupts", and its value corresponds to the number of interrupts generated per second (that is, a higher number means a better profiling resolution, and higher overhead). The current implementation of the real-time clock supports only power-of-two sampling rates from 2 to 4096 per second. Other values within this range are rounded to the nearest power of two.
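The rounding rule can be sketched in plain shell (an illustration of the behaviour described above, not the actual implementation; ties between two powers of two round down here, which is an assumption):

```shell
# Round a requested RTC sampling rate to the nearest supported
# power of two in the 2..4096 range.
rtc_round() {
    req=$1 best=2
    for p in 2 4 8 16 32 64 128 256 512 1024 2048 4096; do
        d1=$(( req > p ? req - p : p - req ))        # distance to candidate
        d2=$(( req > best ? req - best : best - req )) # distance to current best
        if [ "$d1" -lt "$d2" ]; then best=$p; fi
    done
    echo "$best"
}
rtc_round 300
# prints: 256
```

So asking for 300 samples per second actually gives you 256, which you can confirm by reading rtc_value after profiling has started.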
Setting the value from the GUI should be straightforward. On the command line, you need to specify the --rtc-value option to op_start, e.g. :
op_start --vmlinux=/boot/2.4.0ac12/vmlinux --rtc-value=256
Note the sysctl tree described in the next section is different when the RTC is being used. In particular, the file /proc/sys/dev/oprofile/rtc_value is used by the tools to set the desired RTC sampling rate, and will reflect the actual sampling rate after profiling has started.
When the kernel module loads, it generates a file hierarchy underneath /proc/sys/dev/oprofile. You can read and write to these files to give direct access to the kernel parameters.
With the exception of dump and dump_stop, any changes only take effect on restarting the profiler.
The buffer size, corresponding to the --buffer-size option.
The hash table size, corresponding to the --hash-table-size option.
The note table size, corresponding to the --note-table-size option.
Corresponding to the --kernel-only option.
Writing ASCII "1" to the file will initiate a sample data dump. Note: ignore the value "0" you get when reading the file - it is meaningless.
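For example, a hand-rolled equivalent of op_dump is simply a write to this file (a sketch; the path only exists while the oprofile module is loaded, so the function fails gracefully otherwise):

```shell
# Trigger a sample data dump by hand, equivalent to running op_dump:
# write ASCII "1" to the dump sysctl file.
op_dump_by_hand() {
    f=/proc/sys/dev/oprofile/dump
    if [ -w "$f" ]; then
        echo 1 > "$f"
    else
        echo "oprofile module not loaded" >&2
        return 1
    fi
}
```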
Writing to this file stops the profiler, processes all pending data, and stops the user-space daemon.
Read only; the number of total interrupts received on all processors since this file was last read. Used by the GUI.
Read only; used internally by the oprofile tools.
Each counter will have a directory containing files for that counter's settings. The rest of the files described here are per-counter.
The counter value for this counter.
Whether this counter is active.
The numeric event value. You can convert from symbolic event names to numeric values like so :
echo `op_help CPU_CLK_UNHALTED` >/proc/sys/dev/oprofile/0/event
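The idea behind op_help can be illustrated with a hard-coded lookup. The numeric values below are the P6-family event-select encodings from Intel's manual (0x79 and 0xC0), shown purely for illustration - on a real system always prefer op_help, which knows your CPU type:

```shell
# Illustrative stand-in for `op_help <event>`: map a symbolic event name
# to a numeric event value (P6-family encodings, decimal).
event_num() {
    case "$1" in
        CPU_CLK_UNHALTED) echo 121 ;;   # 0x79
        INST_RETIRED)     echo 192 ;;   # 0xc0
        *) echo "unknown event" >&2; return 1 ;;
    esac
}
event_num CPU_CLK_UNHALTED
# prints: 121
```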
Whether to profile the kernel.
The unit mask specified.
Whether to profile user-space.
OProfile is a low-level profiler which allows continuous profiling at low overhead. If not used carefully, however, it can affect the stability of the system. If too low a count reset value is set for a counter, the system can become overloaded with counter interrupts and appear to be frozen.
This can happen as follows: When the profiler count reaches zero an NMI handler is called which stores the sample values in an internal buffer, then resets the counter to its original value. If the count is very low, a pending NMI can be sent before the NMI handler has completed. Due to the priority of the NMI, the local APIC delivers the pending interrupt immediately after completion of the previous interrupt handler, and control never returns to other parts of the system. In this way the system seems to be frozen.
If this happens, it will be impossible to bring the system back to a workable state. There is no real safeguard against this happening, other than making sure to use a reasonable reset value for the counter. For example, setting the CPU_CLK_UNHALTED event type with a ridiculously low reset count (e.g. 500) is likely to freeze the system.
In short: don't try a foolish sample count value. Unfortunately the definition of a foolish value is really dependent on the event type - if ever in doubt, e-mail
Do I hear you shout "but my event value is low, but not stupid!"? Yes, this can be the case. In these circumstances, a simple solution is to disable kernel profiling by turning off the kernel option for each enabled counter. As the NMI handler is in-kernel, this avoids the problem.
There are situations where you are only interested in the profiling results of a particular running process, or process tty group. You can set the pid/pgrp values via the --pid-filter and --pgrp-filter options to op_start, which will make the daemon ignore samples for processes that don't match the filter.
The kernel module can be unloaded, but is designed to take very little memory when profiling is not underway. There is no need to unload the module between profiler runs.
lsmod and similar utilities will still show the module's use count as -1. However, this is not to be relied on - the module will become unloadable some short time after stopping profiling.
Note that by default module unloading is disabled on SMP systems. This is because of a small chance that a module unload race could crash the kernel. As this chance is very small, you can re-enable module unloading by specifying the "allow_unload" parameter to the module :
modprobe oprofile allow_unload=1
This option can be DANGEROUS and should only be used on non-production systems.
Table of Contents
OK, so the profiler has been running, but it's not much use unless we can get some data out. Fairly often, OProfile does a little too good a job of keeping overhead low, and no data reaches the profiler. This can happen on lightly-loaded machines. Remember you can force a dump at any time with :
op_dump
Remember to do this before complaining there is no profiling data! Now that we've got some data, it has to be processed. That's the job of oprofpp and op_to_source. These work on a sample file in the /var/lib/oprofile/samples/ directory, along with the binary file being profiled, to produce human-readable data. Note that if the binary file changes after the sample file was created, you won't be able to get useful data out; this situation is detected for you. Note that several instances of a binary are merged into one sample file. By default, all samples from a dynamically linked library are merged into one sample file as well.
A different scenario happens when re-starting profiling with different parameters: the old sample files from previous sessions are not deleted (allowing you to build profiles over many distinct profiling sessions). If the last session is determined to be out of date due to the use of different profiling parameters, all the sample files are backed up in a sub-directory named session-#nr. If during profiling the daemon detects a change to a binary image and a sample file belonging to this binary exists, the sample file is silently deleted. So if you change a binary during profiling, it is your responsibility to save the binary image and the sample files if you need them.
Note that kernel modules without symbol data (this can happen with some initrd setups) cannot be profiled (modules with symbols show up in /proc/ksyms).
All post-profile tools accept the following options :
Show the command line options.
Show the version number of oprofile in the form:
app_name: oprofile 0.1cvs compiled on Mar 1 2002 20:40:40
Oprofpp can be used in three major modes: list-symbol mode, detailed-symbol mode, or gprof mode. The first gives sorted histogram output of sample counts against functions, as shown in the walkthrough. The second can show individual sample counts against instructions inside a function, useful for detailed profiling, whilst the third mode is handy if you're used to gprof-style output. Note, however, that only flat gprof profiles are supported.
Some interesting options of the post-processor :
The samples file to use. By default, the current samples file for the given binary is used; this option can be used to examine older sample files.
The binary image (shared library, kernel vmlinux, or program) to produce data for.
Demangle C++ symbol names.
Which counter (0 - N) to extract information for. N is dependent on your cpu type: 1 for Intel CPUs, 3 for Athlon based CPUs.
List a histogram of sample counts against symbols. Each line shows the function name, its starting address, the relative percentage of hits across that image, and the absolute number of samples in this function.
Provide a detailed listing for the specified symbol name. This shows, for each sample, the position of the address, and the number of samples.
Dump output to the specified file in gprof format. If you specify gmon.out, you can then call gprof -p <binary>.
Provide a detailed listing for all symbols. Each line shows number of samples at the given address for all counters.
Show the function and line number for all samples. This requires that the image was compiled with debug symbols (-g), and is usable only with --list-all-symbols-details, --list-symbol and --list-symbols.
Comma-separated list of symbols to ignore. This can be useful to ignore the leading contributors to the sample histogram, as the percentage values are re-calculated.
Show the details for each shared library belonging to the given application. This option is useful only if you have profiled with the --separate-samples option and you specify on the oprofpp command line either --list-symbols or --list-all-symbols-details.
Specify the output format where a single format char is a field intended for: 'v' vma, 's' nr samples, 'S' nr cumulated samples, 'p' percent samples, 'P' cumulated percent samples, 'n' symbol name, 'l' source file name and line nr, 'L' ditto as 'l' but with basename of source file name, 'i' image name, 'I' ditto as 'i' but with base name of image name, 'd' details for each samples for the selected symbols and 'h' for the header itself. This option is not available with --dump-gprof-file.
op_to_source generates annotated source files or assembly listings, optionally mixed with source. If you want to see the source, the profiled application needs to have debug information, and the source must be reachable through that debug information, e.g. compile the application with -g under gcc.
Note that for the reasons explained in Section 5.1 the results can be somewhat inaccurate. The debug info itself can add other problems; for example, the line number for a symbol can be incorrect. Assembly instructions can be re-ordered and moved by the compiler, and this can lead to crediting source lines with samples not really "owned" by those lines. Also see Chapter 5.
The options allowed are :
Output assembly code. Currently the assembly code is sorted in increasing order on the vma address. The --sort-by-counter, --with-more-than-samples percent_nr and --until-more-than-samples percent_nr options can also be used with this option to provide filtering capabilities.
This option is used in conjunction with --output-dir. You can use it to specify the base directory of the source which you wish to produce annotated output for. With this option, any source files outside the directory (for example, system header files) are ignored.
Specify that you want to produce an annotated source tree, rather than getting all output to stdout. This creates a hierarchy of annotated source files, and is affected by the --source-dir, --output, and --no-output options.
Specify a set of comma-separated patterns for matching annotated source output filenames. If this option is present, a file is only output if it matches one of the given patterns (which apply to the filename and each component of the containing directory names). For example :
--output '*.c,user.h'
Specify a set of comma-separated patterns for filtering annotated source output filenames. If this option is present, a file is only output if it does not match one of the given patterns (which apply to the filename and each component of the containing directory names). For example :
--no-output 'boring.c,boring*.h'
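The matching rule used by --output and --no-output can be sketched as a shell glob test over each comma-separated pattern (an illustration of the behaviour, not the actual implementation):

```shell
# Try each comma-separated pattern as a shell glob against a name;
# succeed on the first match, as --output/--no-output do.
matches() {
    name=$1 patterns=$2
    set -f                      # keep the patterns literal while splitting
    oldifs=$IFS; IFS=','
    for pat in $patterns; do
        case "$name" in
            $pat) IFS=$oldifs; set +f; return 0 ;;
        esac
    done
    IFS=$oldifs; set +f
    return 1
}
matches user.h '*.c,user.h' && echo "user.h selected"
# prints: user.h selected
```

With --no-output the sense is simply inverted: a matching file is suppressed instead of selected.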
Output assembly code mixed with the source file, implies --assembly.
Pass the given strings directly to objdump as additional parameters. Check the objdump man page to see what options objdump accepts, e.g. -o '--disassembler-options=intel' to get Intel assembly syntax instead of AT&T syntax. This option can be used only with --assembly or --source-with-assembly.
Sort by decreasing number of samples on counter_nr. For assembly output this option provides only a filtering and not a sort order.
Output only source files which contain at least percent_nr percent of the samples. Cannot be combined with --until-more-than-samples.
Output source files until the cumulative amount of samples in these files reaches percent_nr percent. Cannot be combined with --with-more-than-samples.
Specify the samples file. At least one of --samples-file or --image-file must be specified.
Specify the image file.
Comma-separated list of symbols to ignore. This can be useful to ignore the leading contributors to the sample histogram, as the percentage values are re-calculated.
op_merge is used to merge sample files which belong to the same binary image. Its main purpose is to merge sample files created by profiling with --separate-samples, so you can create one sample file containing all samples for a shared library: op_merge /usr/lib/ld-2.1.2.so will create a sample file named }usr}lib}ld-2.1.2.so, ready to use with oprofpp or the other post-profiling tools. Additionally, you can merge a subset of sample files into one sample file by explicitly specifying the names of the sample files to merge. This allows using the post-profile tools on shared libraries for a subset of applications.
The options allowed are :
Use counter nr to select the appropriate sample files.
You can get a quick look at an overall summary of relative binary profiles using op_time. This utility displays the relative number of samples for each application profiled, sorted in decreasing order of sample count. So with op_time [--option] [image_name[,image_names]] you can get :
/lib/libc-2.1.2.so 19 32.7586%
/usr/X11R6/bin/XF86_SVGA 13 22.4138%
...
/usr/bin/grep 1 1.72414%
/usr/X11R6/lib/libXt.so.6.0 1 1.72414%
If you don't specify any image_name on the command line, op_time reports information about all profiled binary images. You can use shell wildcards, e.g. op_time /usr/bin/*. Currently you cannot use the quoted form op_time "/usr/bin/*".
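The reason the quoted form fails is ordinary shell behaviour: the wildcard must be expanded by the shell into separate arguments before op_time ever runs; quoting passes the single literal string instead. A quick demonstration:

```shell
# Unquoted: the shell expands the glob into one argument per file.
set -- /usr/bin/*
echo "unquoted: $# argument(s)"

# Quoted: op_time would receive the single literal string "/usr/bin/*".
set -- "/usr/bin/*"
echo "quoted: $# argument(s): $1"
```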
Options allowed are :
Use counter nr when sorting sample counts.
Show the details for each shared library belonging to an application. This option is useful only if you have profiled with the --separate-samples option.
Show details for each symbol in each profiled file.
Demangle GNU C++ symbol names.
Show the image name when specifying --list-symbols.
Sort by decreasing samples count instead of increasing count.
Specify an alternate list of paths in which to locate image files. This is useful if your sample file names do not match the image file names, for example for modules loaded at boot time through a ram disk.
Same as the option above, but searches the path list recursively for the image file name.
Specify the output format where a single format char is a field intended for: 'v' vma, 's' nr samples, 'S' nr cumulated samples, 'p' percent samples, 'P' cumulated percent samples, 'n' symbol name, 'l' source file name and line nr, 'L' ditto as 'l' but with basename of source file name, 'i' image name, 'I' ditto as 'i' but with base name of image name, 'd' details for each samples for the selected symbols and 'h' for the header itself. This option is available only with --list-symbols
Comma-separated list of symbols to ignore. This can be useful to ignore the leading contributors to the sample histogram, as the percentage values are re-calculated.
Table of Contents
Another grey art. The standard caveats of profiling come to mind: profile realistic situations, profile different scenarios, profile for as long a time as possible, avoid system-specific artifacts, and don't trust the profile data too much. Also bear in mind the comments on the performance counters above - you cannot rely on totally accurate instruction-level profiling. However, in almost all circumstances the data can be useful. Ideally a utility such as Intel's VTune would be available to allow careful instruction-level analysis; go hassle Intel for this, not me ;)
This is an example of how the latency of delivery of profiling interrupts can impact the reliability of the profiling data. This is pretty much a worst-case-scenario example: these problems are fairly rare.
double fun(double a, double b, double c)
{
    double result = 0;
    for (int i = 0 ; i < 10000; ++i) {
        result += a;
        result *= b;
        result /= c;
    }
    return result;
}
Here the last instruction of the loop is very costly, and you would expect the results to reflect that - but (cutting the instructions inside the loop):
$ op_to_source -a -w 10

 /* 9349      0.3788% */    8048394: fadd   %st(3),%st
 /* 22858     0.9261% */    8048396: fmul   %st(2),%st
 /* 687682   27.86%   */    8048398: fdiv   %st(1),%st
 /* 1747822  70.81%   */    804839a: decl   %eax
 /* 17    0.0006887% */     804839b: jns    8048394
The problem comes from the x86 hardware: when the counter overflows, the IRQ line is asserted, but the hardware has features that can delay the NMI interrupt. The x86 hardware is synchronous with respect to instructions (it cannot interrupt in the middle of an instruction, only at its end); there is also latency after the IRQ line is asserted, as the hardware can take some cycles to take the interrupt into account; and the multiple execution units and out-of-order execution model of the modern x86 family also cause problems. The following shows the same function at the source level:
$ op_to_source -a -w 10

double fun(double a, double b, double c)
/* fun(double, double, double) 2468162 100% */
 /*    165  0.006685% */ {
 /*      3 0.0001215% */ 	double result = 0;
	for (int i = 0 ; i < 10000; ++i) {
 /*   9349    0.3788% */ 		result += a;
 /*  22858    0.9261% */ 		result *= b;
 /* 687682     27.86% */ 		result /= c;
 /* 1747918    70.82% */ 	}
	return result;
 /*    187  0.007576% */ }
So the conclusion: don't trust samples coming at the end of a loop, particularly if the last instruction generated by the compiler is costly; this can also occur at each branch in your program. Always bear in mind that samples can often be delayed by a few cycles from their real position. That's a hardware problem and oprofile can do nothing about it.
The compiler can introduce some pitfalls in the annotated source output. The optimizer can move pieces of code around in such a manner that two lines of code are interleaved (instruction scheduling). Also, the debug info generated by the compiler can show strange behavior. This is especially true for complex expressions, e.g. inside an if statement:
if (a && .. b && .. c &&)
Here the problem comes from the position of the line numbers. The available debug info does not give enough detail for the if condition, so all samples are accumulated at the position of the right brace of the expression. Using op_to_source -a can help to show the real samples at the assembly level.
Often an assembler cannot generate debug information automatically; gas and nasm are examples of commonly used assemblers with this limitation. This means you cannot get a source report unless you manually define the necessary debug information; refer to your assembler documentation for that. The only debugging info currently needed by oprofile is the line number/filename to vma association. When profiling assembly without debugging info you can still get a report per symbol, and optionally per vma, through oprofpp -l or oprofpp -L, but this works only for symbols with the right attribute. For gas you can get this with
.globl foo
.type foo,@function
while for nasm you must use
GLOBAL foo:function ; [1]
Note that oprofile does not need the global attribute, only the function attribute. Users of gas and nasm must find the right way to avoid declaring the foo symbol global if necessary.
Another cause of apparent problems is the hidden cost of instructions. A very common example is two memory reads: one from L1 cache and the other from main memory. It's clear that the second memory read will show more samples, but there are many other causes of hidden instruction cost. A non-exhaustive list: mis-predicted branches, TLB cache misses, partial register stalls, partial register dependencies, memory mismatch stalls, re-executed µops. If you want to write programs at the assembly level, or you are writing a compiler, take a look at the Intel and AMD documentation at http://developer.intel.com/ and http://www.amd.com/products/cpg/athlon/techdocs/.
One of the major design criteria for OProfile was low overhead. In many cases profiling is hardly noticeable in terms of overhead (I regularly leave it turned on all the time). It achieves this by judicious use of kernel-side data structures to reduce the analysis overhead to a bare runtime minimum. There are several things that unfortunately complicate the issue, so there are cases where the overhead is noticeable.
The worst-case scenario is where there are many short-lived processes. This can be seen in a kernel compile, for instance. This leads to hash table clashes; clashes lead to faster buffer filling; buffer filling leads to higher overhead. Even in this worst case the overhead is low compared to other profilers; only very detailed profiling of these workloads has an overhead higher than 5%. Actual performance data is presented in the source distribution. In fact most situations involve far fewer processes, leading to much better performance.
Some graphs of performance characteristics of oprofile are available on the website - see Section 2.
Thanks to (in no particular order) : Arjan van de Ven, Rik van Riel, Juan Quintela, Philippe Elie, Phillipp Rumpf, Tigran Aivazian, Alex Brown, Alisdair Rawsthorne, Bob Montgomery, Ray Bryant, H.J. Lu, Jeff Esper, Will Cohen, Cliff Woolley, Alex Tsariounov, Al Stone, Richard Reich (rreich@rdrtech.com), Dave Jones, Charles Filtness; and finally Pulp, for "Intro".