%M% %I% %E%

The set of programs and documentation known as "lmbench" is distributed
under the Free Software Foundation's General Public License with the
following additional restrictions (which override any conflicting
restrictions in the GPL):

1. You may not distribute results in any public forum, in any publication,
   or in any other way if you have modified the benchmarks.

2. You may not distribute the results for a fee of any kind. This includes
   web sites which generate revenue from advertising.

If you have modifications or enhancements that you wish included in
future versions, please mail those to me, Larry McVoy, at lm@bitmover.com.

=========================================================================

Rationale for the publication restrictions:

In summary:

a) LMbench is designed to measure enough of an OS that if you do well in
   all categories, you've covered latency and bandwidth in networking,
   disks, file systems, VM systems, and memory systems.
b) Multiple times in the past people have wanted to report partial results.
   Without exception, they were doing so to show a skewed view of whatever
   it was they were measuring (for example, one OS fit small processes into
   segments and used the segment register to switch them, getting good
   results, but did not want to report large process context switches
   because those didn't look as good).
c) We insist that if you formally report LMbench results, you have to
   report all of them and make the raw results file easily available.
   Reporting all of them means in that same publication; a pointer
   does not count. "Formally," in this context, means in a paper,
   on a web site, etc., but does not mean the exchange of results
   between OS developers who are tuning a particular subsystem.

We have a lot of history with benchmarking and feel strongly that there
is little to be gained and a lot to be lost if we allowed the results
to be published in isolation, without the complete story being told.

There has been a lot of discussion about this, with people not liking this
restriction, more or less on the freedom principle as far as I can tell.
We're not swayed by that; our position is that we are doing the right
thing for the OS community and will stick to our guns on this one.

It would be a different matter if there were 3 other competing
benchmarking systems out there that did what LMbench does and didn't have
the same reporting rules. There aren't, and as long as that is the case,
I see no reason to change my mind and lots of reasons not to do so. I'm
sorry if I'm a pain in the ass on this topic, but I'm doing the right
thing for you, and the sooner people realize that, the sooner we can get
on to real work.

Operating system design is largely an art of balancing tradeoffs.
In many cases improving one part of the system has negative effects
on other parts of the system. The art is choosing which parts to
optimize and which not to optimize. Just like in computer architecture,
you can optimize the common instructions (RISC) or the uncommon
instructions (CISC), but in either case there is usually a cost to
pay (in RISC uncommon instructions are more expensive than common
instructions, and in CISC common instructions are more expensive
than required). The art lies in knowing which operations are
important and optimizing those while minimizing the impact on the
rest of the system.

Since lmbench gives a good overview of many important system features,
users may see the performance of the system as a whole, and can
see where tradeoffs may have been made. This is the driving force
behind the publication restriction: any idiot can optimize certain
subsystems while completely destroying overall system performance.
If said idiot publishes *only* the numbers relating to the optimized
subsystem, then the costs of the optimization are hidden and readers
will mistakenly believe that the optimization is a good idea. By
including the publication restriction, readers would be able to
detect that the optimization improved the subsystem performance
while damaging the rest of the system performance, and would be able
to make an informed decision as to the merits of the optimization.

Note that these restrictions only apply to *publications*. We
intend and encourage lmbench's use during design, development,
and tweaking of systems and applications. If you are tuning the
Linux or BSD TCP stack, then by all means, use the networking
benchmarks to evaluate the performance effects of various
modifications; swap results with other developers; use the
networking numbers in isolation. The restrictions only kick
in when you go to *publish* the results. If you sped up the
TCP stack by a factor of 2 and want to publish a paper with the
various tweaks or algorithms used to accomplish this goal, then
you can publish the networking numbers to show the improvement.
However, the paper *must* also include the rest of the standard
lmbench numbers to show how your tweaks may (or may not) have
impacted the rest of the system. The full set of numbers may
be included in an appendix, but they *must* be included in the
paper.

This helps protect the community from adopting flawed technologies
based on incomplete data. It also helps protect the community from
misleading marketing which tries to sell systems based on partial
(skewed) lmbench performance results.

We have seen many cases in the past where partial or misleading
benchmark results have caused great harm to the community, and
we want to ensure that our benchmark is not used to perpetrate
further harm and support false or misleading claims.