Gala's Wall of DEATH :D

OK, so today was amazing! In case I didn't tell y'all, I broke up with my GF almost two weeks ago, because no matter what she didn't want to be a Christian, and I was strongly against that and told her so. We broke up. /tear

But anywho, this girl that I like, but am really good friends with, is throwing a party, so I get to go to that tomorrow! You have no idea how excited I am that I'm going. Why? I don't know. Now, what shall I be? Hmmm, I should be a chicken, 'cause I like chicken!! But then again, I could be a ninja! Ooo, I have the perfect one: I could be a wall! Haha. I was thinking of a monk but decided against it. Why? I don't know! Ugh, I need to go get a costume and my dad's not home! Gar!

Hmm, sorry if we're raiding tomorrow, but I want to cut my WoW time down by A LOT, just because that way I'll focus on school and friends. For example, I found out I got a 99% on my horticulture midterm, the highest in the class. I was so proud! And I had a college scout tell me after the game yesterday that I could get a full ride as a strong safety and play on special teams. I'm so excited! But because of that, and because I have four years, I need to start working out a lot more and keeping in shape, and sitting at the computer isn't helping! I do enjoy raiding, though, soooo I'll still be on to raid and such, but I won't be on for 8 hours at a time, and if you see me on for that long, yell at me, k?

But yeah, today was fun and a good day! Oh, and I got my grade in biology up from a C to a high B. Don't know how, but I did! I'm so happy. Mwahaha.

Oh, and we were talking about Ivan the Terrible (reminds me of Goblit) and saying how bipolar he was. It's scary! Imagine your parents beating you with a scepter! I mean, come on, you beat your son to death, well, close to death, with a scepter! Ugh, that would suck! He threw his animals off tall buildings and tore birds' wings off! What kind of person is that? One wacko person. I still think it reminds me of Gobby.

Oh, by the way, the game FEAR is LAME. I hate it. It's horribly done and the storyline is really bad. But yeah, I'm talking randomly. And if someone reads all this, they have way too much free time! Like me, with no homework. OK, I'm going to go get a wall of spam and get back to you.
 
Located off the northwest tip of the Bird's Head Peninsula on the island of New Guinea, Raja Ampat, or "the Four Kings," is one of the richest areas of marine biodiversity ever found. The area contains more than 450 species of reef-building coral, while the entire Caribbean holds fewer than 70. With reefs around the world being destroyed or in decline, efforts to safeguard this treasure are all the more pressing. Raja Ampat is the crown jewel of a large area the Indonesian government is working to protect, named the Bird's Head Seascape.
The archipelago's reefs were first explored by a man named Max Ammer in 1990, and he established two resorts on the small nearby island of Kri. On one of his tours he guided an Australian ichthyologist, Gerry Allen, who was simply amazed by the sight of the coral and immediately contacted Conservation International (CI) to survey the area. On this survey they found over 970 different fish species.
Other areas around Raja Ampat were found to contain at least 56 new species. These reefs are still being explored, and new species continue to be discovered beyond what anyone ever expected.
Since the surveys went above and beyond what anyone expected, CI, the Nature Conservancy, and the World Wide Fund for Nature are attempting to create a 70,600-square-mile protected area named the Bird's Head Seascape. This area would protect Raja Ampat and its four main islands (also known as the four kings), along with some 2,500 islands and reefs, nearly 1,300 fish species, 600 coral species, 700 mollusks, sea turtle rookeries, and more. The one thing largely missing from this amazing area is sharks, which have been killed off by local hunters for shark-fin soup.
At the moment the Bird's Head Seascape is still not legally protected, but it is slowly but surely becoming so. Scientists and the government are also trying to make sure the surrounding villages do not harm the area any more than fishing already has. Hopefully this will allow the Bird's Head Seascape to be protected and explored to its fullest.
Troglobites is the technical name for the millipedes, spiders, worms, blind salamanders, and eyeless fish that can navigate, mate, and hunt amid perpetual darkness. Because they evolved in isolation and are unable to disperse, a species is often just a handful of individuals confined to one cave, or even one room of a cave. The existence of these creatures raises many questions. How did they get there, and when? How did they manage to survive?
Worldwide, roughly 90 percent of caves lack visible entrances and have never been discovered. Even in caves that have been well explored, new troglobites are still being found. About 7,700 troglobite species are known, and that is most likely only a small fraction of what remains to be found. Troglobites are able to survive for months without food because of their extremely slow metabolism, and they have adapted to their environment in many other ways; natural selection is seen as the reason these strange creatures are so well suited to their surroundings. Spiders in the pitch-black depths of caves, for example, have no eyes: since there isn't any light, vision isn't needed, so instead they have long, slender legs and a keen sensitivity to vibration.
Slowly, these amazing creatures are disappearing. The very adaptations that let them survive in nearly unlivable conditions make them so specialized that even a slight change in temperature or any other condition can kill them off. The question is whether they will be able to survive all the changes in the world today. Global warming, pollution, and the depletion of aquifers may each seem like a small threat on its own, but the combination of these pressures and the troglobites' sensitivity is a disaster waiting to happen. Only about 41 of these thousands of troglobite species are on the federal endangered or threatened list, even though, according to the Nature Conservancy, 95 percent of the species known in the United States alone are imperiled.

Flayer is a tool for dynamically exposing application innards for security testing and analysis. It is implemented on the dynamic binary instrumentation framework Valgrind [17] and its memory error detection plug-in, Memcheck [21]. This paper focuses on the implementation of Flayer, its supporting libraries, and their application to software security.
Flayer provides tainted, or marked, data flow analysis and instrumentation mechanisms for arbitrarily altering that flow. Flayer improves upon prior taint tracing tools with bit-precision. Taint propagation calculations are performed for each value-creating memory or register operation. These calculations are embedded in the target application's running code using dynamic instrumentation. The same technique has been employed to allow the user to control the outcome of conditional jumps and step over function calls.
Flayer's functionality provides a robust foundation for the implementation of security tools and techniques. In particular, this paper presents an effective fault injection testing technique and an automation library, LibFlayer. Alongside these contributions, it explores techniques for vulnerability patch analysis and guided source code auditing.
Flayer finds errors in real software. In the past year, its use has yielded the expedient discovery of flaws in security critical software including OpenSSH and OpenSSL.
1 Introduction

Vulnerabilities often lie undiscovered in software due to the complexity of the code paths leading to them. Recent tools attempt to understand these paths and modify running application code, detecting flaws ranging from undefined memory use [21] to signedness conversion errors [15] to unbounded memory access [32]. In addition, symbolic evaluation and analysis frameworks, like EXE [8] and SAGE [12], and other multiple execution path analysis tools [16], have begun to augment this effort through the automated generation of dangerous input. While execution path, or flow, analysis techniques have been in use for over three decades [7], practical analysis tools for white box testing and auditing scenarios have only recently become commonplace [15] [12] [8] [32] [19].
This paper presents Flayer, an execution flow analysis and modification tool, and a complementary fuzz testing [14] technique. Flayer is implemented as a plug-in to the dynamic binary instrumentation framework Valgrind [17] using core functionality from its memory error detection plug-in, Memcheck [21]. It traces the flow of tainted, or marked, input data through an application during execution and logs the traversal of conditional jumps and system calls. Recent works, such as autodafé [32] and Byakugan [19], also rely on understanding input flow through a process. However, these tools use input pattern matching techniques for taint tracing which lack the accuracy of Flayer's dynamic binary instrumentation based approach. Flayer improves on existing taint tracing software, like TaintCheck [18] and Catchconv [15], through the addition of bit-precise taint propagation. This precision allows for taintedness to propagate into bitfields and bit arrays creating a more accurate view of the impact input has on an application's execution. Furthermore, Flayer is not solely a taint tracing tool. It also provides the ability to redirect the flow irrespective of input. Flayer can instrument the outcome of conditional jumps and function calls in the execution path based on user-supplied arguments. In addition, a library for automated execution and output processing, LibFlayer, is available for use along with an interactive shell interface, FlayerSh, for easy human interaction.
The application of Flayer's flow tracing and alteration functionality, flaying, provides a means to directly expose code obscured behind complex code paths for direct testing. This approach combined with random fuzz testing results in a lightweight, yet effective testing technique.
1.1 Paper structure

The remainder of this paper discusses Flayer, its implementation and applications. Section 2 covers the detailed implementation of Flayer. Section 3 introduces a new fuzz testing technique. Section 4 discusses other techniques enabled through the use of Flayer and its supporting libraries. Section 5 provides real world experiences where the presented software and techniques have successfully discovered security-related application flaws. Section 6 details the possibilities for future work, and Section 7 gives the conclusions drawn.
2 Flayer

2.1 Foundation

Flayer is implemented as a plug-in to Valgrind, a framework for instrumenting machine code at runtime. In particular, it is based upon functionality from Memcheck. Memcheck is a Valgrind plug-in that provides four types of memory error detection: byte-level addressability, heap allocations, memory block argument overlapping, and definedness checking. Of these, definedness checking was the basis for Flayer's taint propagation feature. Other functionality provided directly by Valgrind was leveraged for implementing taint sources and control flow alteration. In addition, Valgrind's default error output and robust command line argument handling mechanisms enabled easy automation with a simple wrapper library, LibFlayer.
2.2 Bit-precision taint tracing

Tainting is the process of tagging data with metadata that is propagated when that data is involved in a value-creating operation. The implementation of bit-precision taint tracing may be divided into three logical pieces: initial taint assignment, taint propagation and notification, and taint removal.
Taint is assigned to data based on the data sources specified on the command line. The following sources are supported: network, file, and stdin. All data originating from the network, the file system, or standard input are tainted through the instrumentation of system calls made by the target application. In most cases, this is handled by the read system call. As data enters the application via this kernel interface, the instrumented call checks if the source file descriptor is tainted and appropriately marks the destination memory addresses. In addition, recvmsg and recvfrom are instrumented in the same manner. File descriptor-based tainting is managed in two ways. If standard input tainting is specified, data originating from file descriptor 0 is tainted. For network and file tainting, file descriptor tracking is handled through the instrumentation of the following system calls: open, socket, connect, accept, socketpair, and close. When the data sourced from the file system is to be tainted, open controls whether a file descriptor is marked as providing tainted data. By default, if file tainting is enabled, all file descriptors opened with open will be marked. When a file descriptor is closed with close, it is unmarked as providing tainted data. However, tainting all input from open file descriptors may taint a large amount of data as shared libraries are loaded and files are read by the target application. The command line argument --file-filter exists to mitigate this problem. The argument takes a string which specifies a path prefix to the desired file, or files, to be tainted. This allows for targeted tainting of file input data. Unfortunately, there are no such filters for network tainting. If enabled, all network file descriptors are assumed to produce tainted data. Usually, this is not a burden given that network operations are not fundamental to process initialization. Along with system call instrumentation, taint may be assigned through one other mechanism: client calls. Valgrind provides a mechanism where special machine instructions may be inserted into an application, or library, at compile time through the use of C macros. Usually used from preloaded shared objects, these client calls may taint, untaint, or examine chunks of application memory.
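To make the file-descriptor bookkeeping above concrete, the following is a conceptual sketch in C, not Flayer's actual source: it shows how a tool could mark descriptors on open, taint the destination buffer on read, and unmark on close. The helper taint_memory() and the filter string are hypothetical stand-ins for Flayer's internal shadow-bit machinery and the --file-filter value.

/* Conceptual sketch only; not Flayer source. */
#include <stdio.h>
#include <string.h>

#define MAX_FDS 1024
static int tainted_fd[MAX_FDS];                 /* 1 if fd supplies tainted data */
static const char *file_filter = "/tmp/fuzz";   /* e.g. the --file-filter prefix */

static void taint_memory(void *addr, long len) {
    /* Stand-in: the real tool would set shadow bits for addr..addr+len. */
    printf("tainting %ld bytes at %p\n", len, addr);
}

static void on_open(int fd, const char *path) {
    /* Mark the descriptor when the opened path matches the filter prefix. */
    if (fd >= 0 && fd < MAX_FDS &&
        strncmp(path, file_filter, strlen(file_filter)) == 0)
        tainted_fd[fd] = 1;
}

static void on_read(int fd, void *buf, long nread) {
    /* Data read from a marked descriptor taints the destination bytes. */
    if (fd >= 0 && fd < MAX_FDS && tainted_fd[fd] && nread > 0)
        taint_memory(buf, nread);
}

static void on_close(int fd) {
    if (fd >= 0 && fd < MAX_FDS)
        tainted_fd[fd] = 0;                     /* fd no longer provides taint */
}

int main(void) {
    char buf[64];
    on_open(3, "/tmp/fuzz/test.tiff");          /* open() of a filtered file */
    on_read(3, buf, sizeof buf);                /* read() taints the buffer  */
    on_close(3);                                /* close() unmarks the fd    */
    return 0;
}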
The propagation of taintedness, whether data is tainted or not, is largely implemented using the undefinedness propagation technique implemented in Memcheck. In this technique, all bits in memory and registers have associated bits of metadata, shadow bits, which track taintedness. Furthermore, each value-creating memory operation has a shadow operation which calculates the taintedness of the result. This direct memory propagation approach performs the majority of the taintedness propagation. Flayer also implements an indirect technique to further expand coverage. Flayer preloads a shared library that replaces several functions in the target application which operate on strings and raw memory: strnlen, strlen, strncmp, strcmp, memcmp, and bcmp. In practice, these functions operate on memory that may be tainted but will not propagate taintedness to their return value because that value is not the direct result of a memory operation. For example, x = y + 1 results in x being tainted if y is tainted. However, in the following example len will not be tainted even if s is:
char *c = s; size_t len = 0;
for( ; *c; c++ ) { len++; }
return len;

While it is clear to a human that the final value stored in len is based completely on the contents of s, direct memory-to-memory propagation cannot address the situation. To work around this, the replacement functions listed make use of client calls to determine if the source memory is tainted and taint the return value appropriately. If these functions have been inlined, or custom equivalents are used, the preloaded versions will not be used and taintedness will not propagate indirectly.
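As an illustration of the indirect propagation just described, the sketch below shows what a preloaded strlen() replacement could look like. The macro names FLAYER_CHECK_TAINTED and FLAYER_TAINT_MEM are hypothetical stand-ins for the client-call macros mentioned above; the real header and macro names may differ, so stub definitions are provided only so the sketch compiles on its own.

/* Illustrative sketch of a preloaded strlen() replacement; not Flayer source. */
#include <stddef.h>

#ifndef FLAYER_CHECK_TAINTED
/* Stand-ins: the real macros expand to Valgrind client requests that query
 * and set shadow bits. */
#define FLAYER_CHECK_TAINTED(p, n) 0
#define FLAYER_TAINT_MEM(p, n)     ((void)0)
#endif

size_t strlen(const char *s) {
    size_t len = 0;
    while (s[len])
        len++;
    /* If any byte examined was tainted, explicitly taint the result so the
     * computed length inherits the taintedness of the string contents. */
    if (FLAYER_CHECK_TAINTED(s, len + 1))
        FLAYER_TAINT_MEM(&len, sizeof len);
    return len;
}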
Taintedness propagation functions generate external notification messages. Given that Memcheck already reports on traversed conditional jumps, system call argument usage, memory access, and SIMD or FP register memory loads, Flayer inherited output that is sufficiently rich without the addition of further messages.
Memory must be untainted when it no longer contains a tainted value to avoid false positives. In most cases, memory is untainted through the taint propagation code. If an untainted value is written directly to a tainted memory location, that location will become untainted. Memory is also untainted when it is allocated or freed on the heap through malloc/free wrapper functions. All other cases are handled through Valgrind callbacks: stack creation, stack destruction, and client calls.
2.3 Execution path alteration

Flayer alters a target program's execution path through direct instrumentation of its machine code, a practice classically used in software cracking. In particular, two types of alterations are possible: forcing conditional jumps and stepping over function calls. The instrumentation occurs after machine code is translated to Valgrind's intermediate representation (IR) and before it is translated back to machine code.
Conditional jump alteration is controlled by the --alter-branch command line argument. This argument takes a comma-separated list of instruction pointer and value pairs joined by colons, e.g. --alter-branch=0x8080:1,0x9090:0. The value specified after the instruction pointer is that of the guard of the conditional jump. A value of 0 indicates that the branch should not be followed while a value of 1 will result in the branch being followed. This behavior occurs irrespective of the values involved in the conditional itself. Any conditional jump may be altered using this technique regardless of whether it is visible during taint analysis.
In addition to forcing conditional jump outcomes, Flayer allows function calls to be stepped over using the --alter-fn command line argument. This argument takes a similar format to --alter-branch except that the value may be any 32-bit integer. The address supplied is not that of the function to be skipped, but instead, the address where the function is called. At this address, Flayer adds two instructions. The first sets the value of the EAX register to the 32-bit value supplied in the command line argument. The second is a jump to the next physical instruction after the call site. This forces the function call to be bypassed while still providing a controllable return value.
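The following is a conceptual C-level illustration, under assumed names, of what stepping over a call amounts to: check_version() and the address 0xADDR are made up, and the second run merely simulates the effect of --alter-fn. The point is that the skipped function's side effects never happen and the caller simply observes the forced return value.

/* Conceptual illustration only; not Flayer source. */
#include <stdio.h>

static int version_checked = 0;

static int check_version(const char *banner) {
    version_checked = 1;                    /* side effect of the real call */
    return banner[0] == 'S';                /* crude protocol sanity check  */
}

int main(void) {
    int ok;

    /* Unaltered run: the check rejects random input. */
    ok = check_version("garbage input");    /* call site at, say, 0xADDR */
    printf("unaltered: ok=%d checked=%d\n", ok, version_checked);

    /* Flayed run, simulated: with --alter-fn=0xADDR:1 the call is skipped,
     * the return register is forced to 1, and version_checked stays 0. */
    version_checked = 0;
    ok = 1;
    printf("flayed:    ok=%d checked=%d\n", ok, version_checked);
    return 0;
}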
2.4 LibFlayer

LibFlayer is a Python library which provides a programmatic interface to Flayer. It comprises several components, the most important of which is the Flayer class.
The Flayer class is the core interface of the library. It supplies the getters and setters for managing Flayer command line arguments and provides interfaces for interacting with parsed output. Through these interfaces it is possible to specify what input type to taint, what file paths to filter, and what conditional jump addresses to modify. The interface can be used directly or wrapped further for higher levels of abstraction. One such wrapper provides the interactive shell interface used by FlayerSh. In addition, some effort has been invested in the automated exploration of execution path trees using LibFlayer.
3 A new fuzz testing technique

3.1 Background

Random fault injection-based testing, or fuzz testing, is the technique of supplying random input to an application with the intent of discovering an unseen, and potentially dangerous, code path. Traditional fuzz testing is often underutilized due to its inherent limitations. In particular, exhaustive testing of an application's input space quickly becomes infeasible. Fuzz testing one or two bytes may not be prohibitive, but testing even a small set of 500 bytes requires 2^(8*500) combinations to completely exercise the input space.
While there are many specialized techniques to mitigate this exponential explosion of combinations, two generalized practices have arisen. The first is block-based [4], or format aware, fuzz testing. Spike [5], PROTOS [20], and Peach [11], among others, use this approach to limit the randomness in the data to just the mutation of format-specific components. This approach has shown its efficacy [4] but requires a substantial initial investment in the form of extensive format specification. Even in systems where this specification is generated automatically [32] [6], fuzz testing based on a protocol definition may not exercise code from undocumented features or proprietary vendor extensions and may waste significant resources testing unimplemented specification features. For example, consider testing an HTTP server. WebDAV [13] alone adds nine new HTTP methods in addition to multiple new HTTP headers. The combination of these HTTP methods, headers, and their arguments takes substantial time to explore regardless of whether the server supports the functionality.
The second technique is exemplified in the work by Vuagnoux called autodafé [32], as well as Pusscat's Byakugan [19]. The approach focuses on the use of recognizable patterns in the input stream which are detected through function hijacking or frequent memory scanning. This technique is useful for detecting which pieces of input reach specific locations, but it is limited by design. Not only is it possible for the marker text to be modified beyond recognition during execution, but the method itself introduces uncertainties in measurement. The values in the marker text will dictate which code paths are taken and intrinsically limit the coverage.
Recently, variations on directed fuzz testing have been introduced in parallel with the work presented in this paper. Jared DeMott's Evolutionary Fuzzing System [10] uses genetic algorithms to construct viable input sets based on reproductive criteria driven by the amount of code coverage of each successive run. It eliminates the risks of wasting effort on unimplemented functionality and of failing to exercise undocumented features. Like fuzz [14], it still must overcome basic protocol input validation tests. Usually, these tests are used in software to determine the format of incoming user input. This might be a version check similar to the protocol banner in OpenSSH [3] or a file format type indicator like the magic check in LibTIFF [2]. While this limitation may not affect the approach dramatically, other techniques, inspired by fuzz testing, address this issue through application flow analysis. Catchconv [15], EXE [8], and SAGE [12] leverage symbolic execution to guide input error detection and generation. Constraints are extracted by tracing the execution of an application on fixed input, such as a known good file. The extracted constraints are then explored through virtualized execution and, in some cases, through repeated execution on input mutated based on code coverage heuristics. These approaches have shown promising results but are limited by approximation errors in symbolic execution and the potential of poor initial input selection.
3.2 Fuzzing flayed applications

Fuzzing flayed applications is a lightweight testing approach which minimizes the initial time investment required from the auditor. The only initial work required is flaying. It does not require a protocol aware input generator, a large testing harness, or any input selection work. Instead, a time investment is required when a crash condition is uncovered. The auditor must spend time creating viable input or determining if the bug is unreachable in normal circumstances.
Flaying is an iterative process for increasing the reachability of complex application code by removing the outer layers of application defenses. Initially, an auditor must supply random input to a target application and analyze the resulting taint tracing output. As uninteresting, or non-state building, sanity and error checks are traversed, they must be forcibly followed or bypassed using Flayer's flow alteration commands. This process is repeated until the desired code is directly exposed for testing. Once exposed, traditional random fuzz testing is used to uncover vulnerabilities. Upon the discovery of a vulnerability, the malicious input must be crafted by the auditor such that it will bypass the removed checks in an unaltered version of the software. The success of this technique is discussed in Section 5.

Figure 1: Bypassing the "Protocol Mismatch" error check on an Ubuntu Feisty OpenSSH 4.3p2-8ubuntu1 binary
Flayer may be used on an application regardless of the availability of the source code or debugging symbols. While the availability of this data will speed the flaying and creation of valid input, in many cases simple heuristics make them unnecessary. For instance, if testing of OpenSSH's cipher suite negotiation is desirable, then it would be useful to bypass the SSH protocol version check. This is done in Figure 1 by stepping over a sscanf call. Address 0x8A2E was identified as the call site of the offending check as it preceded the first tainted call to the logging function which generated the bad protocol version error message. Only the libc symbols were used to infer this. With the check removed, it becomes possible to build a simple test harness that copies data from /dev/urandom and sends it to the flayed sshd. In addition, it is trivial to introduce the required data into any payload by prepending a proper version value. While this is a simplistic example, it captures the essence of the technique.
It is worth noting that the fuzz testing of flayed applications does not require Flayer. This technique was first performed manually through the removal of error and sanity checks using interactive debugging and source code modification. However, the automation of the iterative discovery and modification process greatly speeds the use. The primary benefit of manual flaying is the ability to bypass state building statements through code addition.
4 Further uses

The Flayer tool suite provides a useful feature set for software auditors, developers, and maintainers. The ability to comprehend and interact with the flow of data through an application provides unique insight into that application's operation and makes other useful security auditing and testing techniques possible.
4.1 Guided source code auditing

Many of the more dangerous vulnerabilities, such as remote execution of code, result from malicious user input. Therefore, it is quite useful to determine input entry points and input-tainted functions when auditing an application. This is where Flayer proves useful.
By running a given application, compiled with debugging symbols, through Flayer with an arbitrary input set, the auditor can see which conditional jumps are traversed by the data along with the containing functions. Given that the direct output from Flayer is not always immediately comprehensible to a human auditor, this technique is augmented by the use of FlayerSh.
...
Figure 2: A snippet of a guided auditing session in FlayerSh reviewing a magic check in tiffinfo (LibTIFF-3.8.2).
FlayerSh parses the output of Flayer providing error summaries, branch alteration, and source code snippet listing. Figure 2 provides an example session which shows a run of tiffinfo on random input, locations where tainted values were used, and the source code from one such use in a magic value check. Using this shell, it is possible to rapidly follow the data flow as well as review snippets of source code surrounding locations where tainted data was used. This allows for quick insight into the operation of the target application and immediately displays error checking locations without the need for additional tools or software.
FlayerSh does not replace interactive debuggers or disassemblers, such as GDB [1] or IDA Pro [9], but it does provide a compromise between single stepping through code execution and manually locating application error checking code.
>>> # LibTIFF 3.8.2 unpatched
>>> snippet(0x2)
         * Read offset to next directory for sequential
         * scans.
         */
        (void) ReadOK(tif, &nextdiroff, sizeof (uint32));
} else {
        toff_t off = tif->tif_diroff;
        if (off + sizeof (uint16) > tif->tif_size) {
                TIFFErrorExt(tif->tif_clientdata, module,
                        "%s: Can not read TIFF directory count",
                        tif->tif_name);
                return (0);
>>>

>>> # LibTIFF 3.8.2 patched
>>> snippet(0x2)
/*
 * Check for integer overflow when
 * validating the dir_off, otherwise
 * a very high offset may cause an
 * OOB read and crash the client.
 * -- taviso@google.com, 14 Jun 2006.
 */
if (off + sizeof (uint16) > tif->tif_size ||
    off > (UINT_MAX - sizeof(uint16))) {
        TIFFErrorExt(tif->tif_clientdata, module,
                "%s: Can not read TIFF directory count",
                tif->tif_name);
>>>
Figure 3: Patch analysis of LibTIFF version 3.8.2 using two FlayerSh instances.
4.2 Patch and vulnerability analysis

In complement to auditing and testing, Flayer and FlayerSh, in particular, prove useful when analyzing input data flow through variants of the same piece of software. This scenario occurs quite frequently in both the commercial and open source worlds: projects fork, operating system distributions apply different patches to the same original application, and systems become dependent on old versions of software. When vulnerabilities are announced, patches to the original source code will often not be useful to the maintainers of modified source.
It is possible to run two instances of FlayerSh, one on the patched original application and one on an unpatched variant, with a known bad input. This approach allows one to review the code snippet of each of the conditional jumps along the code path of both versions, and, if needed, to force specific behavior to locate any vulnerable code. Performing this simultaneous analysis results in a quick assessment of the variant's behavior.
Figure 3 provides an example of this. It shows a small piece of a FlayerSh session for a version of LibTIFF patched for the directory offset overflow and one that is not. In particular, it is displaying the affected tainted conditional where a safety check has been added in one version but is missing in the original.
5 Real world experience

Fuzz testing of flayed applications has been used with some success since the summer of 2006. This work resulted in the discovery of multiple vulnerabilities in well known open source applications:
Seven vulnerabilities in LibTIFF version 3.8.2 were disclosed [22] [23] [24] [25] [26] [27] [28].
A remote denial of service vulnerability was discovered [30] in OpenSSH which affected all versions before 4.4.
An out-of-bounds read was discovered [31] in libPNG which affected versions 1.0.6 through 1.2.12.
A NULL pointer dereference was disclosed [29] in OpenSSL which affected all current clients.
In addition, FlayerSh has been used to determine if variants of LibTIFF and OpenSSH were affected by these vulnerabilities.
5.1 Finding a LibTIFF overflow

One of the recently reported vulnerabilities in LibTIFF resulted from an unchecked integer value which had previously gone unnoticed. The value was that of the TIFF directory entry offset read directly from a supplied TIFF image file. This section provides a simple procedure for finding this vulnerability with Flayer.
The first step is identifying a good test application. For the purposes of this vulnerability, tiffinfo is used. LibTIFF version 3.8.2 was downloaded and compiled with debugging symbols. With this completed, the compiled tool is run under Flayer with some random input as seen in Figure 4.
$ dd if=/dev/urandom of=test.tiff \
bs=1k count=1
$ valgrind --tool=flayer \
--taint-file=yes \
--file-filter=$PWD/test.tiff \
./tiffinfo $PWD/test.tiff
Figure 4: Tracing random input through tiffinfo
The first run will result in an error message about the TIFF header magic. E.g., "Not a TIFF or MDI file, ...". In the Flayer output, there are three tainted conditional jump events which occur prior to the first printf call. It is assumed that this call issues the error message. Each of these identified conditional jumps is tested by supplying the instruction pointer address at which the event occurs to Flayer. One such test is shown in Figure 5.
$ valgrind --tool=flayer \
--taint-file=yes \
--file-filter=$PWD/test.tiff \
--alter-branch=0x4049E66:1 \
./tiffinfo $PWD/test.tiff
Figure 5: Testing a tainted conditional jump in tiffinfo
After some trial and error, it is possible to circumvent the BigTIFF and version error checking, resulting in a different error message: "Can not read TIFF directory count". With the version checks cleared, the directory count code may be exercised by the test harness provided in Figure 6.
#!/bin/bash
while /bin/true; do
    dd if=/dev/urandom \
       of=test.tiff bs=1k \
       count=1
    valgrind --tool=flayer \
       --taint-file=yes \
       --file-filter=$PWD/test.tiff \
       --alter-branch=0x4049E6C:1,0x4049EA6:1 \
       ./tiffinfo ./test.tiff
    if [[ $? -ne 0 && $? -ne 1 ]]; then
        break
    fi
done
Figure 6: An example Flayer test harness
The test harness is simple but has proved effective with LibTIFF and several other tested applications. However, for this vulnerability, once the directory count error message is triggered, a quick review of the source code at the specified line number reveals an integer overflow. In addition, had the auditor attempted to force the conditional jump with a guard value of 0 at that location, it would have immediately resulted in a segmentation fault.
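The overflow class at issue can be illustrated with a minimal, simplified C sketch (this is not the actual LibTIFF source; the variable names merely echo it). If the attacker-controlled offset is close to UINT_MAX, "off + sizeof(uint16)" wraps around to a small value, the size check passes, and the subsequent read lands far outside the buffer; the patched check shown in Figure 3 adds the extra comparison that catches the wrap.

/* Simplified illustration of the unchecked-offset overflow. */
#include <stdint.h>
#include <stdio.h>
#include <limits.h>

int main(void) {
    uint32_t tif_size = 1024;               /* size of the mapped TIFF file  */
    uint32_t off = 0xFFFFFFFEu;             /* tainted directory offset      */

    if (off + (uint32_t)sizeof(uint16_t) > tif_size) {
        puts("rejected: offset out of range");
    } else {
        /* Unpatched code reaches this branch: 0xFFFFFFFE + 2 wraps to 0. */
        puts("accepted: would read the directory count far past the buffer");
    }

    /* The patched check also rejects offsets that would wrap. */
    if (off + (uint32_t)sizeof(uint16_t) > tif_size ||
        off > (uint32_t)(UINT_MAX - sizeof(uint16_t))) {
        puts("patched check: rejected");
    }
    return 0;
}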
5.2 The good and the bad

Flayer and flaying have been used extensively for real world application auditing and fuzz testing. With use, the strengths and weaknesses of this tool and related techniques are clear.
For patch analysis and guided auditing, Flayer has worked well for the authors' needs, but auditing style is largely personal preference. With debugging symbols and available source code, however, it has proved a straightforward means for discovering input entry points to an application. This allowed for targeted audits which follow the data flow through the audited application without any initial analysis of the source code. In addition, the ability to step over functions and force conditionals was useful in analyzing foreign binary behavior. It is possible to guide binary analysis by indicating the addresses where interesting behavior occurs and forcing that behavior to continue. In many cases, if the target application crashes, it is possible to infer the data primitives expected by examining the resulting logs.
Fuzzing flayed applications is a highly effective technique for testing binary input such as image files and some network protocols. The values supplied by generating random data from /dev/urandom will fully exercise the handlers for the incoming binary data once the blocking checks are removed. However, when the input format is highly structured, such as the ASCII protocol HTTP, this coverage drops off significantly. The likelihood of data originating from /dev/urandom generating valid HTTP messages is extremely low. This does not completely discount the use of flaying and Flayer from these scenarios, though. Instead, the fully random data source may be replaced with a somewhat protocol aware payload generator. While a fully protocol aware payload generator may yield the most thorough protocol coverage, merging Flayer with a partially protocol aware generator allows for the execution path taken to be targeted. For example, Flayer may be used to bypass the HTTP version check in order to allow an HTTP BNF-based fuzzer to generate acceptable data without forcing it to be aware of which versions of the protocol are normally implemented.
Flayer has its own limitations. The largest of these is that skipping sections of code, conditional jump branches or entire functions, may result in missing required runtime state. While this is often not a problem, in some cases a value derived from the source data needs to fall within a small range, and that value is used in subsequent calculations or even memory allocations. When this occurs, Flayer is less useful and manual code modification is required to force correct state. Flayer suffers from another limitation. If a conditional jump is forced, it is forced every time. When that conditional jump determines whether a loop should continue, it is possible to lock the application in a never-ending loop. Flayer provides no mechanism yet to alter the outcome of a conditional a specific number of times.
A practical limitation of Flayer is that it does not yet provide full coverage of all useful taint source system calls. One notable example is mmap. This system call is used to map a file on the file system directly into process memory. Surprisingly, instrumenting this system call has not been necessary in testing and analysis done so far. Given that instrumentation has been added as needed, this is only a minor limitation.
6 Future Work

There are many avenues left to explore with Flayer. Most immediately, Flayer's implementation limitations should be removed. This includes expanding the coverage of tainting input vectors, adding support for altering a conditional jump a controllable number of times, adding network taint filtering, as well as adding an assignment operator to conditional jumps. In the case of an assignment operator, instead of forcing a jump by replacing the guard value, the actual tainted value would be reassigned to the value it is being tested against. This would address state building challenges in a simple, but effective way.
Other, more challenging, work is possible. One example is the addition of origin tracking of tainted memory. There is a Memcheck code branch which supports this concept, but it does not do so in a way compatible with Flayer. Adding this feature to the existing tool would allow further automated analysis and potentially, the automatic generation of input for interesting code paths. An alternate approach for reaching the same goal would be integrating Flayer's output with a program slicing [33] system. This approach would remove the need for origin tracking while still automatically generating input.
Additional work automating programmatic control flow comprehension is another viable direction. It is possible to automate the process of flaying through brute force flow alteration testing or through integration with more sophisticated systems. For instance, integration with a code coverage tool would allow automated runs of Flayer with randomly selected conditional jumps to be optimized. This integration would enable a tree view of the code path and provide pruning of dead-end code paths from the analysis, enhancing the quality of testing.
Along with these extensions, further integration of Flayer with other fuzz testing techniques will yield very useful results. Flayer may be used to force other fuzz testing software to test more targeted areas of code than it could previously reach. The compatibility and benefits of such integrations remain to be investigated.
7 Conclusions

The Flayer tool suite, built on the Valgrind framework using core concepts from Memcheck, should be added to the toolkit of anyone who regularly performs application auditing or vulnerability patch analysis.
Flayer provides mechanisms to trace input flow through an application and to arbitrarily modify that flow. LibFlayer layers a convenient interface on Flayer. FlayerSh provides a reference tool implemented on LibFlayer. This suite enables multiple security auditing and testing techniques, such as flaying. In concert, these tools and techniques allow one to more effectively audit software.
The Flayer tool suite is a starting point for application auditing and analysis that requires extremely little initial investment while yielding solid results. Even though Flayer is still at an early stage, its techniques have proved their efficacy through the discovery of vulnerabilities in Internet security critical applications, such as OpenSSH and OpenSSL. This software is available for public use and enhancement.
Here's a theory you hear a lot these days: "Microsoft is finished. As soon as Linux makes some inroads on the desktop and web applications replace desktop applications, the mighty empire will topple."

Although there is some truth to the fact that Linux is a huge threat to Microsoft, predictions of the Redmond company's demise are, to say the least, premature. Microsoft has an incredible amount of cash money in the bank and is still incredibly profitable. It has a long way to fall. It could do everything wrong for a decade before it started to be in remote danger, and you never know... they could reinvent themselves as a shaved-ice company at the last minute. So don't be so quick to write them off. In the early 90s everyone thought IBM was completely over: mainframes were history! Back then, Robert X. Cringely predicted that the era of the mainframe would end on January 1, 2000 when all the applications written in COBOL would seize up, and rather than fix those applications, for which, allegedly, the source code had long since been lost, everybody would rewrite those applications for client-server platforms.

Well, guess what. Mainframes are still with us, nothing happened on January 1, 2000, and IBM reinvented itself as a big ol' technology consulting company that also happens to make cheap plastic telephones. So extrapolating from a few data points to the theory that Microsoft is finished is really quite a severe exaggeration.

However, there is a less understood phenomenon which is going largely unnoticed: Microsoft's crown strategic jewel, the Windows API, is lost. The cornerstone of Microsoft's monopoly power and incredibly profitable Windows and Office franchises, which account for virtually all of Microsoft's income and cover up a huge array of unprofitable or marginally profitable product lines, the Windows API is no longer of much interest to developers. The goose that lays the golden eggs is not quite dead, but it does have a terminal disease, one that nobody has noticed yet.

Now that I've said that, allow me to apologize for the grandiloquence and pomposity of that preceding paragraph. I think I'm starting to sound like those editorial writers in the trade rags who go on and on about Microsoft's strategic asset, the Windows API. It's going to take me a few pages, here, to explain what I'm really talking about and justify my arguments. Please don't jump to any conclusions until I explain what I'm talking about. This will be a long article. I need to explain what the Windows API is; I need to demonstrate why it's the most important strategic asset to Microsoft; I need to explain how it was lost and what the implications of that are in the long term. And because I'm talking about big trends, I need to exaggerate and generalize.

Developers, Developers, Developers, Developers

Remember the definition of an operating system? It's the thing that manages a computer's resources so that application programs can run. People don't really care much about operating systems; they care about those application programs that the operating system makes possible. Word Processors. Instant Messaging. Email. Accounts Payable. Web sites with pictures of Paris Hilton. By itself, an operating system is not that useful. People buy operating systems because of the useful applications that run on them. And therefore the most useful operating system is the one that has the most useful applications.

The logical conclusion of this is that if you're trying to sell operating systems, the most important thing to do is make software developers want to develop software for your operating system. That's why Steve Ballmer was jumping around the stage shouting "Developers, developers, developers, developers." It's so important for Microsoft that the only reason they don't outright give away development tools for Windows is because they don't want to inadvertently cut off the oxygen to competitive development tools vendors (well, those that are left) because having a variety of development tools available for their platform makes it that much more attractive to developers. But they really want to give away the development tools. Through their Empower ISV program you can get five complete sets of MSDN Universal (otherwise known as "basically every Microsoft product except Flight Simulator") for about $375. Command line compilers for the .NET languages are included with the free .NET runtime... also free. The C++ compiler is now free. Anything to encourage developers to build for the .NET platform, while stopping just short of wiping out companies like Borland.

Why Apple and Sun Can't Sell Computers

Well, of course, that's a little bit silly: of course Apple and Sun can sell computers, but not to the two most lucrative markets for computers, namely, the corporate desktop and the home computer. Apple is still down there in the very low single digits of market share and the only people with Suns on their desktops are at Sun. (Please understand that I'm talking about large trends here, and therefore when I say things like "nobody" I really mean "fewer than 10,000,000 people," and so on and so forth.)

Why? Because Apple and Sun computers don't run Windows programs, or, if they do, it's in some kind of expensive emulation mode that doesn't work so great. Remember, people buy computers for the applications that they run, and there's so much more great desktop software available for Windows than Mac that it's very hard to be a Mac user.

Sidebar What is this "API" thing?

If you're writing a program, say, a word processor, and you want to display a menu, or write a file, you have to ask the operating system to do it for you, using a very specific set of function calls which are different on every operating system. These function calls are called the API: it's the interface that an operating system, like Windows, provides to application developers, like the programmers building word processors and spreadsheets and whatnot. It's a set of thousands and thousands of detailed and fussy functions and subroutines that programmers can use, which cause the operating system to do interesting things like display a menu, read and write files, and more esoteric things like find out how to spell out a given date in Serbian, or extremely complex things like display a web page in a window. If your program uses the API calls for Windows, it's not going to work on Linux, which has different API calls. Sometimes they do approximately the same thing. That's one important reason Windows software doesn't run on Linux. If you wanted to get a Windows program to run under Linux, you'd have to reimplement the entire Windows API, which consists of thousands of complicated functions: this is almost as much work as implementing Windows itself, something which took Microsoft thousands of person-years. And if you make one tiny mistake or leave out one function that an application needs, that application will crash.
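For a concrete sense of what one of these calls looks like, here is a minimal illustration: a tiny Windows program that asks the operating system to put up a dialog box through a single Win32 API function, MessageBoxA. It only builds on Windows (linked against user32); on Linux this source won't even compile, because that API simply doesn't exist there unless someone reimplements it.

/* Minimal Windows-only illustration of calling the Windows API. */
#include <windows.h>

int main(void) {
    /* Ask the operating system to display a dialog box for us. */
    MessageBoxA(NULL, "Hello from the Windows API", "Example", MB_OK);
    return 0;
}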
And that's why the Windows API is such an important asset to Microsoft.

(I know, I know, at this point the 2.3% of the world that uses Macintoshes are warming up their email programs to send me a scathing letter about how much they love their Macs. Once again, I'm speaking in large trends and generalizing, so don't waste your time. I know you love your Mac. I know it runs everything you need. I love you, you're a Pepper, but you're only 2.3% of the world, so this article isn't about you.)

The Two Forces at Microsoft

There are two opposing forces inside Microsoft, which I will refer to, somewhat tongue-in-cheek, as The Raymond Chen Camp and The MSDN Magazine Camp.

Raymond Chen is a developer on the Windows team at Microsoft. He's been there since 1992, and his weblog The Old New Thing is chock-full of detailed technical stories about why certain things are the way they are in Windows, even silly things, which turn out to have very good reasons.

The most impressive things to read on Raymond's weblog are the stories of the incredible efforts the Windows team has made over the years to support backwards compatibility:

Look at the scenario from the customer's standpoint. You bought programs X, Y and Z. You then upgraded to Windows XP. Your computer now crashes randomly, and program Z doesn't work at all. You're going to tell your friends, "Don't upgrade to Windows XP. It crashes randomly, and it's not compatible with program Z." Are you going to debug your system to determine that program X is causing the crashes, and that program Z doesn't work because it is using undocumented window messages? Of course not. You're going to return the Windows XP box for a refund. (You bought programs X, Y, and Z some months ago. The 30-day return policy no longer applies to them. The only thing you can return is Windows XP.)

I first heard about this from one of the developers of the hit game SimCity, who told me that there was a critical bug in his application: it used memory right after freeing it, a major no-no that happened to work OK on DOS but would not work under Windows where memory that is freed is likely to be snatched up by another running application right away. The testers on the Windows team were going through various popular applications, testing them to make sure they worked OK, but SimCity kept crashing. They reported this to the Windows developers, who disassembled SimCity, stepped through it in a debugger, found the bug, and added special code that checked if SimCity was running, and if it was, ran the memory allocator in a special mode in which you could still use memory after freeing it.

This was not an unusual case. The Windows testing team is huge and one of their most important responsibilities is guaranteeing that everyone can safely upgrade their operating system, no matter what applications they have installed, and those applications will continue to run, even if those applications do bad things or use undocumented functions or rely on buggy behavior that happens to be buggy in Windows n but is no longer buggy in Windows n+1. In fact if you poke around in the AppCompatibility section of your registry you'll see a whole list of applications that Windows treats specially, emulating various old bugs and quirky behaviors so they'll continue to work. Raymond Chen writes, "I get particularly furious when people accuse Microsoft of maliciously breaking applications during OS upgrades. If any application failed to run on Windows 95, I took it as a personal failure. I spent many sleepless nights fixing bugs in third-party programs just so they could keep running on Windows 95."

A lot of developers and engineers don't agree with this way of working. If the application did something bad, or relied on some undocumented behavior, they think, it should just break when the OS gets upgraded. The developers of the Macintosh OS at Apple have always been in this camp. It's why so few applications from the early days of the Macintosh still work. For example, a lot of developers used to try to make their Macintosh applications run faster by copying pointers out of the jump table and calling them directly instead of using the interrupt feature of the processor like they were supposed to. Even though somewhere in Inside Macintosh, Apple's official Bible of Macintosh programming, there was a tech note saying "you can't do this," they did it, and it worked, and their programs ran faster... until the next version of the OS came out and they didn't run at all. If the company that made the application went out of business (and most of them did), well, tough luck, bubby.

To contrast, I've got DOS applications that I wrote in 1983 for the very original IBM PC that still run flawlessly, thanks to the Raymond Chen Camp at Microsoft. I know, it's not just Raymond, of course: it's the whole modus operandi of the core Windows API team. But Raymond has publicized it the most through his excellent website The Old New Thing so I'll name it after him.

That's one camp. The other camp is what I'm going to call the MSDN Magazine camp, which I will name after the developer's magazine full of exciting articles about all the different ways you can shoot yourself in the foot by using esoteric combinations of Microsoft products in your own software. The MSDN Magazine Camp is always trying to convince you to use new and complicated external technology like COM+, MSMQ, MSDE, Microsoft Office, Internet Explorer and its components, MSXML, DirectX (the very latest version, please), Windows Media Player, and Sharepoint... Sharepoint! which nobody has; a veritable panoply of external dependencies each one of which is going to be a huge headache when you ship your application to a paying customer and it doesn't work right. The technical name for this is DLL Hell. It works here: why doesn't it work there?

The Raymond Chen Camp believes in making things easy for developers by making it easy to write once and run anywhere (well, on any Windows box). The MSDN Magazine Camp believes in making things easy for developers by giving them really powerful chunks of code which they can leverage, if they are willing to pay the price of incredibly complicated deployment and installation headaches, not to mention the huge learning curve. The Raymond Chen camp is all about consolidation. Please, don't make things any worse, let's just keep making what we already have still work. The MSDN Magazine Camp needs to keep churning out new gigantic pieces of technology that nobody can keep up with.

Here's why this matters.

Microsoft Lost the Backwards Compatibility Religion

Inside Microsoft, the MSDN Magazine Camp has won the battle.

The first big win was making Visual Basic.NET not backwards-compatible with VB 6.0. This was literally the first time in living memory that when you bought an upgrade to a Microsoft product, your old data (i.e. the code you had written in VB6) could not be imported perfectly and silently. It was the first time a Microsoft upgrade did not respect the work that users did using the previous version of a product.

And the sky didn't seem to fall, not inside Microsoft. VB6 developers were up in arms, but they were disappearing anyway, because most of them were corporate developers who were migrating to web development anyway. The real long term damage was hidden.

With this major victory under their belts, the MSDN Magazine Camp took over. Suddenly it was OK to change things. IIS 6.0 came out with a different threading model that broke some old applications. I was shocked to discover that our customers with Windows Server 2003 were having trouble running FogBugz. Then .NET 1.1 was not perfectly backwards compatible with 1.0. And now that the cat was out of the bag, the OS team got into the spirit and decided that instead of adding features to the Windows API, they were going to completely replace it. Instead of Win32, we are told, we should now start getting ready for WinFX: the next generation Windows API. All different. Based on .NET with managed code. XAML. Avalon. Yes, vastly superior to Win32, I admit it. But not an upgrade: a break with the past.

Outside developers, who were never particularly happy with the complexity of Windows development, have defected from the Microsoft platform en masse and are now developing for the web. Paul Graham, who created Yahoo! Stores in the early days of the dotcom boom, summarized it eloquently: "There is all the more reason for startups to write Web-based software now, because writing desktop software has become a lot less fun. If you want to write desktop software now you do it on Microsoft's terms, calling their APIs and working around their buggy OS. And if you manage to write something that takes off, you may find that you were merely doing market research for Microsoft."

Microsoft got big enough, with too many developers, and they were too addicted to upgrade revenues, so they suddenly decided that reinventing everything was not too big a project. Heck, we can do it twice. The old Microsoft, the Microsoft of Raymond Chen, might have implemented things like Avalon, the new graphics system, as a series of DLLs that can run on any version of Windows and which could be bundled with applications that need them. There's no technical reason not to do this. But Microsoft needs to give you a reason to buy Longhorn, and what they're trying to pull off is a sea change, similar to the sea change that occurred when Windows replaced DOS. The trouble is that Longhorn is not a very big advance over Windows XP; not nearly as big as Windows was over DOS. It probably won't be compelling enough to get people to buy all new computers and applications like they did for Windows. Well, maybe it will, Microsoft certainly needs it to be, but what I've seen so far is not very convincing. A lot of the bets Microsoft made are the wrong ones. For example, WinFS, advertised as a way to make searching work by making the file system be a relational database, ignores the fact that the real way to make searching work is by making searching work. Don't make me type metadata for all my files that I can search using a query language. Just do me a favor and search the damned hard drive, quickly, for the string I typed, using full-text indexes and other technologies that were boring in 1973.

Automatic Transmissions Win the Day

Don't get me wrong... I think .NET is a great development environment and Avalon with XAML is a tremendous advance over the old way of writing GUI apps for Windows. The biggest advantage of .NET is the fact that it has automatic memory management.

A lot of us thought in the 1990s that the big battle would be between procedural and object oriented programming, and we thought that object oriented programming would provide a big boost in programmer productivity. I thought that, too. Some people still think that. It turns out we were wrong. Object oriented programming is handy dandy, but it's not really the productivity booster that was promised. The real significant productivity advance we've had in programming has been from languages which manage memory for you automatically. It can be with reference counting or garbage collection; it can be Java, Lisp, Visual Basic (even 1.0), Smalltalk, or any of a number of scripting languages. If your programming language allows you to grab a chunk of memory without thinking about how it's going to be released when you're done with it, you're using a managed-memory language, and you are going to be much more efficient than someone using a language in which you have to explicitly manage memory. Whenever you hear someone bragging about how productive their language is, they're probably getting most of that productivity from the automated memory management, even if they misattribute it.

Sidebar
Why does automatic memory management make you so much more productive? 1) Because you can write f(g(x)) without worrying about how to free the return value from g, which means you can use functions which return interesting complex data types and functions which transform interesting complex data types, in turn allowing you to work at a higher level of abstraction. 2) Because you don't have to spend any time writing code to free memory or tracking down memory leaks. 3) Because you don't have to carefully coordinate the exit points from your functions to make sure things are cleaned up properly.
Racing car aficionados will probably send me hate mail for this, but my experience has been that there is only one case, in normal driving, where a good automatic transmission is inferior to a manual transmission. Similarly in software development: in almost every case, automatic memory management is superior to manual memory management and results in far greater programmer productivity.
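
To make point 1 concrete, here is a made-up example in plain C-style C++ with hypothetical functions f and g; nothing here is anyone's real API, it just shows who has to own the intermediate string when the language doesn't manage memory for you.

    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    // Two hypothetical unmanaged-world functions. g returns a freshly
    // malloc'd string that the *caller* is responsible for freeing.
    char* g(const char* x) {
        char* s = (char*)std::malloc(std::strlen(x) + 3);   // "[" + x + "]" + '\0'
        std::sprintf(s, "[%s]", x);
        return s;
    }

    size_t f(const char* s) {
        return std::strlen(s);
    }

    int main() {
        // You can't just write f(g("hello")) and move on: the pointer g
        // returned would leak. You have to catch it, use it, and free it.
        char* tmp = g("hello");
        std::printf("%zu\n", f(tmp));
        std::free(tmp);

        // In a managed-memory language the one-liner f(g(x)) is fine, which
        // is exactly why you end up composing functions freely there.
        return 0;
    }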

If you were developing desktop applications in the early years of Windows, Microsoft offered you two ways to do it: writing C code which calls the Windows API directly and managing your own memory, or using Visual Basic and getting your memory managed for you. These are the two development environments I have used the most, personally, over the last 13 years or so, and I know them inside-out, and my experience has been that Visual Basic is significantly more productive. Often I've written the same code, once in C++ calling the Windows API and once in Visual Basic, and C++ always took three or four times as much work. Why? Memory management. The easiest way to see why is to look at the documentation for any Windows API function that needs to return a string. Look closely at how much discussion there is around the concept of who allocates the memory for the string, and how you negotiate how much memory will be needed. Typically, you have to call the function twice—on the first call, you tell it that you've allocated zero bytes, and it fails with a "not enough memory allocated" message and conveniently also tells you how much memory you need to allocate. That's if you're lucky enough not to be calling a function which returns a list of strings or a whole variable-length structure. In any case, simple operations like opening a file, writing a string, and closing it using the raw Windows API can take a page of code. In Visual Basic similar operations can take three lines.
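
Here is roughly what that negotiation looks like in C++ against the raw Windows API. I'm using GetCurrentDirectoryW as a stand-in for the string-returning functions I mean, and this sketch only builds on Windows.

    #include <windows.h>

    #include <cstdio>
    #include <vector>

    int main() {
        // Call #1: pass a zero-length buffer. The call "fails" on purpose,
        // and the return value tells us how many characters (including the
        // null terminator) the real buffer needs to hold.
        DWORD needed = GetCurrentDirectoryW(0, nullptr);
        if (needed == 0) return 1;                    // an actual API error

        // Now allocate exactly that much and call it again to get the string.
        std::vector<wchar_t> buffer(needed);
        DWORD written = GetCurrentDirectoryW(needed, buffer.data());
        if (written == 0 || written >= needed) return 1;

        wprintf(L"%ls\n", buffer.data());
        return 0;
    }

Two calls, a "failure" that isn't really a failure, and a buffer you personally own, all to read one string; the Visual Basic equivalent is a single call to CurDir.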

So, you've got these two programming worlds. Everyone has pretty much decided that the world of managed code is far superior to the world of unmanaged code. Visual Basic was (and probably remains) the number one bestselling language product of all time and developers preferred it over C or C++ for Windows development, although the fact that "Basic" was in the name of the product made hardcore programmers shun it even though it was a fairly modern language with a handful of object-oriented features and very little leftover gunk (line numbers and the LET statement having gone the way of the hula hoop). The other problem with VB was that deployment required shipping a VB runtime, which was a big deal for shareware distributed over modems, and, worse, let other programmers see that your application was developed in (the shame!) Visual Basic.

One Runtime To Rule Them All

And along came .NET. This was a grand project, the super-duper unifying project to clean up the whole mess once and for all. It would have memory management, of course. It would still have Visual Basic, but it would gain a new language, one which is in spirit virtually the same as Visual Basic but with the C-like syntax of curly braces and semicolons. And best of all, the new Visual Basic/C hybrid would be called Visual C#, so you would not have to tell anyone you were a "Basic" programmer any more. All those horrid Windows functions with their tails and hooks and backwards-compatibility bugs and impossible-to-figure-out string-returning semantics would be wiped out, replaced by a single clean object oriented interface that only has one kind of string. One runtime to rule them all. It was beautiful. And they pulled it off, technically. .NET is a great programming environment that manages your memory and has a rich, complete, and consistent interface to the operating system and a rich, super complete, and elegant object library for basic operations.

And yet, people aren't really using .NET much.

Oh sure, some of them are.

But the idea of unifying the mess of Visual Basic and Windows API programming by creating a completely new, ground-up programming environment with not one, not two, but three languages (or are there four?) is sort of like the idea of getting two quarreling kids to stop arguing by shouting "shut up!" louder than either of them. It only works on TV. In real life when you shout "shut up!" to two people arguing loudly you just create a louder three-way argument.

(By the way, for those of you who follow the arcane but politically-charged world of blog syndication feed formats, you can see the same thing happening over there. RSS became fragmented with several different versions, inaccurate specs and lots of political fighting, and the attempt to clean everything up by creating yet another format called Atom has resulted in several different versions of RSS plus one version of Atom, inaccurate specs and lots of political fighting. When you try to unify two opposing forces by creating a third alternative, you just end up with three opposing forces. You haven't unified anything and you haven't really fixed anything.)

So now instead of .NET unifying and simplifying, we have a big 6-way mess, with everybody trying to figure out which development strategy to use and whether they can afford to port their existing applications to .NET.

No matter how consistent Microsoft is in their marketing message ("just use .NET—trust us!"), most of their customers are still using C, C++, Visual Basic 6.0, and classic ASP, not to mention all the other development tools from other companies. And the ones that are using .NET are using ASP.NET to develop web applications, which run on a Windows server but don't require Windows clients, which is a key point I'll talk about more when I talk about the web.

Oh, Wait, There's More Coming!

Now Microsoft has so many developers cranking away that it's not enough to reinvent the entire Windows API: they have to reinvent it twice. At last year's PDC they preannounced the next major version of their operating system, codenamed Longhorn, which will contain, among other things, a completely new user interface API, codenamed Avalon, rebuilt from the ground up to take advantage of modern computers' fast display adapters and realtime 3D rendering. And if you're developing a Windows GUI app today using Microsoft's "official" latest-and-greatest Windows programming environment, WinForms, you're going to have to start over again in two years to support Longhorn and Avalon. Which explains why WinForms is completely stillborn. Hope you haven't invested too much in it. Jon Udell found a slide from Microsoft labelled "How Do I Pick Between Windows Forms and Avalon?" and asks, "Why do I have to pick between Windows Forms and Avalon?" A good question, and one to which he finds no great answer.

So you've got the Windows API, you've got VB, and now you've got .NET, in several language flavors, and don't get too attached to any of that, because we're making Avalon, you see, which will only run on the newest Microsoft operating system, which nobody will have for a loooong time. And personally I still haven't had time to learn .NET very deeply, and we haven't ported Fog Creek's two applications from classic ASP and Visual Basic 6.0 to .NET because there's no return on investment for us. None. It's just Fire and Motion as far as I'm concerned: Microsoft would love for me to stop adding new features to our bug tracking software and content management software and instead waste a few months porting it to another programming environment, something which will not benefit a single customer and therefore will not gain us one additional sale, and therefore which is a complete waste of several months, which is great for Microsoft, because they have content management software and bug tracking software, too, so they'd like nothing better than for me to waste time spinning cycles catching up with the flavor du jour, and then waste another year or two doing an Avalon version, too, while they add features to their own competitive software. Riiiight.

No developer with a day job has time to keep up with all the new development tools coming out of Redmond, if only because there are too many dang employees at Microsoft making development tools!

It's Not 1990

Microsoft grew up during the 1980s and 1990s, when the growth in personal computers was so dramatic that every year there were more new computers sold than the entire installed base. That meant that if you made a product that only worked on new computers, within a year or two it could take over the world even if nobody switched to your product. That was one of the reasons Word and Excel displaced WordPerfect and Lotus so thoroughly: Microsoft just waited for the next big wave of hardware upgrades and sold Windows, Word and Excel to corporations buying their next round of desktop computers (in some cases their first round). So in many ways Microsoft never needed to learn how to get an installed base to switch from product N to product N+1. When people get new computers, they're happy to get all the latest Microsoft stuff on the new computer, but they're far less likely to upgrade. This didn't matter when the PC industry was growing like wildfire, but now that the world is saturated with PCs most of which are Just Fine, Thank You, Microsoft is suddenly realizing that it takes much longer for the latest thing to get out there. When they tried to "End Of Life" Windows 98, it turned out there were still so many people using it they had to promise to support that old creaking grandma for a few more years.

Unfortunately, these Brave New Strategies, things like .NET and Longhorn and Avalon, trying to create a new API to lock people into, can't work very well if everybody is still using their good-enough computers from 1998. Even if Longhorn ships when it's supposed to, in 2006, which I don't believe for a minute, it will take a couple of years before enough people have it that it's even worth considering as a development platform. Developers, developers, developers, and developers are not buying into Microsoft's multiple-personality-disordered suggestions for how we should develop software.

Enter the Web

I'm not sure how I managed to get this far without mentioning the Web. Every developer has a choice to make when they plan a new software application: they can build it for the web or they can build a "rich client" application that runs on PCs. The basic pros and cons are simple: Web applications are easier to deploy, while rich clients offer faster response times, enabling much more interesting user interfaces.

Web Applications are easier to deploy because there's no installation involved. Installing a web application means typing a URL in the address bar. Today I installed Google's new email application by typing Alt+D, gmail, Ctrl+Enter. There are far fewer compatibility problems and problems coexisting with other software. Every user of your product is using the same version so you never have to support a mix of old versions. You can use any programming environment you want because you only have to get it up and running on your own server. Your application is automatically available at virtually every reasonable computer on the planet. Your customers' data, too, is automatically available at virtually every reasonable computer on the planet.

But there's a price to pay in the smoothness of the user interface. Here are a few examples of things you can't really do well in a web application:

Create a fast drawing program
Build a real-time spell checker with wavy red underlines
Warn users that they are going to lose their work if they hit the close box of the browser
Update a small part of the display based on a change that the user makes without a full roundtrip to the server
Create a fast keyboard-driven interface that doesn't require the mouse
Let people continue working when they are not connected to the Internet
These are not all big issues. Some of them will be solved very soon by witty JavaScript developers. Two new web applications, Gmail and Oddpost, both email apps, do a really decent job of working around or completely solving some of these issues. And users don't seem to care about the little UI glitches and slowness of web interfaces. Almost all the normal people I know are perfectly happy with web-based email, for some reason, no matter how much I try to convince them that the rich client is, uh, richer.

So the Web user interface is about 80% there, and even without new web browsers we can probably get 95% there. This is Good Enough for most people and it's certainly good enough for developers, who have voted to develop almost every significant new application as a web application.

Which means, suddenly, Microsoft's API doesn't matter so much. Web applications don't require Windows.

It's not that Microsoft didn't notice this was happening. Of course they did, and when the implications became clear, they slammed on the brakes. Promising new technologies like HTAs and DHTML were stopped in their tracks. The Internet Explorer team seems to have disappeared; they have been completely missing in action for several years. There's no way Microsoft is going to allow DHTML to get any better than it already is: it's just too dangerous to their core business, the rich client. The big meme at Microsoft these days is: "Microsoft is betting the company on the rich client." You'll see that somewhere in every slide presentation about Longhorn. Joe Beda, from the Avalon team, says that "Avalon, and Longhorn in general, is Microsoft's stake in the ground, saying that we believe power on your desktop, locally sitting there doing cool stuff, is here to stay. We're investing on the desktop, we think it's a good place to be, and we hope we're going to start a wave of excitement..."

The trouble is: it's too late.

I'm a Little Bit Sad About This, Myself

I'm actually a little bit sad about this, myself. To me the Web is great but Web-based applications with their sucky, high-latency, inconsistent user interfaces are a huge step backwards in daily usability. I love my rich client applications and would go nuts if I had to use web versions of the applications I use daily: Visual Studio, CityDesk, Outlook, Corel PhotoPaint, QuickBooks. But that's what developers are going to give us. Nobody (by which, again, I mean "fewer than 10,000,000 people") wants to develop for the Windows API any more. Venture Capitalists won't invest in Windows applications because they're so afraid of competition from Microsoft. And most users don't seem to care about crappy Web UIs as much as I do.

And here's the clincher: I noticed (and confirmed this with a recruiter friend) that Windows API programmers here in New York City who know C++ and COM programming earn about $130,000 a year, while typical Web programmers using managed code languages (Java, PHP, Perl, even ASP.NET) earn about $80,000 a year. That's a huge difference, and when I talked to some friends from Microsoft Consulting Services about this they admitted that Microsoft had lost a whole generation of developers. The reason it takes $130,000 to hire someone with COM experience is that nobody has bothered learning COM programming in the last eight years or so, so you have to find somebody really senior (usually they're already in management) and convince them to take a job as a grunt programmer, dealing with (God help me) marshalling and monikers and apartment threading and aggregates and tearoffs and a million other things that, basically, only Don Box ever understood, and even Don Box can't bear to look at them any more.

Much as I hate to say it, a huge chunk of developers have long since moved to the web and refuse to move back. Most .NET developers are ASP.NET developers, developing for Microsoft's web server. ASP.NET is brilliant; I've been working with web development for ten years and it's really just a generation ahead of everything out there. But it's a server technology, so clients can use any kind of desktop they want. And it runs pretty well under Linux using Mono.

None of this bodes well for Microsoft and the profits it enjoyed thanks to its API power. The new API is HTML, and the new winners in the application development marketplace will be the people who can make HTML sing.

Why and How Do Cats Purr?
My mailbox often brings interesting challenges, as in this short question I received a while back from Gideon: "Do cats purr when they are alone?" What a great question! As I replied to Gideon, it is on the order of, "If a tree falls in the woods, and there's no one there to hear it, does it make a sound?" Or - "Does the light burn inside a closed refrigerator?" Yet, by far, I think the question about cats purring is the much more fascinating of the three. Truthfully, I don't know if cats purr when they are alone. It seems likely that they do, if one understands a little about how and why cats purr. To learn more about that topic, you'll just have to read the full article.

The cats pictured here were both purring to beat the band as Jaspurr lovingly groomed Bubba, his mentor.

There is absolutely no fear that compares with the one that strikes a parent when a baby gets sick, and that applies almost equally to "cat parents." Kittens, by their very size, seem so fragile, and it is such a helpless feeling to watch them sneezing and coughing with a cold.

Because kittens are so fragile, when they are sick, they really need to be seen by a veterinarian. Even colds can be serious for young kittens, and colds can easily develop into bronchitis or even pneumonia. Other serious infectious diseases can mimic colds, and only your veterinarian can properly diagnose and treat them.

I am reposting this article which was first posted in 2004, because so many people are still posting comments three years later, asking me and my readers for medical help, something none of us can provide. The bottom line is to read the linked articles to learn more about the things YOU can do, but take your kitten to the veterinarian for diagnosis and treatment.
More Suggested Reading for Parents of Kittens With Colds:

URIs (upper respiratory infections) in Cats
How to Take a Cat's Temperature
How to Give a Cat Liquid Medicine
How to Pill a Cat

Most cat lovers want to give their cats the very best care. With this fact in mind, your knowledge about cats' needs plays a large part in the kind of care you may actually be giving them. This quiz is a followup to several articles on the About Cats site, including the free email class on cat care. Links to these resources are included below. You can either do your homework first, or take the quiz from scratch, then bone up on those areas you may have missed.

You may choose your own difficulty level for this quiz, depending on the number of questions you wish to answer. Please choose the MOST CORRECT answer for each question. Have fun with it, and don't forget to cuddle your cat when you've finished!

Pictured here is Raleigh, our beloved Cats Forum diva who allows HOSTPat to share her home, and who receives the very best of care.

Although it's still been warm here during the day, there's a decided chill in the air at night, and Old Man Winter isn't that far away. The ASPCA sends annual reminders that cold weather can be a killer for cats that are, for one reason or another, outside cats. Although cats really belong indoors, many people feed stray cats or care for feral colonies, and this is important information for those who care for outdoor cats. Among the tips the ASPCA offers are:
Keep them sheltered from the cold and wind
If you take your cat out for a walk, dress him in a warm jacket (small dog sizes work well).
Keep them well-hydrated
Heated water bowls are available for this purpose, if you have a stray colony in your yard.
Watch out for spilled anti-freeze
It's very attractive to cats and dogs, and it's poison.
Ice melts are dangerous too
Used to rid sidewalks and driveways of ice and snow, these products contain sodium compounds, which can irritate tender pads and cause even more problems if ingested.
Suggested Reading:
Top Reasons to Keep Cats Indoors
Seasonal Safety Resources
ASPCA's Bulletin

From the HSUS:
This summer, the U.S. House of Representatives included language in its Farm Bill that would prohibit the use in research of dogs and cats obtained from "random source" Class B dealers, who may steal pets or fraudulently obtain them through "free to good home" ads. Now, we need the Senate to act. Please help protect pets by taking action today.
The Senate is expected to vote on its version of the Farm Bill any day now. It is crucial that we ensure that this vital protection is included in the Senate's Farm Bill as well.

Phone calls are suggested, with follow-up emails wherever possible.
U.S. Senators' Phone Numbers by State
U.S. Senators' Email Addresses by State
Suggested Reading:
Free Kittuns, by Jim Willis

"Blackberry." We adopted this little male kitten only yesterday from the SPCA. He's only 10 weeks old. As he was exploring the house , he discovered a dusty corner under a table. He came out covered in Cobwebs!! He looked like a perfect Halloween kitty. He even scared our big Bernese Mountain dog.

Cats Pictures of the Week are selected from general photo submittals. Blackberry's photo was submitted for the October Black Cats gallery, and is included in the Black Cats Gallery 2, new for 2007. This photo really tickled my funnybone, and it seemed purrfect for a Cat of the Week honor during Halloween - Black Cat month. If Blackberry ever grows into those feet and ears, he's sure to be a big boy!

You may send your own photo, using the guidelines on the Photo Submittal page. Be sure to write a paragraph or so about the cat, at least two or three sentences. Our galleries have space for 2500 characters, including spaces, so don't be afraid to tell us about your cat within those parameters.

This story, which borders on the supernatural, really thrummed a string in my heart. I've published other mystic stories about cats, and Misty/Désirée's story ranks right along with them. It seemed fitting that it came to me just before Halloween, and I'd like to share it here. Leigh A. Arrathoon writes:

On June 18, 07, my gray and white DSH tuxedo cat, Mistigris, died. She had had diabetes, CRF, and strokes for two years. I treated her with vitamins so that all the symptoms disappeared, but she eventually died, surrounded by me and her loving cat family, when her heart slowed to a stop. She passed away in my arms, smiling.
Two days later, she burst into my consciousness, all upset. She wanted to come back. I told her that would be extremely difficult, that in fact I had no idea how to go about it. She stubbornly insisted and was conceived the following day (at least, so she led me to understand). During one visit (and I couldn't see her; I just knew she was there), I told her that I wanted to call her something different. When I suggested Désirée, her aura turned bright gold, like the sun, if you got close enough to see the flames. Her spirit filled me with perfect joy.

After looking everywhere for Misty/Désirée, on October 13th, I went to my local Petsmart and there she was - a tiny gray and white kitten. As soon as she saw me, she clung to me. We sat together for an hour and a half.


While some cat owners may view cats' propensity to shred toilet paper roll after roll as a behavioral problem, others look at it as an amusing example of cats' quirkiness. Enjoy this YouTube video by kridley66 of her cat Stinky methodically shredding a roll of toilet paper. What about you? Do you think cats' fascination with toilet paper is funny to look at, or does it just represent a bad habit to be broken? Post your comments here.

My Cat Ran Away
In all likelihood, the owner of indoor-outdoor cats will eventually face the sorrow of having a cat turn up missing. However, the chances are (for good or bad) that your cat did not run away. Cats are very territorial (even the neutered ones) and will defend their territory at all costs, and if driven out by another alpha cat who is bigger and meaner, will seek safety indoors (if allowed that option) before running off. The truth is that it's more likely that a cat has been unwillingly removed from the area, injured, or killed.

In order to find your cat, you need to consider the possible reasons for his absence, many of them distressful. However, this is the time to set aside emotions and to rationally evaluate the possibilities, with an appropriate action for each.



WALL OF TEXT CRITS NEVI 9999999999999999999999999921
 
...like...uhh...this thread could turn out really badly...oh wait...it already has...SPAM...I mean...like...uhh...like...SPAM...stupid 30 second rule...LOL...STORM THE GATES OF HELL!!!...uhh...xtsite.com...uhh...maybe...CRAZY!!!...Play...more...hamburger ...cheese...WoW...
 
actually its not simply cut and paste! i wrote 3 of the articles in there and my dad wrote the other 2 so boya!
 