Wednesday, October 14, 2009

New blog available

Electr0n has set up a new blog for the ##security channel on Freenode, and has asked me to help with some content. I just posted there about Pastebin hacking in light of the recent Hotmail password fiasco.

Check it out.

Monday, August 17, 2009

Career "Advancement"

About 2 years back I left my job in an InfoSec group. That particular position wasn't the right fit for me anymore, and somehow I didn't think I would be a good fit for a security role in most other organizations. I don't have the pen testing experience needed for most security companies, and the thought of maintaining firewall rules at some retail house would bore the crap out of me.

Since then I've been struggling to find myself career-wise. I spent some time in an IT role, and I'm now trying a more customer-facing role. Still, I find myself happiest in ##security on Freenode, answering people's security questions.

So, what now? Maybe someday I'll figure that out.

Thursday, March 05, 2009

The Wrong Tool for the Job

These days anti-virus and anti-spam are two crucial components of a well-run e-mail system. Due to how often spammers change their techniques, my company outsources this function to a vendor that provides both services. Both services are well designed and work fairly well.

Anti-Virus

For anti-virus, they seem to run messages through multiple commercial anti-virus scanners on their servers. Messages that trigger a positive are quarantined, and a notification is sent to the site admin and/or the intended recipient of the message explaining what happened.

The site admin can report false positives to the vendor, who will investigate and release a message if they can confirm that it was in fact a false positive. They also take action to reduce future false positives based on what they find. These investigations tend to take 24 hours or so.

Anti-Spam

Spam tends to be a bit more subjective, so false positives tend to be higher than with viruses. Because of this, their anti-spam offering makes it a lot easier to both prevent and deal with these situations.

Spam messages can either be tagged for users to filter on their own, or they can be actively filtered and put into a quarantine on their servers. Unlike quarantined virus mail, quarantined spam can be accessed and released by users directly.

In order to prevent false positives, site admins are able to whitelist domains, e-mail addresses, or the IP addresses of specific mail relays. Whitelisting a whole domain is typically not a great idea in these days of e-mail address spoofing, but whitelisting individual e-mail addresses and relays works fairly well.

Where it falls apart

Sounds good so far, right?

Well, here's where it all goes wrong. It appears that anti-virus vendors have discovered that they can use their scanning engines to pick up certain types of phishing and scam e-mails, essentially adding anti-spam into their anti-virus product.

A phishing or scam mail is SPAM, not a VIRUS. The difference causes a big problem when you get spam-level false positive rates while losing the user's ability to release their own messages and the site admin's ability to implement any sort of whitelisting.

That's when you start getting end-user reports of mail threads with customers going missing. Add in a 24-hour turnaround time for releasing the messages once the problem is discovered, and you start to consider deep-sixing your vendor.

Tuesday, March 03, 2009

Wooo.. Kindle

Being a big reader and a tech gear junkie, I was rather tempted when Amazon announced the Kindle back in 2007. Somehow I managed to hold out on buying one until they announced the 2.0 version in early February. I pre-ordered it right away and got my hands on it just last week.

So far, I like it. It's thinner than I expected and definitely very easy to use. I can hold it in one hand and access most of the controls I need to read a book. The left side has "Previous Page" and "Next Page" buttons, while on the right side the "Previous Page" button is replaced by a "Home" button. Since I tend to read books in one direction, this works fine.

The free built-in wireless is great for getting books and occasionally pulling up text-only web sites. Due to the rather slow refresh on screen changes, using it as a regular web browser is a bit tough.

My biggest complaint is the DRM on files from the Kindle Store. After being bitten by DRM from the iTunes Music Store, I definitely have a bad taste in my mouth over it. Luckily there are other options out there.

The first for me is Many Books. They offer a lot of free content in quite a few e-book formats, including both the native Kindle format and Mobipocket, which the Kindle also supports. They even have a mobile interface that works well from the Kindle itself. Most of the content is older books whose copyrights have expired, but there are occasionally newer titles available, either as sample chapters or as content published under a Creative Commons license.

Next was O'Reilly. Being a big tech book reader, I have a lot of O'Reilly books.

O'Reilly offers a number of their books in DRM-free e-book formats, including the Kindle-supported Mobipocket format. They're not free, but I don't have any objection to paying for content, just to having its usage limited by DRM. They even provide free updates to the books as new revisions are published. I just wish they made it a bit easier to get a list of only the books available in e-book format.

While I definitely like the Kindle, the only thing I'm not sure about at this point is whether it was worth the cost. The Kindle costs $360. Sony's offering is quite a bit cheaper, although I have no idea how it compares feature-wise.

Friday, February 27, 2009

Data Recovery From a Bad Disk

My wife’s laptop drive failed yesterday, leaving her Windows XP laptop unbootable. IT provided her with a new laptop, but had deemed her data lost. While she does do backups of her data to a USB drive, it had been a while since the last backup so she was a bit concerned. And I of course enjoy a new challenge.

From the various articles I've read on data recovery in the past, I knew that the best bet was to make an image of the disk and attempt to recover data off of the image. There's nothing worse than running chkdsk/fsck on a failing partition and having the attempt to fix the filesystem cause additional damage.

So how should I make an image? Being a Unix guy, my first thought was dd. dd lets you copy the complete contents of a partition and write them to a file. Unfortunately, dd has trouble when it hits a block on a disk that is in the process of failing: it can stall retrying the read (or bail out entirely) rather than just moving on to the next block.
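
On a healthy disk, imaging a partition with dd is as simple as something like this (the device and output path here are just for illustration):

  # copy the raw contents of the partition into an image file
  dd if=/dev/sda1 of=/storage/sda1.img bs=64K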

A quick Google search brought me to ddrescue, which was designed to deal with this very issue.

The next step was to figure out the best way to actually access the data on the disk. My first thought was to just pull out the drive and hook it up to my desktop machine; I have an adapter that lets me plug a laptop drive into a standard desktop IDE cable. I soon discovered that the system was using a SATA drive, and I didn't have the correct cabling to hook a laptop SATA disk to my desktop, so that plan was shot.

Next thought: a Linux live CD. Unfortunately this was a ThinkPad X60s (a 12" ultra-portable), which doesn't actually have a CD-ROM drive. There are external USB CD-ROM drives for it, but I didn't have one available. That left a USB flash drive.

Now to choose which Linux image to use. I typically use Ubuntu as a live CD, but I wasn't sure whether it includes ddrescue. I was also concerned that Ubuntu might try to auto-mount the bad disk, potentially making the problem worse. After a bit of searching I came across System Rescue CD. It's simple, console-only, and includes ddrescue. Even better, it includes instructions for putting it on a USB disk.

I downloaded the ISO and followed their instructions, and no luck: the USB drive wouldn't boot. I think their instructions could use some work. A quick download of uNetbootin and I was on my way. uNetbootin is a generic tool for turning a Linux live CD into a bootable USB drive; I found it a few months ago while trying to install Ubuntu on my eeePC. One more reboot and I was good to go.

So now I had the necessary tools to make an image of the bad disk. I just needed a place to store the disk image. It's a 60GB drive, so there's a bit of a storage need here. I didn't have a large enough USB drive on hand, so I needed something network-enabled. As it turns out, System Rescue CD includes sshfs support, allowing me to mount part of my desktop machine's filesystem remotely. Awesome.
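
Getting that mount in place looked something like this (the hostname and paths are made up for illustration):

  # mount a directory from my desktop over SSH to hold the disk image
  mkdir -p /mnt/desktop
  sshfs user@desktop:/storage /mnt/desktop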

Running ddrescue was easy: just dd_rescue /dev/sda1 /mnt/desktop/sda1.img. A few hours later, the data was ready to be accessed. It even reported any bad blocks found on the disk. There turned out to be 120 errored reads, all clumped together on the disk. Based on the initial Windows boot errors, that part of the disk seemed to hold OS components. A good sign for her data.
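
For reference, if you end up with GNU ddrescue rather than dd_rescue, the equivalent invocation with a log file (so an interrupted copy can be resumed and the bad areas get recorded) looks something like this; the paths are again illustrative:

  # image the failing partition onto the sshfs mount, tracking progress in a log file
  ddrescue /dev/sda1 /mnt/desktop/sda1.img /mnt/desktop/sda1.log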

Now I had an image of a corrupt NTFS partition. I used the ntfsfix tool from the ntfsprogs package on Ubuntu to repair the image. Any data from the bad sectors of the disk is gone, but the partition can now be mounted in order to read the rest of the data.
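
That step was just the following (the image name is whatever you called it when imaging):

  # clear the dirty flag and fix basic inconsistencies so the image will mount
  ntfsfix sda1.img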

A quick mount with mount -t ntfs-3g image /mnt, and there the data was. It looks like all of her important files were fine. I got to show up the IT folks and earn myself some nice brownie points. Perhaps I'll redeem them for actual brownies.

Thursday, February 19, 2009

Information Gathering Using SSH Public keys

I've been a pretty heavy user of SSH for the past 10 years or so. In that time I've learned a number of tricks, including port forwarding in various directions, forwarding SSH agents (and the associated risks), and various key management techniques for providing key-based authentication to large numbers of systems.

The most interesting trick I've learned with SSH is one I haven't really seen talked about much. A former co-worker pointed out that this was feasible with protocol 1 and a hacked-up SSH client, but these days it works trivially with both protocol 1 and 2 using the normal OpenSSH client.

The Trick

  1. Generate an RSA SSH key, and delete the private half. The passphrase does not matter since we won't be using the private key at all. ssh-keygen -t rsa -f test -N "" && rm test

  2. Take the public key file (test.pub), and copy it to the authorized_keys file of a remote system.

  3. Set mode 600 on the public key. chmod 600 test.pub

  4. Try to log into the remote system using the public half of the SSH key. ssh -2 -i test.pub user@server

Assuming all went according to plan, you should be prompted with Enter passphrase for key 'test.pub':. Since this is the public half of a key, no passphrase will ever succeed. You do, however, know that the private half of this key would have allowed you to log in.

In case you're curious, the reason for the chmod 600 is that the SSH client attempts to enforce good permissions for private keys by refusing to use a "private" key with open permissions. Since you're essentially tricking the client into treating a public key as a private key, the same rules apply.

So What?

This trick allows you to do two things:

The first is that it allows you to identify which servers a user has access to. If you have a copy of a person's public key (which typically isn't protected, since it's PUBLIC), you can determine which servers they have access to by attempting to log in as root, their username, or any other account using that public key.

The second piece is a bit more interesting. If your company has a central key repository which is available to all employees, it becomes very easy to test all keys against a specific server in order to determine who has a private key which has access to the system.
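
A rough sketch of that second case, assuming the repository is just a directory of *.pub files (the repository path and target host are hypothetical) and leaning on the "Server accepts key" debug message from the OpenSSH client, so treat it as a starting point rather than a polished tool:

  # try every public key in the repository against a target account
  for key in /repo/keys/*.pub; do
      chmod 600 "$key"
      if ssh -v -o BatchMode=yes -o PasswordAuthentication=no -i "$key" \
          root@target.example.com true 2>&1 | grep -q "Server accepts key"; then
          echo "server accepts: $key"
      fi
  done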

In the past I've used this functionality at work to determine who could still log into a system which had been down for a considerable amount of time (and had missed some key rotations). A hacker could instead use it to figure out whose private SSH key they need to steal in order to gain access to the targeted system.

Why it works

The reason this works can be understood by looking at the Public Key Authentication Method of the SSH protocol.

Among other bits of data, the SSH client sends a copy of the public SSH key to the server as part of the authentication process. The server then responds with an SSH_MSG_USERAUTH_FAILURE or an SSH_MSG_USERAUTH_PK_OK message. At that point you know whether access would be granted with the corresponding private key, but you have not needed to use that private key in any way.

This explains why only the public key is needed during the authentication step, but not necessarily why the SSH client makes this so easy for us. I suppose it's probably just a quirk of how their key-parsing code works.

They could change the client to refuse to attempt private-key operations without an actual private key, but really that would just add a small hurdle to exploiting this small weakness in the protocol. At the end of the day, you're still only as safe as the protections you put in place on your private keys.

Wednesday, January 28, 2009

Mac OS DNS bug

I had a bit of an interesting experience the other day while attempting to fail over our Jabber server from our production site to the DR site.

Our two servers each have their own A record in DNS with a TTL of 3600 seconds (1 hour). The long TTL is fine since the IP address of the actual server never really changes.

Access to the service is instead provided by a CNAME record which points to one of those two hostnames. The TTL of the CNAME record is 60 seconds, allowing us to quickly fail over between the two sites as needed.
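
In zone-file terms, the setup looks roughly like this (the hostnames match the cache dump further down; the second server name and the addresses are made up):

  server1.domain.fake.   3600  IN  A      192.0.2.10
  server2.domain.fake.   3600  IN  A      192.0.2.20
  openfire.domain.fake.    60  IN  CNAME  server1.domain.fake.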

So the time came and I had to perform a failover. I updated the CNAME, and in order to prevent users from being unable to connect, I waited 60 seconds before shutting down the old server and starting up the new one.

From there things went bad. I tried to access the admin console, and failed. I tried to log into the Jabber server, and failed. Finally I hit the admin console through the A record instead of the CNAME, and found that other users had seamlessly failed over.

After a bit of testing I determined that my Linux box and my Windows box both worked fine. The only problem was the Mac that I was making the change from. For some reason, the Mac was holding on to the old IP address.

After some testing, and confirmation from other individuals on their Macs, I think I know what was going on. Using dscacheutil -cachedump -entries, I inspected the local resolver cache.

Here's what I found:

Category Best Before Last Access Hits Refs TTL Neg DS Node
---------- ------------------ ------------------ -------- ------ -------- ----- ---------
Host 01/28/09 21:07:02 01/28/09 20:18:35 10 4 3600
Key: h_aliases:openfire.domain.fake. ipv4:1
Key: h_aliases:openfire.domain.fake ipv4:1
Key: h_name:server1.domain.fake ipv4:1

This appears to show that the local resolver cached the server1.domain.fake DNS record and set its expiration ("Best Before") to 01/28/09 21:07:02 based on the 3600-second TTL. openfire.domain.fake was then stored as an alias of that record without retaining its own 60-second TTL. This would certainly explain the behavior that I saw.

So it seems that Mac OS X may be incompatible with a fairly common DNS failover technique. I filed a bug, so it'll be interesting to see how long it takes before Apple gets around to fixing it.

Monday, January 12, 2009

Home Media Server

I'm a bit of a media buff. I own several hundred DVDs and, I'm guessing, well over 1,000 CDs.

Like the rest of the known world, I solved the CD problem years ago. I ripped all of my CDs, and I store them in a few places, including my iPod and my home media server. Early on I ripped to Ogg format, but I quickly regretted it when I bought a Phat Box media player for my car. By the time they supported Ogg, I had moved on to an iPod.

Eventually I started buying music from the iTunes Music Store since it was so much easier than CDs, and I continued until I discovered the Amazon MP3 Store. Buying in mp3 format instead of AAC is so much easier to deal with, not to mention the lack of DRM.

Recently I read about Sockso. Sockso provides me with a simple web interface for streaming music off of my server so I can listen from work without having everything on local disk. Considering that my music collection is 58GB these days, it certainly saves me some space. Unfortunately Sockso does not support AAC format at this point, so I'm kind of out of luck on my iTunes media (even the non-DRM files).

Recently I started trying to tackle the DVD issue as well. I have a DVD changer, but it's just kind of clumsy. It attempts to detect the names of movies from the discs, but rarely succeeds. You can attach a PS/2 keyboard and type them in manually, but I eventually had to move the player, which required removing the discs (and losing the data I'd entered).

So I thought I'd apply the same techniques to my movies. I used HandBrake to rip a number of movies and copy them onto my server as well. From there I can copy them into iTunes to watch on my computer or iPod, or transfer them to my AppleTV (I apparently buy too much Apple gear).
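
For the rips themselves, the command-line version of HandBrake boils down to something like this (the device path, output path, and preset name are just examples and will vary by version and setup):

  # rip a DVD into an AppleTV-friendly MP4 using a built-in preset
  HandBrakeCLI -i /dev/dvd -o /media/movies/SomeMovie.mp4 --preset "AppleTV"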

Most of my media is on my file server, which runs Linux, so I wanted to see if I could get away without running iTunes. My first option was pyTivo. It's an interesting project: a Python script that you point at your movie collection. It performs the necessary UDP broadcasts to announce your movie share on the network for your TiVo to see, and then converts each movie on demand to a format the TiVo can display properly. pyTivo works pretty well, but the code is in flux, and I'm not sure how much I want to trust it.

My latest try was XBMC. It's a rather nice media player that was created for the original Xbox. These days it has also been ported to run on Windows, Linux, the Mac, and the AppleTV. It can easily be installed on the AppleTV using ATV USB Creator.

XBMC can receive a stream from a Universal Plug and Play (UPnP) media server. In my case I used MediaTomb, since it was available straight from Ubuntu. I'm not sure I'd recommend it due to the lack of access control; for now I'm fine just running it bound to my local network.

I'm not sure I'm really happy with how all of this is working, but it's still a work in progress.