As I'm sure most people have noticed or heard, the Internet was hit hard last week by a worm known as the 'SQL Slammer', named for its target, Microsoft SQL Server 2000. Most home users felt little effect, other than perhaps a certain sluggishness on the 'net, but corporate users were blasted by it -- primarily because only corporate users actually run the software in question.

The worm shut down corporate networks because an infected server immediately devotes all its time to sending copies of the worm to randomly chosen IP addresses, using GetTickCount as its source of randomness (an IP address is a 32-bit number, and the number of milliseconds the computer has been turned on is also stored as a 32-bit number, so the worm just slammed a timer into the network stack to choose a target -- a crude but effective way of generating random targets). Computers that run Microsoft SQL Server 2000 tend to be enterprise database servers right in the heart of corporate datacenters -- really powerful machines, with really wide connections to the network, making them the perfect worm-spreaders. The scary thing is that for all the damage this worm did, it was fundamentally benign -- all it did was spread itself around; it did not actually try to do any damage. Imagine what could have happened if the worm had carried a "payload" -- if it, say, deleted or subtly corrupted all the data in infected servers.
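
To make the target-selection trick concrete, here is a minimal sketch in C of the idea described above -- reinterpreting a 32-bit millisecond timer as the four octets of an IPv4 address. This is only an illustration of the technique, not the worm's actual routine.

    /* Illustrative sketch only: treat the 32-bit GetTickCount() value
       (milliseconds since boot) as a 32-bit IPv4 address, as described
       above. Compiles on Windows; prints one "target" and exits. */
    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        DWORD ticks = GetTickCount();   /* 32-bit millisecond timer */
        unsigned char *b = (unsigned char *)&ticks;

        /* The same 32 bits, read back as four address octets. */
        printf("would-be target: %u.%u.%u.%u\n", b[0], b[1], b[2], b[3]);
        return 0;
    }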

Microsoft, understandably, has gotten a lot of flak for the security vulnerability that made this worm possible (the worm spreads by sending a single malformed 376-byte UDP packet to the SQL Server Resolution Service port. Due to a bug in the server -- a buffer overrun -- the specially-formed packet is actually executed as code rather than just used as data, causing the newly infected server to begin spreading the worm as well). However, Microsoft's response is that they fixed the vulnerability in question six months ago and released the fix as a "critical" security patch that all administrators should install. What's more, the vulnerability is also fixed in SQL Server 2000 Service Pack 3, which has been out for more than a month. Thus, the only people affected were those running an out-of-date version of a Microsoft product who also failed to install critical security patches that any competent administrator would know about. Unfortunately, that group included most SQL Server installations. Basically, Microsoft asks, "What more can we do?"
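
For readers unfamiliar with the bug class, the sketch below shows the general shape of a stack buffer overrun in C: data copied into a fixed-size buffer with no length check, so an oversized packet can overwrite the return address and get the attacker's bytes executed. It is a deliberately simplified, hypothetical example, not SQL Server's actual code.

    /* Hypothetical illustration of a stack buffer overrun,
       not SQL Server's actual code. */
    #include <string.h>

    void handle_packet(const char *packet, size_t len)
    {
        char name[16];              /* fixed-size buffer on the stack */

        /* BUG: no check that len fits in the buffer. A packet longer
           than 16 bytes overruns 'name', clobbers the saved return
           address, and a carefully crafted packet ends up being run
           as code instead of being treated as data. */
        memcpy(name, packet, len);

        /* ... parse and use name ... */
    }

    int main(void)
    {
        const char small[4] = "ok";
        handle_packet(small, sizeof small);  /* safe call; an attacker
                                                would send far more than
                                                16 bytes */
        return 0;
    }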

Some news stories of interest here (which will be referenced in the rest of this post):

Tom's Hardware says rumors indicate even Microsoft was hit
David Litchfield reconsiders if he should have even published the vulnerability when he discovered it
Analysts say Microsoft Trustworthy Computing has failed, with quotes from Bruce Schneier


Basically, with this worm, what has failed is not the security of a specific piece of software (Microsoft SQL Server 2000), though that did, in fact, fail in this case. What failed here is the computer security system, the whole infrastructure that keeps the Internet running. Most people don't realize how common horrible security flaws in software are. I used to subscribe to BugTraq, a daily mailing list for computer security. Basically, every day there would be one or two posts of an exploit -- a short piece of code that, when run, will give you complete and total access to any machine running some piece of software X. Then, the next day, the vendor for X would post a patch to fix it. Most of these vulnerabilities were in UNIX, or in some piece of UNIX-based software (indeed, almost all of them were -- but there's another list called NtBugTraq in which almost all the posted vulnerabilities are in Windows). So even UNIX, which is generally regarded as pretty secure, has several "if you do this, you control the system"-class security failures every week. The general process goes like this:

1.) Security researcher or independent hacker finds a bug. They write an exploit to demonstrate it.
2.) Exploit is sent to the vendor to give them a reasonable time to fix it (customarily 3-7 days).
3.) Exploit is posted to BugTraq and other security forums.
4.) Vendor posts a patch to their software that protects from the exploit on their website, on security forums, and anywhere else they can put it up. If it's really bad, security organizations like CERT release an advisory to make sure everyone knows about it. The advisory contains only the patch -- not the exploit, which never leaves the computer security community.
5.) Sysadmins that run this piece of software see the advisory and download and install the patch.

Now, in the SQL Slammer worm case, here's what happened instead:

1.) Security researcher finds a bug, and writes an exploit to demonstrate it.
2.) Exploit is sent to the vendor (Microsoft), who promptly creates a patch for it.
3.) Exploit is posted in security forums, where a malicious hacker sees it and writes a worm based on it.
4.) Microsoft posts the patch everywhere, sends it out as a critical software update, gets a CERT advisory for it, everyone is made aware of it.
5.) Sysadmins promptly ignore the advisory, don't install the patch, and fail to upgrade the product when a new service pack is released.
6.) Hacker from #3 releases his worm into the wild, and everything falls apart.

The failure here is in #3 and #5. What can be done differently to prevent this?

Well, for #3, in one of the articles above Litchfield wonders whether he should have posted the exploit to begin with. Chances are, the worm writer would never have figured out how to do this without a researcher like Litchfield having done all the hard work for him already. However, having subscribed to BugTraq for quite a while, I have to say that this isn't practical at all. Now, it's certainly true that, with Microsoft, had Litchfield sent them the exploit and never posted it publicly, Microsoft would still have fixed the bug, released a patch, gone through the advisory procedure, etc. Contrary to popular belief, Microsoft takes security bugs very seriously and responds to them with blinding speed when made aware of them. However, this is not universal to all companies. Many times on BugTraq, an exploit would be posted with "I sent this to the vendor a week ago, but they haven't responded to me and haven't released a patch and basically ignored me. So here's the exploit." Then, 1-2 days later, the company releases the patch. In other words, with many companies, if the security researchers don't very formally and publicly tell the world how to exploit a security vulnerability, it is regarded as "acceptable risk" and never fixed at all. The companies just think, "Oh, nobody will ever figure that out, we'll just fix it in the next version." Hotfixes (patching a product that's already released) are both expensive and embarrassing -- they cost time and money to develop, test, and release (nothing's worse than releasing a fix that breaks people's software, so they have to be tested very thoroughly), and they involve shouting to the world "Look, we have a bug in our software!", which no company wants to do. Without the malicious hackers to "enforce" the requirement that companies patch their software, some companies simply will not do it.

So, really, the only place that a breakdown in the process can be fixed is at step #5 -- Sysadmins promptly ignore the advisory, don't install the patch, and fail to upgrade the product when a new service pack is released. How can Microsoft fix this? There are three basic approaches, and all of them have some rather serious problems.

------------------

1. Just release the product without any security bugs in the first place.

This is a pipe dream. It will never happen. Keep in mind that UNIX has been out for more than twenty years and new security bugs are still found every couple of weeks. It is very hard to write a 1,000-line program with no bugs in it. It is impossible to write an 11-million-line program like Windows with no bugs in it, especially when, for programs of that scale, the designers, developers, and testers are all different people with different skill sets. There is no one who actually understands the entire program. The real problem here is that security is, and always has been, a trade-off.

The traditional security trade-off is security vs. usability. The more secure a product is, the less usable it is. A good example of this is with passwords. Requiring no login or password is very usable -- you just fire up the program and use it. Requiring a login and password makes using the software less convenient. You need someone to administer it, in case of lost passwords. You can't have a temp just sit down and use it -- you need to get them an account. All the users have to remember logins and passwords. It takes time to log in. These are all trade-offs -- but in return for this, it is a lot more secure. It's now possible to prevent some unauthorized tampering, and to tell who did the tampering in the case of employee malfeasance. But passwords can be guessed or cracked. It's more secure if you require passwords to be 15 characters long... and include numbers and symbols... and never repeat any previous passwords... and be changed every two days... and not include any dictionary words, names, dates, or repeated characters. Unfortunately, that last regime, requiring everyone to have passwords like "Hw79%#PKxzUi7@3", is so hopelessly unusable that it will actually make the system less secure. People will forget their passwords all the time. Rather than going to IT, they'll just ask the person in the next cubicle for their password. People will get so used to forgotten passwords that they'll happily give one to whoever asks. Besides, they'll have them written on Post-It notes stuck to their monitors anyway. This sort of security is appropriate in a place like the CIA, where all the users are conscious of security and proactively doing their part to further it, but in the average workplace it would be a disaster.
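
Just to make that last regime concrete, here is a rough sketch of what such a password rule might look like in code. The specific checks (15 characters, digits and symbols, no repeated characters) are taken from the example above; the function itself is hypothetical, and the dictionary-word, name, date, and password-history checks are left out for brevity.

    /* Hypothetical validator for the draconian policy described above.
       Omits dictionary-word, name, date, and password-history checks. */
    #include <ctype.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    bool password_acceptable(const char *pw)
    {
        size_t len = strlen(pw);
        bool has_digit = false, has_symbol = false;

        if (len < 15)
            return false;                      /* too short */
        for (size_t i = 0; i < len; i++) {
            unsigned char c = (unsigned char)pw[i];
            if (isdigit(c))
                has_digit = true;
            else if (!isalpha(c))
                has_symbol = true;             /* not a letter or digit */
            if (i > 0 && pw[i] == pw[i - 1])
                return false;                  /* no repeated characters */
        }
        return has_digit && has_symbol;
    }

    int main(void)
    {
        printf("%d\n", password_acceptable("Hw79%#PKxzUi7@3"));  /* 1: passes */
        printf("%d\n", password_acceptable("password"));         /* 0: fails  */
        return 0;
    }

Whether anyone could actually remember a password that passes this check is, of course, exactly the point.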

Security can also be a trade-off with features, cost, and release time. Security is not so important as to outweigh all other concerns -- just as human life is not so valuable as to be protected "at all costs" (if it were, cars would be illegal, because they're dangerous. Going outside is dangerous, too. So is eating solid food.) It is important, but there is always a point where it is too costly. People demand new features in software, and will not buy the software without them. This means new code must always be written, which is inherently risky as new code can never be tested as thoroughly as old code (and remember that UNIX's 20-year-old code still has a few bugs left). People demand the software actually be released -- doing an extra 10 years of security testing before each software release would be absurd. People must be able to afford the software -- imagine how much software would have to cost to be profitable if you had to do 10 years of security testing on it. It's quite possible to argue about how much time, money, and usability should be given up for security -- but it's not possible to make any product totally secure, or completely free of bugs.

In addition, people don't realize that security must be security from something. The above password regime is totally insecure against a malicious system administrator. I've yet to see a deadbolt lock that protects from tanks or a battering ram, or a burglar alarm that detects tunneling through a wall or up through the floor. Accounting safeguards are, in general, of no help whatsoever against a conspiracy by the CEO, controller, and auditors to defraud the company. Safety deposit boxes are of no help against thermonuclear weapons. These are exaggerated examples, but I'm sure you see the point -- you must choose, when designing security, what to be secure from and what not to be -- thus, total security is unachievable.

2. Force users to update.

This is essentially the cry of, "Microsoft, protect me from myself!". However, this cry has gone up before, and Microsoft has responded. A few years back, the big security threat was email viruses. People would get an email with an attachment, and they'd run the attachment, and it would infect their computer, look in their Outlook/Outlook Express address book, and send itself to everyone they know. The process would repeat. People demanded Microsoft fix the problem (because they knew there would always be morons who opened every attachment that came to them, so user-education was an impossible solution).

First, Microsoft made Outlook show a warning when you tried to run an attachment, saying that the attachment could be harmful and that you should only run it if you trust the sender. But of course the virus came from someone you trust (after all, it found your address in someone else's address book), so people just clicked "Yes" and ran the virus. So then they made an option to allow running attachments, and it defaulted to off, so unless you went into your Outlook options and enabled running attachments, you couldn't run them. People either changed the option, or saved the attachment, found it on their hard drive, and ran it from there. So Microsoft removed the option, and made it so you couldn't run attachments out of email -- you had to save them first if you wanted to run them, and even then you got the warning about viruses. So people got a virus in the mail, saved it, ignored the warning, ran it, and spread the virus anyway. So Microsoft added a warning when the virus tried to spread, saying something along the lines of "Another application is trying to access your Address Book. If you did not request this, it may be a virus. Do you want to allow access?" Unbelievably, people still clicked Yes. Finally, Microsoft took the tactic that is still in Outlook today -- if an attachment is any potentially harmful type (i.e. if there's any possible way that somebody could in theory put a virus in it), you can't download it, save it, run it, or anything else. It is impossible. This is so unusable that I know people who stopped using Outlook and switched to competing products to get away from this. But it's the only thing that stopped -- or even significantly slowed -- email viruses. Keep in mind that through all of this, there were no bugs fixed* -- because none of these problems were caused by bugs. They were caused by people choosing to specifically execute virus code!

(* in point of fact, there were, as some viruses exploited an Outlook Express bug to run themselves on receive. However, these were a tiny minority and irrelevant to the discussion in question)

You see the problem -- Microsoft did finally protect people from themselves, and the resulting change really pisses off advanced users (like me) who know better than to run a virus, and who get really sick of constantly having to reply to email with "Crap, your file didn't come through... can you rename it so Outlook won't throw it away?".

In the case of the SQL Slammer worm, the problem people want Microsoft to protect them from is sysadmins that don't install patches. Once again, the same pattern applies. First, they made Windows Update, a single site you can go to and get all the security fixes your computer needs for all known Windows bugs. But people don't always go there, so they made Critical Update Notification starting with Windows 98. With this enabled, you get a popup window telling you to go to Windows Update and get the patches whenever new security issues are discovered -- you don't need to be subscribed to BugTraq or something similarly obscure, your computer outright tells you when you need a patch. But people would ignore the notification, or decide the download times were too long, or turn off Critical Update Notification. So with Windows XP, Microsoft introduced Automatic Updating. This downloads the patches in the background, and only when they're all downloaded pops up with a message that says "New Critical Updates have been downloaded for your computer. Do you want to install them now?" And it will keep bugging you until you either say Yes, or turn off the Automatic Updating service (which some people still do).

Now, admittedly, I don't know that SQL Server 2000 patches are included in Windows Update, since they're not part of Windows (some software is on there, some is not -- for instance, a critical Office update won't show up, but a critical IIS (web server) update will show up). If they're not, that is one improvement Microsoft could make -- all Microsoft products should be included in Windows Update/Automatic Updating. However, all the time there are worms/viruses/etc. exploiting things that Microsoft has fixed and had as an update for months... and people just haven't installed the updates.

There is only one way left for Microsoft to protect people from sysadmins who don't download and install updates. That is to force the updates on all machines. When a critical update becomes available, if you're running Windows, your computer will download and install the update whether you bloody well want it to or not. While for the vast majority of people this would be a great benefit, there would be a small cadre of people, privacy advocates, cypherpunks, etc., who would despise this. After all, this means Microsoft can run whatever it wants on your computer any time it wants, and you can't stop it! Imagine the news stories this would create. "Microsoft takes over all desktop computers" and such things. It's easy for me to see why Microsoft has not implemented something like this... but other than this, what can they do to force people to update? Any reasonably competent sysadmin has easy access to all the updates already -- nothing short of forcing them on everyone will help.

3. Use a cryptographically trusted platform.

This last idea is the basis for the Palladium initiative that has been in the news of late (and the news is the only place I've heard anything about it... I'm not involved with Palladium in any way). Basically, the idea of Palladium is to have hardware designed with a cryptographic signature mechanism built into the chips that verifies signatures on code before running it. To be secure at all, it has to be in hardware -- there's no way to do this in software. Then, whenever you write any code, you have to sign it (that is, run it through a cryptographic algorithm that generates a numeric signature). Whenever anyone with a Palladium-equipped machine and OS runs your code, it will pop up a dialog: "This code is signed by Bob's Garage Computer Software. [link to home page] [link to certification authority]. Do you wish to trust Bob's Garage Computer Software with the following permissions on your computer?" [list of permissions, such as "permission to execute", "permission to read from and write to hard disk", "permission to access the Internet", etc.] If you say yes, you can run Bob's Garage Computer Software from then on with no problems... if you say no, then nothing from that company can ever harm you, because your CPU will refuse to execute any of their code. Presumably a Microsoft OS would come to you already trusting Microsoft, but would require you to choose whether or not to trust any other vendor. This stops most viruses utterly, as it is essentially impossible to fake a cryptographic signature (e.g. to write a virus and have it come up as being from Adobe Systems or Microsoft or something). So unless people trust a virus/worm writer's name or company name, the virus is stopped. In addition, all code must be signed to run at all -- no signature, no permissions.
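
As a rough sketch of the decision flow just described -- not of Palladium's actual design, which I only know from press coverage -- here is what the "check the signature, then check whether the user trusts the publisher" step might look like. Everything here is hypothetical: the trust list and the signature flag are stand-ins for the real hardware-backed cryptography.

    /* Hypothetical sketch of a "verify before execute" check. The
       signature_ok flag stands in for real cryptographic verification,
       which a Palladium-style system would do in hardware. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    struct signed_code {
        const char *publisher;     /* name the code is signed under       */
        bool        signature_ok;  /* stand-in: did the signature verify? */
    };

    /* Publishers this user has already chosen to trust. */
    static const char *trusted[] = { "Microsoft", "Adobe Systems" };

    static bool may_execute(const struct signed_code *code)
    {
        if (!code->signature_ok)
            return false;          /* unsigned or tampered: never runs */
        for (size_t i = 0; i < sizeof trusted / sizeof trusted[0]; i++)
            if (strcmp(code->publisher, trusted[i]) == 0)
                return true;       /* already-trusted publisher */
        return false;              /* unknown publisher: prompt the user */
    }

    int main(void)
    {
        struct signed_code app = { "Bob's Garage Computer Software", true };
        printf("allowed to run: %s\n",
               may_execute(&app) ? "yes" : "not until the user says so");
        return 0;
    }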

This once again gets the privacy advocates, cypherpunks, etc. up in arms. After all, this means that a few Certification Authorities must be trusted absolutely, and essentially get to decide what apps can and can't run on your computer. People railing against Microsoft seem to think that Microsoft would be the certification authority, but this is ridiculous -- it would almost certainly be a neutral third party like VeriSign, and there would presumably be several different authorities to choose from. Contrary to popular belief, this would not kill open-source software -- it would just require that you trust whoever compiled the software. It would not necessarily make open-source software more secure, but neither would it block it. Indeed, if you compile the software yourself, you could just sign it yourself -- of course you'd set your computer to grant permissions to you.

There are two problems with this approach. Whether the first is really a problem depends on who you are. The first problem is that this could lead to impregnable DRM software. It would be possible for record and movie companies to exercise absolute control over content with this -- to make you pay every time you want to play a digital song, for instance. (This would, incidentally, only affect future media -- once something is in MP3 or AVI format, it's cracked, Palladium or no, and this would not change.) People have an obvious dislike for this -- I don't like it myself, to be honest. The second problem with this approach is that it would not have stopped or even slowed the SQL Slammer worm.

In the SQL Slammer worm's case, the worm essentially grafts itself onto SQL Server. It runs in-proc, by redirecting EIP onto the stack. Thus, to a Palladium computer, the worm is SQL Server, which presumably you've given permission to run (after all, it would be pointless to have a database server and not give it permission to run a database). Palladium protects software against tampering -- the worm could not, in a Palladium system, modify programs on your hard drive so as to infect them and come up again after a reboot (so this would, for instance, stop NIMDA). However, it so happens that the SQL Slammer worm doesn't do that anyway -- a reboot totally cleans a machine of all evidence of SQL Slammer. This is the point of the discussion above about security having to be security from something -- Palladium is extremely secure against what it secures against, but the SQL Slammer worm is simply outside of that domain.

------------------

So my real question is, what can Microsoft do? How could Microsoft have prevented this from happening, beyond the many preventative measures they've already taken? This is a serious question -- I can't think of anything, beyond the certain-to-be-hopelessly-unpopular update-pushing method I discussed, that would stop this sort of attack. I'll be happy to discuss this, though I of course cannot discuss anything that requires proprietary Microsoft knowledge (that is, I can't say anything about what Microsoft is doing to secure its software other than things that have already been published in the press, which is the source of everything above).