fishsupreme wrote, 2005-02-18 11:38 pm
RSA Conference, Day 3 (updated)
And now we return for another exciting episode of security conference updates. This time we've got a short time-lag; I'll be writing about yesterday's sessions rather than today's, since I didn't get a chance to do a write-up yesterday.
I began the day with another 6:50 alarm, and once again skipped hotel breakfast in favor of eating at the conference. I had a cold bagel and cream cheese. Why does anywhere serve bagels cold? They're so much better toasted! Who doesn't like a nice, toasty warm bagel? But I digress.
The first session was a panel on DRM. The panelists consisted of two DRM makers (Crypto Research and RSA Security) and two media companies (Warner Bros. and Fox.) Which is to say, watching the panel was like watching the choir preach to itself. The only opposing viewpoint came from the audience's tendency to applaud at precisely the wrong times, on purpose. The Warner Bros. representative engaged in a bit of historical revisionism about DeCSS (claiming it appeared on Windows before Linux, and that there were Linux DVD player apps before DeCSS was written.) There was some spectacular ignorance, some of it willful -- they all seemed to accept the (incorrect) axiom that CSS is a form of copy protection for DVDs (it's not -- if what you want to do is copy discs, you don't have any reason to break CSS), and save for the Crypto Research guy, none of them even seemed to comprehend the trusted client problem (the fact that if a person can play a disc, they can copy it, and there's absolutely nothing you can do to change that fundamental fact.) One interesting thing was that they agreed that DRM schemes must contain some provision that allows them to be relaxed if the initial settings and policies turn out to be too draconian, and they went so far as to suggest upgradeable firmware for consumer electronics devices. This, of course, would be heaven for device hackers, who would produce "custom" firmware in a heartbeat. Also, we learned that the HDTV Broadcast Flag is Andy Setos's fault (he was one of the panelists.) It will, of course, never work, but in the world of DRM, when has that ever stopped anyone? Also, Setos claimed that the media companies have no desire to interfere with legitimate home use of media that doesn't involve transference -- a claim that seems to be contradicted by the DVD-CCA's current lawsuit against Kaleidescape (a company that makes hard-drive-based DVD jukebox appliances that have no transference capabilities.)
All in all, it seems DRM makers and media companies are continuing their proud tradition of stumbling around in the dark while claiming they can see perfectly.
The second session was a keynote by a VP of Sun Microsystems. This was actually a good, interesting presentation. He started out with an interesting observation -- we don't put brakes on cars because we want to stop, we put brakes on cars so we can drive faster. If we just wanted to stop, sticking a stick out the door would be sufficient so long as you never went above 2-3 mph. Security is not a feature, it's an enabler -- we want security not for its own sake, but to facilitate the other things we want to do, and we need to present and sell it that way. He thinks levels of authority need to be commensurate with levels of authentication -- imagine a network where you can do some things (e.g. email) with just a password, and more if you have a smart card as well as a password, that sort of thing. Bank websites where a password will check balances, but a smart card or one-time password token is needed to actually move money around. He wants to remove people from security processes, and automate more -- pretty much the opposite approach from Schneier's. After that he moved to the inevitable sales pitch, but he'd said enough that we didn't mind overmuch. They're putting integrated virtualization and partitioning in Solaris 10 -- centralized management of virtual machines across multiple physical machines in a cluster. It's actually a pretty cool multi-machine hypervisor setup. I think the demo was mostly faked, but I was still impressed, aside from the fact that his software makes repeated use of the non-word "Configurator," which sounds like some kind of bureaucratic superhero (enemy of the Bureauc-rat?) He had an identity management demo that was outright incomprehensible; it made me think they had a somewhat cool product with a user interface worthy of SAP (the reigning champions of crappy UI design.)
There was also a video about automatic payment at vending machines and stores via your mobile phone in Japan; apparently Japanese mobile phones have proximity smart card functionality, which is an interesting idea since everyone carries their mobile phone all the time anyway. Also, Sun is moving to a new license model where they license their entire middleware product line to your company for $140/employee-year regardless of how much actual software you get; it's the kind of licensing agreement that gets Microsoft sued. We'll see how it works for them.
Our third presenter was "crowd favorite" (or so the illustrious conference emcee tells us) Stratton Sclavos of VeriSign. He was going to present on "the Evolution of Strong Authentication." He reiterated a very common theme at this conference, that in today's IT industry innovation is found in integration, not invention. Funny how integration is good as long as anyone but Microsoft is doing it. The short-short version of his speech is that he thinks the Internet needs a federated identity scheme (basically a form of PKI, which was tried 43 times in the 90s and failed every time) using one-time password tokens, which, conveniently, VeriSign makes.
(For those who don't know, one-time password tokens (OTPs for short) are things like RSA SecurID that display a different code every minute; everyone in your organization gets a different one, and to log in they have to use both their password and the current code. Thus, to break their account, you need to have their password as well as their token -- this is called two-factor authentication. The three factors are something you have (like a token), something you know (like a password), and something you are (a biometric, like a fingerprint or retina scan.) Most authentication systems use only one factor; two-factor authentication provides greater security. At work, I know of one door whose lock requires three-factor authentication.)
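(As an aside for the programmers: the general idea behind these rotating codes -- hash a shared secret together with the current time window and truncate to a few digits -- is simple enough to sketch in a few lines. This is a toy illustration only; SecurID's actual algorithm is proprietary and different.)

```python
import hashlib
import hmac
import struct
import time

def toy_otp(secret: bytes, at: float, step: int = 60, digits: int = 6) -> str:
    """Derive a short time-based code from a shared secret.

    Both the token and the server know `secret`; the code changes every
    `step` seconds, so a stolen code is useless a minute later.
    """
    counter = int(at) // step                      # current time window
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # pick 4 bytes from the digest
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Same secret and same minute give the same code; the next minute differs.
now = time.time()
assert toy_otp(b"shared-secret", now) == toy_otp(b"shared-secret", now)
```

The server runs the same computation and compares; an attacker who shoulder-surfs one code learns nothing useful about the next one.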
Federated identity was talked about a lot at this conference, and Bruce Schneier pointed out the big problem with it -- quite simply, people do not want one identification scheme for all systems because we want to provide different information to different entities. There's no reason why your driver's license number couldn't double as your bank account number, all your credit card numbers, etc. For that matter, there's no technical reason your driver's license couldn't function as the key to your house and your car. But most people don't really want it to. A guy from J.P. Morgan Chase (to be discussed in tomorrow's writeup) also pointed out that the financial industry doesn't like the idea of a single point of failure at all, and federated identity schemes have the potential to create this.
Also, in the VeriSign guy's inevitable sales pitch (see a pattern here?), he showed us two demos, one with a person using current technology and one with them using VeriSign's OTP devices (which are not just OTP devices but also encrypted USB keys for transferring data that do a health check on any computer they're plugged into; pretty neat, actually.) However, during his "before" scenario, the person in the scenario actually lost the object he was carrying his important data on. The "after" scenario addressed all the things that went wrong in the "before" scenario except this one, which actually would be made a lot worse -- since without his VeriSign OTP, not only would he be missing his important data, he'd be unable to log into... well, anything. This is such a flagrant flaw in the demo and so easily solved (just don't have the guy lose his files in the "before" scenario and none of us would have even thought about it) that I'm amazed they put this in. It's basically the only thing I remember about the demo and it drew my attention to a flaw in their system -- not a good thing for VeriSign.
The next event was a CSO Panel. We had the CSOs of Oracle (who, unlike the others, is responsible for securing a product rather than an IT infrastructure), Microsoft (where she is responsible for our IT infrastructure; product security belongs to SWI, another team), and two other companies. They basically just discussed, and there were quite a few interesting observations made. Oracle delivers all their patches quarterly just after financial reporting season to avoid interfering with people's book-closing. Someone pointed out that security professionals just think of things -- all things -- differently; we look at a system and think of everything that could go wrong, and most people just don't think that way. They just plug the wireless access point into their office network jack and the idea of other people using it doesn't occur to them. Oracle's CSO said "market failure," so she loses 200 points. She made back a few by pointing out that procurement contracts (i.e. customer demands) are more powerful and flexible than legislation anyway.
The other CSOs were asked what their advice to Microsoft would be. Interestingly, all of them were quite positive about MS. They wanted us to continue taking the notion of developing secure software to heart, share insights on security with customers, and keep driving a security culture. These are all things we're doing, and they all said Microsoft had markedly improved on security in the last two years (since the beginning of Trustworthy Computing and the Secure Windows Initiative.) The Secure Windows Initiative, by the way, is this team in MS that reviews all the products, code, etc. made by other teams and makes sure they're secure; if SWI won't sign off on your design, or code, or whatever, you can't release. If they don't like your design, you get to redesign your product and start over (needless to say, we run designs past them early.) And if there's one known security bug, they won't sign off.
Someone asked the panel what their advice is to people who want to become a CSO. They said to learn to talk about business, to learn about risk management, and to understand business culture and risk tolerance. The CSO's job is not security, but managing risk. The CSO has to be willing to say what people don't want to hear (and hence sometimes gets known as "the Vice-President of 'No.'")
At that point we broke for lunch. Sarah was meeting a friend of hers and I didn't want more Indian food with Himani, so I went off to Quizno's. I had a small Classic Italian, and it was food.
After lunch, I had sessions. The first session was entitled "CyberCop Case Studies," and was three law enforcement officers discussing what IT admins should and shouldn't do if they need to call in law enforcement for an investigation. One of the presenters was quite amusing, as he was a big, muscular, loud, shaven-headed NYPD homicide-detective-turned-computer-crime-investigator. It was like he stepped out of Die Hard and started talking about network logs. All in all, the advice was pretty common sense, and it was quite obvious from all of them that nothing annoys the police like calling them up, asking for help, then refusing to cooperate with their investigation. They don't like it when they have to subpoena the victim of a crime who called them in in order to get the information they need to carry out the investigation. Which I suppose is understandable.
The second session was ostensibly on forensics and digital evidence. Really it was about Sarbanes-Oxley compliance, which everything seems to be these days. The cost of that legislation to the American economy must be unfathomable. Essentially, the CEO is required to certify not only that their company's accounting records are correct, but that their IT infrastructure is secure as well, and if they turn out to be wrong, they go to prison. I suppose this is great for IT security professionals (and we've already seen how great it is for accountants,) but it seems a colossal waste of time and money as well. They did mention in the panel that there are quite a few issues with rules of evidence and procedure, since they're based on the physical world rather than the electronic, so the controls are sometimes inappropriate. There's now a lot more scrutiny on digital evidence -- it's no longer good enough to just show that you have logging functionality and have the judge and jury believe it like an oracle ("The computer says it, so it must be right!") There are no technical controls on electronic documents -- they're equated to paper. But this is a problem because they're so mutable, even back in time (forging timestamps is easy in many systems,) which may cause a backlash against all digital evidence if this sort of tampering happens too often. In addition, legally, a digital document -- the actual source data -- is just a stream of zeroes and ones; the document you see on the screen is just a view of a view of the data. It's an interpretation of the zeroes and ones, and lacks ascertainability. This may become a legal issue in some cases.
The last presentation of the day was also the best. It was entitled "Management that Measures Up: Metrics for Information Risk Reduction and Decision Making." It was given by an uber-geek from CyberTrust, an association of associations that gathers massive amounts (2+ gigs a day) of security information. The presenter's point was that we focus too much on vulnerability and not enough on risk. The best solution for a single computer may be the wrong solution for a network -- patching is an example of this. To make an analogy, the best solution for cholera in a person is antibiotics, rest, and lots of fluids, but this would be a bad prescription for cholera in a third-world country -- the best solution there is to separate the latrines from the drinking water by as much distance as possible and improve hygiene. A community is not cured in the same way as an individual -- for the community, you want to manage risk.
He disagreed with most of the industry's "best practices," asserting that strong passwords do not appreciably reduce risk, nor do encrypted Internet connections. Going wireless actually improves most home users' security, since the additional risk of someone jumping on their network is actually far lower than the risk they've mitigated by being behind a default-deny NAT (which wireless routers are.) Improving an existing countermeasure (e.g. requiring strong passwords rather than just any password) is almost always inferior to adding an additional countermeasure. The reason for this is that risk is multiplicative. Risk = Threat * Vulnerability * Impact, where threat is the rate of attacks, vulnerability is the likelihood that an attack succeeds, and impact is the cost of a successful attack (including intangibles.) Thus, reducing any of these reduces your overall risk.
Because of this multiplication, workarounds are often better than the "right" answer. Being behind a firewall so you can't get a worm may protect you better than patching your systems to be immune to the worm, even though patching is the "correct" solution. To mitigate risk, you have to do one of five things: deter (reduce the threat by making people less likely to attack,) protect (reduce the vulnerability of your systems,) detect (reduce the impact by noticing attacks early,) recover (reduce the impact by lowering the cost of a successful attack,) or transfer (reduce the impact by buying insurance.) Deterrence is not usually possible for a company -- hackers will hack, and there's not much you can do about it other than be a low-profile target.
His point is that mitigations multiply. If you already have passwords, you're mitigating say 70% of the vulnerability. If you strengthen your password policy, you might raise this to 80%. But adding a firewall might mitigate 50% of the vulnerability, too -- and this is the equivalent of going from 70 to 85%. Stack many, many 50% effective things on top of each other, and you get a very high percentage. Adding another layer of low-effectiveness security is often better than raising the effectiveness of an existing one; if one or two of your measures fail, you still have backups. This is just another look at defense in depth.
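The 70-to-85% arithmetic generalizes: if each countermeasure independently blocks some fraction of attacks, what gets through is the product of the leftovers. A quick sketch (assuming, as the speaker did, that the layers are independent):

```python
def combined_coverage(mitigations):
    """Fraction of attacks stopped by independent, stacked countermeasures.

    Each value is the fraction that measure alone would block;
    the residual is the product of what each layer lets through.
    """
    residual = 1.0
    for m in mitigations:
        residual *= (1.0 - m)
    return 1.0 - residual

# Passwords alone block 70%. Add a 50%-effective firewall: 85% total,
# because the firewall stops half of the 30% that got past the passwords.
assert abs(combined_coverage([0.70, 0.50]) - 0.85) < 1e-9

print(combined_coverage([0.5] * 5))   # five mediocre 50% layers -> 0.96875
```

Five mediocre layers stop about 97% of attacks, which is why stacking cheap countermeasures tends to beat perfecting one.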
Another thing he looked at are amplified and dampened countermeasures. Some countermeasures work better than others against specific attacks. Against worms, for instance, patching is dampened -- you need 99% or more of your systems patched before a worm is even appreciably slowed in your organization. Sure, the patched systems are safe -- but three systems with Slammer on them will bring down your entire network, even if the other 50,000 systems are safe. On the other hand, firewalling off network segments is an amplified countermeasure -- even a little bit of it helps immensely. Unfortunately, which countermeasures are amplified and which are dampened varies depending on the attack -- there's no one best countermeasure.
Thus ended Day 3 of the conference, so I walked back to my hotel. I called up Sarah, one of my coworkers, to see if she was interested in dinner; she was not, but redirected me to Greg, who was going to a seafood place called the Tadich Grill at 8:00. So I joined Greg, Himani, and two guys named Dave (one PM, one test) at the Tadich Grill. I took a taxi there, but it only ran $5, so it was quite near my hotel. The Tadich Grill was quite packed, and served quite good seafood -- and a lot of it. I got the grilled petrale sole, and it was good, but I could only actually eat half of it. Afterwards, it was about 9:45, and Greg and the Daves were going to go get coffee and dessert somewhere, but Himani and I both wanted to go back to the hotel and get some sleep. Since it was quite comfortable out (aside from the rain), we just walked back to the hotel. Along the way, we discussed, of all things, martial arts and guns. They were her topics, so she doesn't think I'm a loon now. :)
Back at the hotel, I signed on to World of Warcraft for a few minutes, which turned into an hour when I discovered someone selling a really nice weapon and had to borrow money from pyran to buy it. I also got a phone call from my lovely wife, which always makes me happy. And then I went to sleep, glad that RSA Day 4 started at 9:00, rather than 8:00 like the first three days.
I began the day with another 6:50 alarm, and once again skipped hotel breakfast in favor of eating at the conference. I had a cold bagel and cream cheese. Why does anywhere serve bagels cold? They're so much better toasted! Who doesn't like a nice, toasty warm bagel? But I digress.
The first session was a panel on DRM. The panelists consisted of two DRM makers (Crypto Research and RSA Security) and two media companies (Warner Bros. and Fox.) Which is to say, watching the panel was like watching the choir preach to itself. The only opposing viewpoint came from the audience's tendency to applaud at precisely the wrong times, on purpose. The Warner Bros. representative engaged in a bit of historical revisionism about DeCSS (claiming it appeared on Windows before Linux, and that there were Linux DVD player apps before DeCSS was written.) There was some spectacular ignorance, some of it willful -- they all seemed to accept the (incorrect) axiom that CSS is a form of copy protection for DVDs (it's not -- if what you want to do is copy discs, you don't have any reason to break CSS), and save for the Crypto Research guy, none of them even seemed to comprehend the trusted client problem (the fact that if a person can play a disc, they can copy it, and there's absolutely nothing you can do to change that fundamental fact.) One interesting thing was that they agreed that DRM schemes must contain some provision in them that allows them to be relaxed if the initial settings and policies turn out to be too draconian, and they went so far as to suggest upgradeable firmware for consumer electronics devices. This, of course, would be heaven for device hackers, who would produce "custom" firmware in a heartbeat. Also, we learned that the HDTV Broadcast Flag is Andy Setos's fault (he was one of the panelists.) It will, of course, never work, but in the world of DRM, when has that ever stopped anyone? Also, Setos claimed that the media companies have no desire to interfere with legitimate home use of media that doesn't involve transferrence -- a claim that seems to be contradicted by the DVD-CCA's current lawsuit against Kaleidascape (a company that makes hard drive based DVD jukebox appliances that have no transferrence capabilities.) 
All in all, it seems DRM makers and media companies are continuing their proud tradition of stumbling around in the dark while claiming they can see perfectly.
The second session was a keynote by a VP of Sun Microsystems. This was actually a good, interesting presentation. He started out with an interesting observation -- we don't put brakes on cars because we want to stop, we put brakes on cars so we can drive faster. If we just wanted to stop, sticking a stick out the door would be sufficient so long as you never went above 2-3 mph. Security is not a feature, it's an enabler -- we want security not for its own sake, but to facilitate the other things we want to do, and we need to present and sell it that way. He thinks levels of authority need to be commensurate with levels of authentication -- imagine a network where you can do some things (e.g. email) with just a password, and more if you have a smart card as well as a password, that sort of thing. Bank websites where a password will check balances, but a smart card or one-time password token is needed to actually move money around. He wants to remove people from security processes, and automate more -- pretty much the opposite apprach from Schneier's. After that he moved to the inevitable sales pitch, but he'd said enough we didn't mind overmuch. They're putting integrated virtualization and partitioning in Solaris 10 -- centralized management of virtual machines across multiple physical machines in a cluster. It's actually a pretty cool multi-machine hypervisor setup. I think the demo was mostly faked, but I was still impressed, aside from the fact that his software makes multiple use of the non-word "Configurator," which sounds like some kind of bureaucratic superhero (enemy of the Bureauc-rat?) He had an identity management demo that was outright incomprehensible; it made me think they had a somewhat cool product with a user interface worthy of SAP (the reigning champions of crappy UI design.) 
There was also a video about automatic payment at vending machines and stores via your mobile phone in Japan; apparently Japanese mobile phones have proximity smart card functionality, which is an interesting idea since everyone carries their mobile phone all the time anyway. Also, Sun is moving to a new license model where they license their entire middleware product line to your company for $140/employee-year regardless of how much actual software you get; it's the kind of licensing agreement that gets Microsoft sued. We'll see how it works for them.
Out third presenter was "crowd favorite" (or so the illustrious conference emcee tells us) Stratton Sclavos of VeriSign. He was going to present on "the Evolution of Strong Authentication." He reiterated a very common theme at this conference, that in today's IT industry innovation is found in integration, not invention. Funny how integration is good as long as anyone but Microsoft is doing it. The short-short version of his speech is that he thinks the Internet needs a federated identity scheme (basically a form of PKI, which was tried 43 times in the 90s and failed every time) using one-time password tokens, which, conveniently, VeriSign makes.
(For those who don't know, one-time password tokens (OTPs for short) are things like RSA SecurID that display a different code every minute; everyone in your organization gets a different one, and to log in they have to use both their password and the current code. Thus, to break their account, you need to have their password as well as their token -- this is called two-factor identification. The three factors are something you have (like a token), something you know (like a password), and something you are (a biometric, like a fingerprint or retina scan.) Most authentication systems use only one factor; two-factor authentication provides greater security. At work, I know of one door whose lock requires three-factor authentication.)
Federated identity was talked about a lot at this conference, and Bruce Schneier pointed out the big problem with it -- quite simply, people do not want one identification scheme for all systems because we want to provide different information to different entities. There's no reason why your driver's license number couldn't double as your bank account number, all your credit card numbers, etc. For that matter, there's no technical reason your drivers' license couldn't function as the key to your house and your car. But most people don't really want it to. A guy from J.P. Morgan Chase (to be discussed in tomorrow's writeup) also pointed out that the financial industry doesn't like the idea of a single point of failure at all, and federated identity schemes have to potential to create this.
Also, in the VeriSign guy's inevitable sales pitch (see a pattern here?), he showed us two demos, one with a person using current technology and one with them using VeriSign's OTP devices (which are not just OTP devices but also encrypted USB keys for transferring data that do a health check on any computer they're plugged into; pretty neat, actually.) However, during his "before" scenario, the person in the scenario actually lost the object he was carrying his important data on. The "after" scenario addressed all the things that went wrong in the "before" scenario except this one, which actually would be made a lot worse -- since without his VeriSign OTP, not only would he be missing his important data, he'd be unable to log into... well, anything. This is such a flagrant flaw in the demo and so easily solved (just don't have the guy lose his files in the "before" scenario and none of us would have even thought about it) that I'm amazed they put this in. It's basically the only thing I remember about the demo and it drew my attention to a flaw in their system -- not a good thing for VeriSign.
The next event was a CSO Panel. We had the CSOs of Oracle (who, unlike the others, is responsible for securing a product rather than an IT infrastructure), Microsoft (where she is responsible for our IT infrastructure; product security belongs to SWI, another team), and two other companies. They basically just discussed, and there were quite a few interesting observations made. Oracle delivers all their patches quarterly just after financial reporting season to avoid interfering with people's book-closing. Someone pointed out that security professionals just think of things -- all things -- differently; we look at a system and think of everything that could go wrong, and most people just don't think that way. They just plug the wireless access point into their office network jack and the idea of other people using it doesn't occur to them. Oracle's CEO said "market failure," so she loses 200 points. She made back a few by pointing out that procurement contracts (i.e. customer demands) are more powerful and flexible than legislation anyway.
The other CSOs were asked what their advice to Microsoft would be. Interestingly, all of them were quite positive about MS. They wanted us to continue taking the notion of developing secure software to heart, share insights on security with customers, and keep driving a security culture. These are all things we're doing, and they all said Microsoft had markedly improved on security in the last two years (since the beginning of Trustworthy Computing and the Secure Windows Intitiative.) The Secure Windows Initiative, by the way, is this team in MS that reviews all the products, code, etc. made by other teams and makes sure they're secure; if SWI won't sign off on your design, or code, or whatever, you can't release. if they don't like your design, you get to redesign your product and start over (needless to say, we run designs past them early.) And if there's one known security bug, they won't sign off.
Someone asked the panel what their advice is to people who want to become a CSO. They said to learn to talk about business, to learn about risk management, and to understand business culture and risk tolerance. The CSO's job is not security, but managing risk. The CSO has to be willing to say what people don't want to hear (and hence sometimes gets known as "the Vice-President of 'No.'")
At that point we broke for lunch. Sarah was meeting a friend of hers and I didn't want more Indian food with Himani, so I went off to Quizno's. I had a small Classic Italian, and it was food.
After lunch, I had sessions. The first session was entitled "CyberCop Case Studies," and was three law enforcement officers discussing what IT admins should and shouldn't do if they need to call in law enforcement for an investigation. One of the presenters was quite amusing, as he was a big, muscular, loud, shaven-headed NYPD homicide-detective-turned-computer-crime-investigator. It was like he stepped out of Die Hard and started talking about network logs. All in all, the advice was pretty common sense, and it was quite obvious from all of them that nothing annoys the police like calling them up, asking for help, then refusing to cooperate with their investigation. They don't like it when they have to subpoena the victim of a crime who called them in in order to get the information they need to carry out the investigation. Which I suppose is understandable.
The second session was ostensibly on forensics and digital evidence. Really it was about Sarbanes-Oxley Compliance, which everything seems to be these days. The cost of that legislation to the American economy must be unfathomable. Essentially, the CEO is required to certify not only that their company's accounting records are correct, but that their IT infrastructure is secure as well, and if they turn out to be wrong, they go to prison. I suppose this is great for IT security professionals (and we've already seen how great it is for accountants,) but it seems a collosal waste of time and money as well. They did mention in the panel that there are quite a few issues with rules of evidence and procedure, since they're based on the physical world rather than the electronic, so the controls are sometimes inappropriate. There's now a lot more scrutiny on digital evidence -- it's no longer good enough to just show that you have logging functionality and have the judge and jury believe it like an oracle ("The computer says it, so it must be right!") There are no technical controls on electronic documents -- they're equated to paper. But this is a problem because they're so mutable, even back in time (forging timestamps is easy in many systems,) which may cause a backlash against all digital evidence if this sort of tampering happens too often. In addition, legally, a digital document -- the actual source data -- is just a stream of zeroes and ones; the document you see on the screen is just a view of a view of the data. It's an interpretation of the zeroes and ones, and lacks ascertainability. This may become a legal issue in some cases.
The last presentation of the day was also the best. It was entitled "Management that Measures Up: Metrics for Information Risk Reduction and Decision Making." It was given by an uber-geek from CyberTrust, an association of associations that gathers massive amounts (2+ gigs a day) of security information. The presenter's point was that we focus too much on vulnerability and not enough on risk. The best solution for a single computer may be the wrong solution for a network -- patching is an example of this. To make an analogy, the best solution for cholera in a person is antibiotics, rest, and lots of fluids, but this would be a bad prescription for cholera in a third-world country -- the best solution there is to separate the latrines from the drinking water by as much distance as possible and improve hygiene. A community is not cured in the same way as an individual -- for the community, you want to manage risk.
He disagreed with most of the industry's "best practices," asserting that strong passwords do not appreciably reduce risk, nor do encrypted Internet connections. Going wireless actually improves most home users' security, since the additional risk of someone jumping on their network is far lower than the risk they've mitigated by being behind a default-deny NAT (which wireless routers are.) Improving an existing countermeasure (e.g. requiring strong passwords rather than just any password) is almost always inferior to adding an additional countermeasure. The reason for this is that risk is multiplicative. Risk = Threat * Vulnerability * Impact, where threat is the rate of attacks, vulnerability is the likelihood that an attack succeeds, and impact is the cost of a successful attack (including intangibles.) Thus, reducing any of these factors reduces your overall risk.
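To make the multiplicative model concrete, here's a tiny Python sketch. The numbers (attack rate, success probability, cleanup cost) are entirely invented for illustration -- the speaker gave no figures like these:

```python
def risk(threat, vulnerability, impact):
    """Expected loss = attack rate * chance an attack succeeds * cost per success."""
    return threat * vulnerability * impact

# Hypothetical numbers: 200 attack attempts a year, 5% succeed,
# each successful attack costs $10,000 to clean up.
baseline = risk(threat=200, vulnerability=0.05, impact=10_000)

# Because the factors multiply, halving ANY one of them halves the risk:
halved_threat = risk(100, 0.05, 10_000)         # deterrence
halved_vulnerability = risk(200, 0.025, 10_000)  # protection
halved_impact = risk(200, 0.05, 5_000)           # detection/recovery/insurance
```

This is why the panel's taxonomy (deter, protect, detect, recover, transfer) works: each strategy attacks a different factor of the same product.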
Because of this multiplication, workarounds are often better than the "right" answer. Being behind a firewall so you can't get a worm may protect you better than patching your systems to be immune to the worm, even though patching is the "correct" solution. To mitigate risk, you have to do one of five things: deter (reduce the threat by making people less likely to attack,) protect (reduce the vulnerability of your systems,) detect (reduce the impact by noticing attacks early,) recover (reduce the impact by lowering the cost of a successful attack,) or transfer (reduce the impact by buying insurance.) Deterrence is not usually possible for a company -- hackers will hack, and there's not much you can do about it other than be a low-profile target.
His point was that mitigations multiply. If you already have passwords, you're mitigating, say, 70% of the vulnerability. If you strengthen your password policy, you might raise this to 80%. But adding a firewall might mitigate 50% of the vulnerability, too -- and that's the equivalent of going from 70% to 85%. Stack many, many 50%-effective measures on top of each other, and you get a very high percentage. Adding another layer of low-effectiveness security is often better than raising the effectiveness of an existing layer -- if one or two of your measures fail, you still have backups. This is just another look at defense in depth.
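The 70%-plus-50%-equals-85% arithmetic works because each independent layer removes a fraction of the *remaining* vulnerability. A minimal Python sketch (function name and the five-layer example are my own, assuming the layers are independent):

```python
def combined_mitigation(layers):
    """Fraction of vulnerability mitigated by stacking independent layers.

    Each layer's effectiveness is the fraction of *remaining*
    vulnerability it removes, so residuals multiply.
    """
    residual = 1.0
    for effectiveness in layers:
        residual *= (1.0 - effectiveness)
    return 1.0 - residual

# Passwords alone mitigate 70%. Adding a 50%-effective firewall leaves
# 0.3 * 0.5 = 0.15 residual, i.e. 85% mitigated -- the example from the talk.
passwords_plus_firewall = combined_mitigation([0.70, 0.50])

# Stacking five 50%-effective layers already mitigates ~97%:
five_cheap_layers = combined_mitigation([0.5] * 5)
```

The independence assumption is doing real work here -- two countermeasures that fail in the same way (say, two password checks) don't multiply like this.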
Another thing he looked at was amplified and dampened countermeasures. Some countermeasures work better than others against specific attacks. Against worms, for instance, patching is dampened -- you need 99% or more of your systems patched before a worm is even appreciably slowed in your organization. Sure, the patched systems are safe -- but three systems with Slammer on them will bring down your entire network, even if the other 50,000 systems are safe. On the other hand, firewalling off network segments is an amplified countermeasure -- even a little bit of it helps immensely. Unfortunately, which countermeasures are amplified and which are dampened varies depending on the attack -- there's no one best countermeasure.
Thus ended Day 3 of the conference, so I walked back to my hotel. I called up Sarah, one of my coworkers, to see if she was interested in dinner; she was not, but redirected me to Greg, who was going to a seafood place called the Tadich Grill at 8:00. So I joined Greg, Himani, and two guys named Dave (one PM, one test) at the Tadich Grill. I took a taxi there, but it only ran $5, so it was quite near my hotel. The Tadich Grill was quite packed, and served quite good seafood -- and a lot of it. I got the grilled petrale sole, and it was good, but I could only actually eat half of it. Afterwards, it was about 9:45, and Greg and the Daves were going to go get coffee and dessert somewhere, but Himani and I both wanted to go back to the hotel and get some sleep. Since it was quite comfortable out (aside from the rain), we just walked back to the hotel. Along the way, we discussed, of all things, martial arts and guns. Her topics, so she doesn't think I'm a loon now. :)
Back at the hotel, I signed on to World of Warcraft for a few minutes, which turned into an hour when I discovered someone selling a really nice weapon and had to borrow money from