Another School Attack in China

Seriously what the fuck is going on over there? This is the fourth school attack in that country this month. This time a man used a hammer to beat down five kids. But according to the anti-gunners if we ban guns from schools no more mass attacks will happen (as China proved yesterday when 28 kids and three adults were stabbed by a man with a knife, which is illegal to carry around in China).

Something I’ve Often Wondered

Tam brings up a good point here. I often hear people say you should train like you fight. I agree. But apparently my idea of how I will need to fight and the idea held by most people who proclaim this differ greatly.

When somebody tells me to train like I fight they usually train with tools that I, and I assume most people, don't expect to have at our disposal. For the purposes of this post we'll create a hypothetical gun owner who claims to train like he fights. Let's call him Timmy Tactical. Timmy Tactical goes to the range every weekend to train. His training regimen is strict and to the letter. His first half hour is spent getting on his MOLLE vest, drop leg holster, gloves, knee pads, elbow pads, combination infrared/night vision/4th dimension goggles, radio gear, spike toed ninja combat boots, bullet proof vest equipped with six trauma plates, and sunglasses (super tactical of course). He then spends the next couple of hours running drills with his M4gery carbine equipped with a flashlight, laser sight, bayonet, EOTech holographic sight with accompanying magnifier, and two extra rounds stored in the pistol grip. He creates scenarios for himself based on likely situations including communist invasion, zombie outbreak, and invasion by aliens hailing from the far reaches of the Betelgeuse system.

What Timmy Tactical doesn’t train with is his concealed .38 caliber snubby revolver that is on his person the rest of the time.

This has always raised the question: how do you really plan to fight? Personally the only people I know who fight like Timmy Tactical are the men and women in our military who fight as their daily job. They can fight like that because throughout their day they actually have all that gear on them. The reason for this should be obvious: they are in a job where extended firefights with multiple enemy combatants are not only possible but downright expected.

Those of us living the lives of sovereign individuals don't generally have that kind of gear on our person. We don't walk around all day decked out in combat gear. Well at least nobody I know or have seen does. Most armed citizens have a small handgun concealed on their person and hopefully a reload in one of their pockets. That's their fighting gear. And this fighting gear reflects the most likely confrontational scenarios that an average Joe is going to encounter. When walking the streets of a city your most likely enemy is going to be one hostile individual. You may have to defend yourself against multiple people but the number is generally not going to be above three (and before anybody says it, yes I know it can be and you should always be as prepared as possible for surprise situations).

That’s why when I train like I fight it involves my carry gun in a concealed holster and a spare magazine in my back pocket. The scenarios I envision usually involve one or two combatants at close range. That’s not to say I don’t bring out my M1A SOCOM 16 every now and then and act like I’m fighting off a zombie invasion, but I don’t really consider it training. That’s just plain fun and games for me. But realistically if our fine country is ever invaded by communists hailing from Alpha Centauri I’m gathering up those I care about and heading for safe territory to hide until the shit settles down.

Likewise when it comes to home defense I train with my carry gun (the one I’m most likely to have should somebody go bump in the night) and shotgun (the gun I’ll try to get should I have the chance) assuming I’ll be in tight corridors and hallways.

Protecting yourself, like anything involving security, is all about threat profiles. I agree you should train like you fight. But how you fight should be based on your threat profile. Always ask yourself what the most common situation is. For most people it's either going to be a mugger coming at you on the street in the middle of the night or some jerk off busting into your home in a similar time frame. Due to this you should train with what equipment you will likely have at hand in those situations (and unless you sleep in your MOLLE gear it's not going to be that). It certainly doesn't hurt to train with your super cool combat rifle but you should be most proficient in the use of your carry and home defense guns assuming you won't have time to gear up.

Firefox For The Truly Paranoid

A while back I mentioned that I dropped Google Chrome and returned to Firefox. My reasoning revolved around features unavailable in Chrome which were available in Firefox through extensions. Well the two features I wanted most have since been added to Chrome: the ability to block all scripting except for pages I white list, and better cookie management. Yet I'm still on Firefox. Why? Because Chrome's script blocking and cookie management features are severely lacking in my opinion.

In Chrome’s advanced settings you can choose to block all scripting and cookies from sites not on your white list. This is exactly what I want as scripting is the de facto method of exploiting a computer these days and cookies are tools for spying on sites you visit. The problem is Chrome’s interface for its script blocking sucks. If a site has scripts that are being blocked an icon appears in the address bar. If you click on this icon you have two options: keep blocking scripts or white list the site. NoScript on Firefox gives a third option I’m very fond of: temporarily allow scripting. I only white list sites I trust and visit frequently. But oftentimes I find myself visiting websites that require scripting to be enabled in order to glean information from them. In this case I temporarily allow scripting, get the information I need, and know that scripting will be disabled automatically for that site when I close my browser. It’s a great feature.
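To make the distinction concrete, here's a rough sketch of the whitelist behavior I'm describing (purely illustrative Python of my own, not NoScript's actual code): permanent grants survive forever, temporary grants die with the browser session.

```python
# Illustrative model of NoScript-style script whitelisting with both
# permanent and "temporarily allow" (session-scoped) grants.
class ScriptPolicy:
    def __init__(self):
        self.permanent = set()   # sites whitelisted for good
        self.temporary = set()   # sites allowed only for this session

    def allow(self, site, temporary=False):
        (self.temporary if temporary else self.permanent).add(site)

    def scripts_allowed(self, site):
        # Default-deny: scripting runs only for whitelisted sites.
        return site in self.permanent or site in self.temporary

    def close_browser(self):
        # Temporary grants vanish with the session; permanent ones persist.
        self.temporary.clear()
```

This is the piece Chrome's two-option dialog is missing: a grant that expires on its own instead of forcing you to either trust a site forever or keep scripts blocked.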

Likewise NoScript blocks more than scripting. It also notifies you of things like attempted cross-site scripting attacks, forces cookies from a secured site to be sent via HTTPS, and blocks all plugin components like Flash movies until I give my express go-ahead. But Firefox has some other features available via extensions that I can’t replace in Chrome because frankly Chrome’s extension support sucks. In Chrome an extension can’t block items from being downloaded when you view a page. For instance if you install Adblock in Chrome the advertisements from any websites you visit will always be downloaded but Adblock will simply hide them through the use of CSS. Firefox on the other hand gives extension developers granular control. For instance if I set NoScript to block scripting on www.example.com no JavaScript files will be downloaded when I navigate to www.example.com. Likewise Flash advertisements will not be downloaded unless I enable scripting and click on the individual Flash item.

Overall Chrome is more secure than Firefox’s default installation. In Chrome everything runs in a sandbox which means in order to exploit the browser you must exploit both its rendering engine (WebKit) and its sandbox. Using the right extensions in Firefox I can ensure no potentially malicious scripts are even downloaded to begin with. An ounce of prevention is worth a pound of cure. Ensuring malicious code is never downloaded in the first place is a better security option than downloading the code and depending on the sandbox to prevent anything bad from happening. Ideally having both abilities is the best option, which Chrome allows for JavaScript, but again it doesn’t check for other potentially malicious content like NoScript does.

So yes Firefox is a much slower browser that is big on resources. But the power extension developers have in Firefox means you can make the browser extremely secure whereas in Chrome you can’t enhance its security outside of methods Google allows. Due to this I’m still on Firefox and will be for the foreseeable future. Since I’m here I thought I’d let everybody know what security related extensions I’m using.

NoScript: I love this extension. I will go so far as to say this extension is the primary reason I’m still using Firefox. It blocks all scripting on all websites unless you add said site to your white list. You can add a site to your white list either permanently or only temporarily if it’s a site you don’t plan on visiting again. It complicates web browsing and therefore isn’t for everybody (or even most people I’d venture to say). As a bonus most of those annoying flashing advertisements get blocked when using NoScript. This extension is constantly being updated with new security related features.

CookieSafe: CookieSafe is an extension that allows you to manage website cookies. There are three options available for each website. The first, and default, setting is to block cookies altogether. The second option is to temporarily allow cookies (they will be wiped out upon closing your browser) and the third option is to add the website to your white list which will allow cookies for that domain. The extension only allows cookies from specific domains meaning you don’t have to worry about third party cookies getting onto your system (although this feature is available in most major browsers the implementations generally suck).

Certificate Patrol: I’ve mentioned a research paper I read recently that talks about SSL security and its ability to be exploited by governments. Although there is no surefire way to detect and prevent this kind of exploit you can strongly mitigate it. Certificate Patrol is an extension that displays all major certificate information for a secure web page the first time you visit it or when the certificate changes. So when you visit www.example.com the certificate information (we’ll assume it’s a secure site) will be promptly displayed by Certificate Patrol the first time you navigate your way there. If the certificate has changed when you visit the site again the new certificate information will be displayed including what has changed. One mechanism for catching a forged certificate is looking at the issuer. For instance Internet Explorer trusts the root certificate for the Hong Kong Post Office. If you visit www.example.com and Certificate Patrol notifies you that the certificate has changed and the new one is provided by a different root authority you know something could be up. If the site’s certificate was previously provided by VeriSign and the new one is provided by the Hong Kong Post Office you know something is probably fishy. This could point to the fact that the site is not actually www.example.com but a site made by the Chinese government in order to capture information about dissidents who visit www.example.com (obviously some DNS spoofing would be required to redirect visitors to their site as well).
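The core idea behind the extension can be sketched in a few lines (illustrative Python of my own, not Certificate Patrol's actual implementation): remember a fingerprint of the certificate on first visit, and flag any change on later visits.

```python
import hashlib

# First-visit store of certificate fingerprints, keyed by hostname.
seen_fingerprints = {}

def check_certificate(hostname, cert_der):
    """Trust-on-first-use check: store the fingerprint the first time,
    warn if it differs on subsequent visits."""
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    if hostname not in seen_fingerprints:
        seen_fingerprints[hostname] = fingerprint
        return "first visit: fingerprint stored"
    if seen_fingerprints[hostname] != fingerprint:
        return "WARNING: certificate changed since last visit"
    return "certificate unchanged"
```

A changed fingerprint isn't proof of foul play (certificates legitimately expire and get reissued), which is why the extension shows you what changed, especially the issuer, and lets you judge.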

Those three extensions help mitigate many common web based attacks. That’s not to say none of this can be done in Chrome. For instance you can manually check for certificate changes in Chrome but you will have to do it every time you visit a site to see if the certificate has changed or not; Certificate Patrol simply automates that task. Likewise you can block cookies and scripting in Chrome but the interface for doing either is more cumbersome than using CookieSafe and NoScript.

Personally I value security over performance and that is why I’m still sticking with Firefox.

How Police Should Be

Uncle lets us know what it means to be a good police officer. An 89 year-old woman successfully defended her home with a gun (which is impossible; according to the anti-gunners the burglar would have just taken her gun and used it against her). The police responded in the following manner:

“This is a .22. And the police reloaded it for me,” said Turner. “I know how to work that gun. I just hope and pray to God it don’t happen again.”

Good on you officers.

On Government Sanctioned Assassinations

Bruce Schneier has a link to an interesting piece [New York Times so you might hit the paywall and be unable to read the article] talking about Obama’s recent authorization to kill an American citizen. The article doesn’t get into the politics so much as explain why targeted killing of terrorist organization leaders is a bad idea:

Particularly ominous are Jordan’s findings about groups that, like Al Qaeda and the Taliban, are religious. The chances that a religious terrorist group will collapse in the wake of a decapitation strategy are 17 percent. Of course, that’s better than zero, but it turns out that the chances of such a group fading away when there’s no decapitation are 33 percent. In other words, killing leaders of a religious terrorist group seems to increase the group’s chances of survival from 67 percent to 83 percent.

The data is referenced from this study [PDF]. Needless to say killing the leader more often than not increases the likelihood of the organization surviving. That makes sense considering these organizations believe they are being targeted by their enemies, and seeing a demonstration of exactly that is going to strengthen their resolve.
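Restating the quoted figures as a quick sanity check, since collapse and survival rates are easy to mix up:

```python
# The study's quoted numbers: probability a religious terrorist group
# collapses with and without a decapitation strike.
p_collapse_with_strike = 0.17
p_collapse_without_strike = 0.33

# Survival is the complement of collapse, giving the 83% vs 67%
# comparison the article draws.
survival_with_strike = 1 - p_collapse_with_strike
survival_without_strike = 1 - p_collapse_without_strike
```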

In Security the Key Phrase is Trust No One

Last month I posted a story about an interesting Windows security issue dealing with how the operating system handles SSL root certificates. After reading the linked research paper I’ve started scrounging through the sourced information within and I must say the phrase trust no one is made very apparent. The paper cites several stories dealing with government entities coercing private companies into bypassing in-place security measures to allow surveillance. Let’s look at a few of these stories.

The first one relates to an online e-mail service called Hushmail. According to Hushmail’s own site:

Every day, people around the world send billions of emails. The vast majority of these are transmitted without using any form of encryption. When you send an email without encryption, it can be monitored, logged, analyzed and stored by your employer, your internet service provider, or worse – a hacker
….
Hushmail keeps your emails private by encoding each message using encryption. Encryption is a way of transforming a message so that it is unreadable to anyone but the sender and its recipients. Hushmail makes encryption seamless and transparent – we encrypt your message automatically before it is sent, and then restore it back to its original form when the recipient reads it.

And from another section on their site:

In some countries, government sponsored projects have been set up to collect massive amounts of data from the Internet, including emails, and store them away for future analysis. This data collection is done without any search warrant, court order, or subpoena. One example of such a program was the FBI’s Carnivore project. By using Hushmail, you can be assured that your data will be protected from that kind of broad government surveillance.

You’ll notice they chose their wording very carefully. They imply their service will prohibit government surveillance but only so long as it’s warrantless. That page also describes in detail the fact that they will surrender information upon lawful request. Of course there is a reason they disclose this information now:

Zimmermann, who sits on Hushmail’s advisory board, spoke to THREAT LEVEL after we published a piece contrasting the site’s promises that it had no access to the contents of customers’ encrypted emails stored on their servers with a court case showing that the Canadian company turned over 12 CDs of readable emails to U.S. authorities.

At one point Hushmail advertised itself as not being able to access users’ e-mails. Of course they eventually turned over 12 CDs worth of customer e-mails and then backtracked. Mr. Zimmermann makes a very good point that everybody should realize:

“If your threat model includes the government coming in with all of force of the government and compelling service provider to do things it wants them to do, then there are ways to obtain the plaintext of an email ,” Zimmermann said in a phone interview. “Just because encryption is involved, that doesn’t give you a talisman against a prosecutor. They can compel a service provider to cooperate.”

It should go without saying that if the company can get access to the plain text of the e-mails stored on its servers then somebody else can as well. Needless to say even if an online service proclaims they securely store your data and it cannot be accessed, that is not usually true. The only secure option is to encrypt the data while it’s still on your machine and then send it out. For instance I back up much of my data to an online storage service. Before the data leaves my system it’s put into a TrueCrypt partition. Only I have the key to decrypt the partition so even if a government entity forced my storage provider to hand over my data there is no way for that provider nor the government to decrypt it (obviously I mean within my lifetime; they could brute force the key but it would take practically a century and I doubt I’ll still be alive when they find out my encrypted partition contained nothing important nor incriminating).
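The principle is simple enough to sketch. What follows is a toy illustration only, a hash-derived XOR keystream I made up for demonstration, not real cryptography; use a vetted tool like TrueCrypt or a standard AES implementation for anything that matters. The point it shows: the provider stores only ciphertext, and without the key the plaintext is unrecoverable.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudorandom byte stream from the key by hashing
    # key + counter blocks. TOY ONLY; do not use for real data.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR the plaintext with the keystream before it ever leaves
    # your machine; the storage provider only ever sees this output.
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR with the same keystream is its own inverse
```

Since the key never leaves your machine, a subpoena served on the storage provider yields nothing but ciphertext, which is the whole difference between this model and Hushmail's.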

So that’s one example that was cited in the paper. The next one is even more insidious in my opinion but has a happier ending. I’m sure everybody who is reading this is at least familiar with OnStar. It’s an in-vehicle service provided with Government General Motors produced vehicles. It allows such services as calling somebody via the press of a button or getting help in an emergency. It also allows law enforcement personnel to track and find the vehicle should it get stolen. To provide its services it needs two things: the ability to output vocal data, which is provided by the car’s stereo system, and a microphone so you can communicate with OnStar employees.

People buying GM cars see this service as a convenience but government sees it as something else: a mechanism for spying on the citizenry:

The court did not reveal which brand of remote-assistance product was being used but did say it involved “luxury cars” and, in a footnote, mentioned Cadillac, which sells General Motors’ OnStar technology in all current models. After learning that the unnamed system could be remotely activated to eavesdrop on conversations after a car was reported stolen, the FBI realized it would be useful for “bugging” a vehicle, Judges Marsha Berzon and John Noonan said.

Yes the FBI decided OnStar was a great service. You simply flip on the microphone remotely and you can monitor conversations taking place inside the vehicle. Great! Fortunately after doing this the courts decided it was a no-no:

In a split 2-1 ruling, the majority wrote that “the company could not assist the FBI without disabling the system in the monitored car” and said a district judge was wrong to have granted the FBI its request for surreptitious monitoring.

But not for the reasons you’re thinking:

David Sobel, general counsel at the Electronic Privacy Information Center, called the court’s decision “a pyrrhic victory” for privacy.

“The problem (the court had) with the surveillance was not based on privacy grounds at all,” Sobel said. “It was more interfering with the contractual relationship between the service provider and the customer, to the point that the service was being interrupted. If the surveillance was done in a way that was seamless and undetectable, the court would have no problem with it.”

See in order to activate the microphone remotely without the vehicle occupants knowing OnStar’s recovery mode had to be disabled. This presented a violation of the service agreement between OnStar and the vehicle owner:

Under current law, the court said, companies may only be ordered to comply with wiretaps when the order would cause a “minimum of interference.” After the system’s spy capabilities were activated, “pressing the emergency button and activation of the car’s airbags, instead of automatically contacting the company, would simply emit a tone over the already open phone line,” the majority said, concluding that a wiretap would create substantial interference.

Personally I don’t trust any system in my vehicle that can be remotely activated, and for good reason. Having a remotely activated microphone in your vehicle is just asking to be eavesdropped on. This also includes cellular phones, but Tam pointed out a simple solution for that.

The final cited source I’m going to bring up from that paper (seriously, go read it [PDF]) deals with RIM’s BlackBerry phones. In this case the problem wasn’t related to RIM but to a cellular phone carrier who sells their devices. I know the United Arab Emirates aren’t known for their love of basic human rights, but when you get carriers to install spyware on phones to monitor all users of BlackBerry devices that’s simply shitting all over privacy.

Details on the spyware application itself can be found here. Although the spyware didn’t appear to be actively monitoring people’s communications by default, it was capable of being remotely activated at any time. Of course the expected activation would be done by law enforcement personnel, but anything they can activate a resourceful malicious hacker can activate. Now I do want to make it clear RIM didn’t have any knowledge of this and did release the following public statement:

In the statement, RIM told customers that “Etisalat appears to have distributed a telecommunications surveillance application… independent sources have concluded that it is possible that the installed software could then enable unauthorised access to private or confidential information stored on the user’s smartphone”.

It adds that “independent sources have concluded that the Etisalat update is not designed to improve performance of your BlackBerry Handheld, but rather to send received messages back to a central server”.

This was a case of the UAE government getting a local carrier, Etisalat, to cooperate and install the spyware. The scariest thing here is the software wouldn’t have even been noticed if it weren’t for the fact it was poorly coded and caused phone instabilities. Needless to say the phrase trust no one is very relevant everywhere in the world.

These stories exemplify that security is something you need to take into your own hands. You can’t expect other people to do it nor can you expect your government to do it. Nobody is going to protect your life, property, or privacy except you. This requires you obtain pertinent knowledge on the technology you use. Take time to understand the technology and devices you use in your everyday life and try to come up with ways those things can be used against you. Once you realize how those things can be used you can develop countermeasures.

On The Collateral Murder Video

I’m sure everybody has seen the video of the Apache helicopter crew shooting a group of civilians and two reporters. I wasn’t there so I’m not going to comment on the event itself; I’ll leave that to people who want to argue about it. But an interesting point is brought up by Bruce Schneier. The following was stated on the WikiLeaks Twitter stream:

Finally cracked the encryption to US military video in which journalists, among others, are shot. Thanks to all who donated $/CPUs.

Bruce’s question is simple:

Surely this isn’t NSA-level encryption. But what is it?

So WikiLeaks is saying the Collateral Murder video was encrypted upon receipt. They rented “super computer time” to break the video’s encryption. So what the hell kind of encryption scheme did they break? Although Wikipedia is far from a valid source of information I’m going to link to the article on AES encryption because it gives a good overview. Specifically this part:

The National Security Agency (NSA) reviewed all the AES finalists, including Rijndael, and stated that all of them were secure enough for US Government non-classified data. In June 2003, the US Government announced that AES may be used to protect classified information:

The design and strength of all key lengths of the AES algorithm (i.e., 128, 192 and 256) are sufficient to protect classified information up to the SECRET level. TOP SECRET information will require use of either the 192 or 256 key lengths. The implementation of AES in products intended to protect national security systems and/or information must be reviewed and certified by NSA prior to their acquisition and use.”[8]

So considering this video was classified it would most likely have been encrypted using AES. There are some attacks currently available against AES but none of them allow breaking the encryption in a reasonable amount of time (depending on the implementation of AES used of course). Of course there is the possibility that the video was encrypted using a poorly chosen key and the WikiLeaks people simply performed a brute force attack against it. It would seem idiotic that somebody would bother encrypting this video using a strong encryption algorithm but not bother using a good key. Then again this is the government we’re talking about and they are known for incompetence.
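Some back-of-the-envelope math shows why a weak key is the only plausible explanation. The guessing rate and password length below are assumptions I picked for illustration, not anything known about WikiLeaks' actual setup:

```python
# Keyspace comparison: a full random AES-256 key vs. a key derived
# from a weak 8-letter lowercase password. Illustrative numbers only.
guesses_per_second = 10**9  # a generous rate for rented computer time

aes256_keys = 2**256          # full random-key search space
weak_password_keys = 26**8    # 8 lowercase letters

seconds_per_year = 60 * 60 * 24 * 365

aes_years = aes256_keys / guesses_per_second / seconds_per_year
weak_seconds = weak_password_keys / guesses_per_second

# Exhausting the AES-256 keyspace at this rate takes on the order of
# 10^60 years; the weak-password space falls in a few minutes.
```

That gap is the whole story: "cracked the encryption" almost certainly means "guessed a bad key," not "broke AES."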

I would like to hear from WikiLeaks what method was used to encrypt this video. It would be interesting to find out not only what algorithm was used but also whether the video was encrypted by the military, other government personnel, or the person who leaked it.

Kind of Scary When You Think About It

According to the Department of Defense:

According to a Department of Defense report, there have been at least 32 “accidents involving nuclear weapons.” And the report only counts US accidents which occurred before 1980.

What kind of accidents you ask? Well:

They include such gaffes as nuclear bombs inadvertently falling through bomb bay doors; the accidental firing of a retrorocket on an ICBM; the vast dispersal of radioactive debris; and the loss of enriched fissile material and nuclear bombs (which are “still out there somewhere”).

I’m sure after each of these accidents the only word uttered was, “Oops.” Read the entire report here (it’s a PDF document, so be warned).

Interesting Windows Security Issue

Note that I didn’t say security hole or security flaw; that was intentional. The nerd part of my brain has been working in overdrive as of late which means I’ve been looking into geeky things. One thing that always intrigues me is the field of security. Well I found the following story on Wired that talks about a security issue in SSL/TLS (the security mechanisms used prominently by web browsers to secure web pages). The article leads to a “no duh” paper that shows how government entities can use their power to subvert SSL/TLS security by coercing certificate authorities into issuing valid certificates (anybody who knows how SSL/TLS works already knew this was a possibility).

The part that interested me most was an excerpt from one of the cited sources in the paper. See, back in the day there was some kerfuffle over the fact that Microsoft included a couple hundred trusted root certificates in their operating system. Root certificates are what ultimately get used to validate a certificate issued to a website. Thus root certificates are the ultimate “authority” in determining if a website you are visiting is valid or not. The more root certificates you have the larger the possibility of a malicious certificate being certified as trusted (statistically speaking of course; this assumes that with more root certificates the possibility of one of those root certificate “authorities” being corruptible increases). Anyways Microsoft eventually trimmed down the number of root certificates included in their operating system. But they didn’t actually cut down the number of certificates, because according to their own developer documentation:

Root certificates are updated on Windows Vista automatically. When a user visits a secure Web site (by using HTTPS SSL), reads a secure email (S/MIME), or downloads an ActiveX control that is signed (code signing) and encounters a new root certificate, the Windows certificate chain verification software checks the appropriate Microsoft Update location for the root certificate. If it finds it, it downloads it to the system. To the user, the experience is seamless. The user does not see any security dialog boxes or warnings. The download happens automatically, behind the scenes.

Microsoft just pulled some security theater here. They didn’t cut down the number of trusted certificates; they just moved them somewhere people wouldn’t see them. If you connect to a web page that has a certificate that can’t be validated against a local root certificate, Windows will automatically go out to Microsoft’s servers and see if a root certificate there will validate the website’s certificate. If one of those root certificates validates the website’s certificate it is downloaded onto your machine automatically and the site is listed as trusted. In essence Windows trusts more root certificates than it lets on.

So what does this mean? Well it means the window for having corrupted root certificate authorities is larger. With the exception of Firefox all major web browsers depend on the underlying operating system’s root certificate store to validate web pages (Firefox actually ships with its own trusted root certificates and uses its own store as opposed to the underlying operating system’s). This also gives an attacker two potential locations to place a malicious root certificate. If an attacker was able to gain access to Microsoft’s online root certificate store and upload their own root certificate, any SSL/TLS page they created using that root certificate for validation would show as trusted in all versions of Windows (Firefox would still show the site as untrusted). Granted the window for this attack would be small as Microsoft would most likely find it almost immediately and remove it. Likewise the likelihood of such an attack occurring is very small considering the short time frame it would be valid for. But it’s an interesting thing to ponder regardless. Additionally the same attacker could create a binary of Firefox with the same malicious root certificate included and make it available for download, causing the same problem for Firefox users.
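Here's a simplified model of why the hidden online store widens the trust window (my own illustration; this is not how Windows is actually implemented, and the certificate-authority names are made up):

```python
# Two root stores: the one visible on the local machine, and the one
# Microsoft consults automatically behind the scenes.
local_roots = {"VeriSign Root CA"}
online_roots = {"VeriSign Root CA", "Some Obscure Root CA"}

# Firefox ships and consults only its own store.
firefox_roots = {"VeriSign Root CA"}

def windows_trusts(issuing_root):
    # Windows silently falls back to the online store when the
    # local store can't validate the site's certificate.
    return issuing_root in local_roots or issuing_root in online_roots

def firefox_trusts(issuing_root):
    return issuing_root in firefox_roots
```

The attack surface is the union of both stores: slip one root into the automatically consulted online store and every Windows browser that relies on the OS store trusts it, with no dialog box ever shown.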

No matter what operating system or browser you use, the validity of SSL/TLS connections eventually requires that you trust somebody (which goes against the trust no one security motto). The question here is who you are willing to trust. Only you can determine that, but knowing how a security system works and how it’s implemented is important in making that decision. Anyways I just thought that was interesting.