A Geek With Guns

Chronicling the depravities of the State.

Archive for the ‘Security’ tag

Great Claims Require Great Evidence

without comments

A couple of months ago Bloomberg made big waves with an article that claimed China had inserted hardware bugs into the server architecture of many major American companies, including Amazon and Apple. Doubts were immediately raised by a few people because the Bloomberg reporters weren’t reporting on a bugged board that they had seen; they merely cited claims made by anonymous sources (always a red flag in a news article). But the hack described, although complicated in nature, wasn’t outside the realm of possibility. Moreover, Bloomberg isn’t a tabloid; the organization has some journalistic credibility, so the threat was treated seriously.

Since the threat was being taken seriously, actual investigations were performed by the companies named in the article. This is where the credibility of the article started to falter. Apple and Amazon both announced that after investigating the matter they found no evidence that their systems were compromised. Finally, the company specifically named as the manufacturer of the compromised servers announced that an independent audit found no evidence to support Bloomberg’s claims:

SAN FRANCISCO (Reuters) – Computer hardware maker Super Micro Computer Inc told customers on Tuesday that an outside investigations firm had found no evidence of any malicious hardware in its current or older-model motherboards.

In a letter to customers, the San Jose, California, company said it was not surprised by the result of the review it commissioned in October after a Bloomberg article reported that spies for the Chinese government had tainted Super Micro equipment to eavesdrop on its clients.

Could Apple, Amazon, and Super Micro all be lying about the findings of their investigations, as some have insinuated? They certainly could be. But I subscribe to the idea that great claims require great evidence. Bloomberg has failed to produce any evidence to back its claims. If the hack described in its article were as pervasive as claimed, it should have been easy for the journalists to acquire, or at least see, one of the compromised boards. There is also the question of motivation.

Most reports indicate that China has had great success hacking systems the old-fashioned way. One of the advantages of remote software hacks is that they leave behind little in the way of hard evidence. The evidence that is left behind can usually be plausibly denied by the Chinese government (it can claim that Chinese hackers unaffiliated with the government performed a hack, for example). Why would China risk leaving behind physical evidence that is much harder to deny when it is having success with methods that are much easier to deny?

Unless Bloomberg can provide some evidence to support its claims, I think it’s fair to call bullshit on the article at this point.

Written by Christopher Burg

December 12th, 2018 at 10:30 am

Posted in Technology


Chip-and-Fail

with 2 comments

EMV cards, those cards with the chip on the front, were supposed to reduce fraud, but credit card fraud is rising. What gives? It turns out that the security provided by Chip-and-PIN doesn’t work when you don’t use it:

The reasons seem to be twofold. One, the US uses chip-and-signature instead of chip-and-PIN, obviating the most critical security benefit of the chip. And two, US merchants still accept magnetic stripe cards, meaning that thieves can steal credentials from a chip card and create a working cloned mag stripe card.

A lot of stores still don’t have credit card readers that can handle cards with a chip, so you’re stuck using the entirely insecure magnetic stripe. And most credit cards equipped with chips don’t require entering a PIN because Americans are fucking lazy:

The reason banks say they don’t want to issue PINs is that they’re worried it will add too much friction to transactions and make life difficult for their customers. “The credit-card market is pretty brutally competitive, so the first issuer who goes with PINs has to worry about whether the consumers are going to say, ‘Oh, that’s the most inconvenient card in my wallet,’’ says Allen Weinberg, the co-founder of Glenbrook Partners. “There’s this perception that maybe it’s going to be less convenient, even though some merchants would argue that PINs take less time than signatures.”

Since card holders face little in the way of liability for fraudulent transactions, they have little motivation to enter a four to six digit PIN every time they purchase something. If card holders aren’t motivated to enter a PIN, card issuers aren’t likely to require holders to enter one because doing so might convince them to get a different card. It’s tough to improve security when nobody gives a damn about security.

Eventually the level of fraud will rise to the point where card issuers will take the risk of alienating some holders and mandate the use of a PIN. When that day finally comes, card issuers will discover that Americans are absolutely able to overcome any barrier if doing so allows them to continue buying sneakers with lights in them.

Written by Christopher Burg

November 16th, 2018 at 11:00 am

Bitwarden Completes Security Audit

without comments

In my opinion one of the easiest things an individual can do to improve their overall computer security is use a password manager. I had been using 1Password for years and have nothing but good things to say about it. However, when I decided to move from macOS to Linux, I needed a different option. 1Password’s support on Linux is only available through 1Password X, which is strictly a browser plugin. Moreover, in order to use 1Password X, you need to pay for a subscription (I was using a one-time paid license for 1Password 7 on macOS as well as the one-time paid version for iOS), which I generally prefer to avoid.

Bitwarden bubbled to the top of my list because it’s both open source and can be self-hosted (which is what I ended up doing). While Bitwarden lacks several nice features that 1Password has, using it has been an overall pleasant experience. Besides the missing features, the other downside to Bitwarden had been the lack of a security audit. Two days ago the Bitwarden team announced that a third-party vendor had completed a code audit and that the results were good:

In the interest of providing full disclosure, below you will find the technical report that was compiled from the team at Cure53 along with an internal report containing a summary of each issue, impact analysis, and the actions taken/planned by Bitwarden regarding the identified issues and vulnerabilities. Some issues are informational and no action is currently planned or necessary. We are happy to report that no major issues were identified during this audit and that all issues that had an immediate impact have already been resolved in recent Bitwarden application updates.

The full report can be read here [PDF].

With this announcement I’m of the opinion that Bitwarden should be given serious consideration if you’re looking for a password manager. It’s an especially good option if you want to go the self-hosted route and/or want support for Linux, macOS, and Windows.

Written by Christopher Burg

November 14th, 2018 at 10:00 am

Posted in Technology


Your Vote Matters

without comments

After the last election the Democrats were throwing a fit over supposed Russian interference with the presidential election (funny how politicians here get bent out of shape when somebody interferes with their elections). Implied in the accusation is that an extremely sophisticated enemy such as a state actor is necessary to interfere with a United States election. However, the security of many election machines and election-related sites is so bad that an 11-year-old can break into them:

An 11-year-old boy on Friday was able to hack into a replica of the Florida state election website and change voting results found there in under 10 minutes during the world’s largest yearly hacking convention, DEFCON 26, organizers of the event said.

Thousands of adult hackers attend the convention annually, while this year a group of children attempted to hack 13 imitation websites linked to voting in presidential battleground states.

The boy, who was identified by DEFCON officials as Emmett Brewer, accessed a replica of the Florida secretary of state’s website. He was one of about 50 children between the ages of 8 and 16 who were taking part in the so-called “DEFCON Voting Machine Hacking Village,” a portion of which allowed kids the chance to manipulate party names, candidate names and vote count totals.

Florida’s website isn’t an isolated incident. The entire infrastructure supporting elections here in the United States is a mess:

Even though most states have moved away from voting equipment that does not produce a paper trail, when experts talk about “voting systems,” that phrase encompasses the entire process of voting: how citizens register, how they find their polling places, how they check in, how they cast their ballots and, ultimately, how they find out who won.

Much of that process is digital.

“This is the problem we always have in computer security — basically nobody has ever built a secure computer. That’s the reality,” Schneier said. “I want to build a robust system that is secure despite the fact that computers have vulnerabilities, rather than pretend that they don’t because no one has found them yet. And people will find them — whether it’s nation-states or teenagers on a weekend.”

And before you think that your state is smart for not using voting machines, you should be aware that computers are involved in various steps of any modern voting process. Minnesota, for example, uses paper ballots, but they’re fed into an electronic counting machine. Results from local ballot counts are transmitted electronically. Those results are then eventually transmitted electronically to media sources and from there to the masses.

If you go to cast your ballot today, know that there is no reason to believe that it will matter. There are far too many pieces of the voting infrastructure that are vulnerable to the machinations of 11-year-olds.

Written by Christopher Burg

November 6th, 2018 at 11:00 am

Making Surveillance Easy

without comments

We’re only a few days away from yet another “most important election in our lifetime.” Since the Republicans are in power, the Democrats and their sympathizers are pissed, and when they’re pissed it’s not uncommon for them to protest (Remember the last time they were out of power? They actually protested the wars that the party in power started! Those were the days!). Nobody likes it when people protest against them, so the party in power wants to keep tabs on the people who might take action against them. Fortunately for them, most protesters make this easy:

The United States government is accelerating efforts to monitor social media to preempt major anti-government protests in the US, according to scientific research, official government documents, and patent filings reviewed by Motherboard. The social media posts of American citizens who don’t like President Donald Trump are the focus of the latest US military-funded research. The research, funded by the US Army and co-authored by a researcher based at the West Point Military Academy, is part of a wider effort by the Trump administration to consolidate the US military’s role and influence on domestic intelligence.

The vast scale of this effort is reflected in a number of government social media surveillance patents granted this year, which relate to a spy program that the Trump administration outsourced to a private company last year. Experts interviewed by Motherboard say that the Pentagon’s new technology research may have played a role in amendments this April to the Joint Chiefs of Staff homeland defense doctrine, which widen the Pentagon’s role in providing intelligence for domestic “emergencies,” including an “insurrection.”

A couple of years ago a few friends and I had the opportunity to advise some protesters on avoiding government surveillance. They were using Facebook to organize and plan their protests. We had to explain to them that using Facebook for that purpose meant that every local law enforcement agency was likely receiving real-time updates on their plans. We made several recommendations, most of which involved moving planning from social media to more secure forms of communications (Signal, RetroShare, etc.). In the end they thanked us for our advice, decided that using anything but Facebook was too difficult (which made me suspect that there were undercover law enforcers amongst them), and kept handing law enforcement real-time information.

The moral of the story is that government agencies pour resources into social media surveillance because it works, and it works because most protesters are more concerned about convenience than operational security.

Security for Me, Not for Thee

without comments

Google has announced several security changes. However, it’s evident that those changes are for its security, not the security of its users:

According to Google’s Jonathan Skelker, the first of these protections that Google has rolled out today comes into effect even before users start typing their username and password.

In the coming future, Skelker says that Google won’t allow users to sign into accounts if they disabled JavaScript in their browser.

The reason is that Google uses JavaScript to run risk assessment checks on the users accessing the login page, and if JavaScript is disabled, this allows crooks to pass through those checks undetected.

Conveniently, JavaScript is also used to run a great deal of Google’s tracking software.

Disabling JavaScript is a great way to improve your browser’s security. Most browser-based malware and a lot of surveillance capabilities rely on JavaScript. With that said, disabling JavaScript entirely also makes much of the web unusable because web developers love to use JavaScript for everything, even loading text. But many sites will provide at least a hobbled experience if you choose to disable JavaScript.

Mind you, I understand why Google would want to improve its security and why it would require JavaScript if it believed that doing so would improve its overall security. But it’s important to note what is meant by improving security here and what potential consequences it has for users.

Written by Christopher Burg

November 2nd, 2018 at 10:30 am

Posted in Technology


Deafening the Bug

with 2 comments

I know a lot of people who put a piece of tape over their computer’s webcam. While this is a sane countermeasure, I’m honestly less worried about my webcam than the microphone built into my laptop. Most laptops, unfortunately, lack a hardware disconnect for the microphone, and placing a piece of tape over the microphone input often isn’t enough to prevent it from picking up sound in whatever room it’s located in. Fortunately, Apple has been stepping up its security game and now offers a solution to the microphone problem:

Little was known about the chip until today. According to its newest published security guide, the chip comes with a hardware microphone disconnect feature that physically cuts the device’s microphone from the rest of the hardware whenever the lid is closed.

“This disconnect is implemented in hardware alone, and therefore prevents any software, even with root or kernel privileges in macOS, and even the software on the T2 chip, from engaging the microphone when the lid is closed,” said the support guide.

The camera isn’t disconnected, however, because its “field of view is completely obstructed with the lid closed.”

While I have misgivings about Apple’s recent design and business decisions, I still give the company credit for pushing hardware security forward.

Implementing a hardware cutoff for the microphone doesn’t require something like Apple’s T2 chip. Any vendor could put a hardware disconnect switch on their computer that would accomplish the same thing. Almost none of them do though, even if they include hardware cutoffs for other peripherals (my ThinkPad, for example, has a built-in cover for the webcam, which is quite nice). I hope Apple’s example encourages more vendors to implement some kind of microphone cutoff switch because being able to listen to conversations generally allows gathering more incriminating evidence than merely being able to see whatever is in front of a laptop.

Written by Christopher Burg

November 1st, 2018 at 11:00 am

Good News from the Arms Race

without comments

Security is a constant arms race. When people celebrate good security news, I caution them from getting too excited because bad news is almost certainly soon to follow. Likewise, when people are demoralized by bad security news, I tell them not to lose hope because good news is almost certainly soon to follow.

Earlier this year news broke about a new smartphone-cracking device called GrayKey. The device was advertised as being able to bypass the full-disk encryption utilized by iOS. But now it appears that iOS 12 renders GrayKey mostly useless:

Now, though, Apple has put up what may be an insurmountable wall. Multiple sources familiar with the GrayKey tech tell Forbes the device can no longer break the passcodes of any iPhone running iOS 12 or above. On those devices, GrayKey can only do what’s called a “partial extraction,” sources from the forensic community said. That means police using the tool can only draw out unencrypted files and some metadata, such as file sizes and folder structures.

Within a few months I expect the manufacturer of the GrayKey device to announce an update that gets around iOS’s new protections and within a few months of that announcement I expect Apple to announce an update to iOS that renders GrayKey mostly useless again. But for the time being it appears that law enforcers’ resources for acquiring data from a properly secured iOS device are limited.

Written by Christopher Burg

October 26th, 2018 at 10:30 am

Trade-offs

without comments

I frequently recommend Signal as a secure messaging platform because it strikes a good balance between security and usability. Unfortunately, as is always the case with security, the balance between security and usability involves trade-offs. One of the trade-offs made by Signal has recently become the subject of some controversy:

When Signal Desktop is installed, it will create an encrypted SQLite database called db.sqlite, which is used to store the user’s messages. The encryption key for this database is automatically generated by the program when it is installed without any interaction by the user.

As the encryption key will be required each time Signal Desktop opens the database, it will store it in plain text to a local file called %AppData%\Signal\config.json on PCs and on a Mac at ~/Library/Application Support/Signal/config.json.

When you open the config.json file, the decryption key is readily available to anyone who wants it.

How could the developers of Signal make such an amateurish mistake? I believe the answer lies in the alternative:

Encrypting a database is a good way to secure a user’s personal messages, but it breaks down when the key is readily accessible to anyone. According to Suchy, this problem could easily be fixed by requiring users to enter a password that would be used to generate an encryption key that is never stored locally.

In order to mitigate this issue, the user would be required to do more work. If the user is required to do more work, they’ll likely abandon Signal. Since Signal provides very good transport security (messages are secure during the trip from one user to another), abandoning it could result in the user opting for an easier-to-use tool that provides weaker transport security, or none at all, which would make them less secure overall.
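To make the fix Suchy describes concrete, here is a minimal sketch of a password-derived database key using only Python’s standard library. This is not Signal’s actual code; the salt handling and iteration count are assumptions for illustration. The point is that the key can be regenerated from the passphrase on each launch instead of being written to config.json:

```python
import hashlib
import os


def derive_db_key(passphrase: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Derive a 256-bit database key from a passphrase via PBKDF2-HMAC-SHA256.

    Illustrative sketch only; the salt and iteration count are assumptions,
    not values used by Signal Desktop.
    """
    return hashlib.pbkdf2_hmac(
        "sha256",
        passphrase.encode("utf-8"),
        salt,
        iterations,
        dklen=32,
    )


if __name__ == "__main__":
    # The salt is random but not secret; it can safely live next to the database.
    # Only the passphrase, which is never stored, produces the key.
    salt = os.urandom(16)
    key = derive_db_key("correct horse battery staple", salt)
    print(key.hex())  # The hex string that would be handed to the database layer.
```

The cost is exactly the friction described above: the user has to type the passphrase every time Signal Desktop opens its database.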

iOS and many modern Android devices have an advantage in that they often have dedicated hardware that encryption keys can be written to but not read from. Once a key is written to the hardware, data can be sent to it to be either encrypted or decrypted with that key. Many desktops and laptops have similar functionality thanks to Trusted Platform Modules (TPMs), but those tend to require user setup first, whereas the smartphone option tends to be seamless to the user.
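As a rough illustration of what that write-only design buys, here is a hedged sketch in Python. The SealedKey class below is entirely hypothetical and provides none of the protection real hardware does; it only mimics the interface a secure element or TPM exposes: callers can ask for encryption and decryption, but there is no call that returns the key itself.

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography


class SealedKey:
    """Hypothetical software stand-in for a write-only hardware key store.

    A real secure element keeps the key in hardware; this class merely
    models the interface: encrypt/decrypt are exposed, the key bytes are not.
    """

    def __init__(self) -> None:
        # The key is generated ("written") once and never handed back out.
        self.__key = AESGCM.generate_key(bit_length=256)

    def encrypt(self, plaintext: bytes) -> bytes:
        nonce = os.urandom(12)
        return nonce + AESGCM(self.__key).encrypt(nonce, plaintext, None)

    def decrypt(self, blob: bytes) -> bytes:
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(self.__key).decrypt(nonce, ciphertext, None)


if __name__ == "__main__":
    sealed = SealedKey()
    token = sealed.encrypt(b"message database contents")
    print(sealed.decrypt(token))  # b'message database contents'
```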

There is another mitigation option here, which is to utilize full-disk encryption to encrypt all of the contents of your hard drive. While full-disk encryption won’t prevent resident malware from accessing Signal’s database, it will prevent the database from being copied off of the computer by a thief or law enforcers (assuming they seize the computer while it’s powered off rather than while the operating system is running and the drive’s decryption key is resident in memory).

Written by Christopher Burg

October 25th, 2018 at 10:30 am

The End of TLS 1.0 and 1.1

with one comment

Every major browser developer has announced that they will drop support for Transport Layer Security (TLS) 1.0 and 1.1 by 2020:

Apple, Google, Microsoft, and Mozilla have announced a unified plan to deprecate the use of TLS 1.0 and 1.1 early in 2020.

TLS (Transport Layer Security) is used to secure connections on the Web. TLS is essential to the Web, providing the ability to form connections that are confidential, authenticated, and tamper-proof. This has made it a big focus of security research, and over the years, a number of bugs that had significant security implications have been found in the protocol. Revisions have been published to address these flaws.

Waiting until 2020 gives website administrators plenty of time to upgrade their sites, which is why I’ll be rolling my eyes when the cutoff date arrives and a bunch of administrators whine about the major browsers “breaking” their websites.
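For administrators who want to know where they stand before that date, here is a small probe using Python’s standard ssl module that asks a server whether it will still complete a TLS 1.0 handshake. The host name is a placeholder, and be aware that some OpenSSL builds refuse to offer TLS 1.0 from the client side at their default security level, in which case the probe reports a failure regardless of what the server supports.

```python
import socket
import ssl


def accepts_tls10(host: str, port: int = 443) -> bool:
    """Return True if the server completes a handshake restricted to TLS 1.0."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    # Certificate checks are disabled because we only care about the
    # protocol version the server is willing to negotiate.
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    ctx.minimum_version = ssl.TLSVersion.TLSv1  # offer only TLS 1.0
    ctx.maximum_version = ssl.TLSVersion.TLSv1
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version() == "TLSv1"
    except (ssl.SSLError, OSError):
        return False


if __name__ == "__main__":
    host = "example.com"  # placeholder; substitute the site you administer
    print(f"{host} accepts TLS 1.0: {accepts_tls10(host)}")
```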

Every time browser developers announce years ahead of time that support for some archaic standard will be dropped, there always seems to be a slew of websites, including many major ones, that continue relying on the dropped standard after the cutoff date.

Written by Christopher Burg

October 17th, 2018 at 11:00 am