A Geek With Guns

Chronicling the depravities of the State.

Archive for the ‘Security’ tag

The End of TLS 1.0 and 1.1

with one comment

Every major browser developer has announced plans to drop support for Transport Layer Security (TLS) 1.0 and 1.1 in early 2020:

Apple, Google, Microsoft, and Mozilla have announced a unified plan to deprecate the use of TLS 1.0 and 1.1 early in 2020.

TLS (Transport Layer Security) is used to secure connections on the Web. TLS is essential to the Web, providing the ability to form connections that are confidential, authenticated, and tamper-proof. This has made it a big focus of security research, and over the years, a number of bugs that had significant security implications have been found in the protocol. Revisions have been published to address these flaws.

Waiting until 2020 gives website administrators plenty of time to upgrade their sites, which is why I’ll be rolling my eyes when the cutoff date arrives and a bunch of administrators whine about the major browsers “breaking” their websites.
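For administrators who want to get ahead of the cutoff, Python's standard `ssl` module can build a client that refuses the legacy protocol versions, mirroring what the browsers will do (a minimal sketch; `ssl.TLSVersion` requires Python 3.7+ with a modern OpenSSL):

```python
import ssl

def modern_client_context() -> ssl.SSLContext:
    """Build a client SSLContext that refuses TLS 1.0 and 1.1,
    mirroring the minimum version the major browsers will enforce."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0/1.1 handshakes
    return ctx
```

Wrapping a socket with this context against a server that only offers TLS 1.0 or 1.1 fails the handshake, which is exactly the "breakage" those administrators will complain about.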

Every time browser developers announce years ahead of time that support will be dropped for some archaic standard, there always seems to be a slew of websites, including many major websites, that continue relying on the dropped standard after the cutoff date.

Written by Christopher Burg

October 17th, 2018 at 11:00 am

A Lot of Websites Don’t Fix Security Issues

without comments

Last year Google announced that it would be removing the Symantec root certificate from Chrome’s list of trusted certificates (this is because Symantec signed a lot of invalid certificates). This notification was meant to give web administrators time to acquire new certificates to replace their Symantec-signed ones. The time of removal is fast approaching and many web administrators still haven’t updated their certificates:

Chrome 70 is expected to be released on or around October 16, when the browser will start blocking sites that run older Symantec certificates issued before June 2016, including legacy branded Thawte, VeriSign, Equifax, GeoTrust and RapidSSL certificates.

Yet despite more than a year to prepare, many popular sites are not ready.

Security researcher Scott Helme found 1,139 sites in the top one million sites ranked by Alexa, including Citrus, SSRN, the Federal Bank of India, Pantone, the Tel-Aviv city government, Squatty Potty and Penn State Federal to name just a few.

The headline of this article is, “With Chrome 70, hundreds of popular websites are about to break.” A more accurate headline would have been, “Administrators of hundreds of websites failed to fix major security issue.” Chrome isn’t the culprit in this story. Google is doing the right thing by removing the root certificate of an authority that failed to take proper precautions when issuing certificates. The administrators of these sites on the other hand have failed to do their job of providing a secure connection for their users.
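As a rough illustration of the check Chrome 70 performs, the certificate dict that Python's `ssl` module returns from `SSLSocket.getpeercert()` carries the issuer and issuance date needed to flag an affected certificate. This is only a sketch: matching on the issuer's organization name is a simplification of Chrome's actual root-store logic, and the brand list just follows the quoted article.

```python
import ssl

# Symantec-operated brands whose pre-June-2016 certificates Chrome 70 distrusts
# (list taken from the quoted article; illustrative only).
LEGACY_BRANDS = {"Symantec", "Thawte", "VeriSign", "GeoTrust", "RapidSSL", "Equifax"}
CUTOFF = ssl.cert_time_to_seconds("Jun 01 00:00:00 2016 GMT")

def likely_distrusted(cert: dict) -> bool:
    """Given a certificate dict as returned by SSLSocket.getpeercert(),
    flag certs issued by a legacy Symantec brand before June 2016."""
    issued = ssl.cert_time_to_seconds(cert["notBefore"])
    # issuer is a tuple of RDNs, each a tuple of (attribute, value) pairs
    orgs = {v for rdn in cert.get("issuer", ()) for k, v in rdn
            if k == "organizationName"}
    return issued < CUTOFF and any(
        brand in org for org in orgs for brand in LEGACY_BRANDS)
```

An administrator could feed this the dict from a live connection to their own site and know whether Chrome 70 is about to block it.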

Written by Christopher Burg

October 10th, 2018 at 10:30 am

All Data Is for Sale

without comments

What happens when a website that sells your personal information asks you to input your phone number to enable two-factor authentication? Your phone number is sold to advertisers:

Facebook is not content to use the contact information you willingly put into your Facebook profile for advertising. It is also using contact information you handed over for security purposes and contact information you didn’t hand over at all, but that was collected from other people’s contact books, a hidden layer of details Facebook has about you that I’ve come to call “shadow contact information.” I managed to place an ad in front of Alan Mislove by targeting his shadow profile. This means that the junk email address that you hand over for discounts or for shady online shopping is likely associated with your account and being used to target you with ads.

There really is no reason for a website to require a phone number to enable two-factor authentication. Short Message Service (SMS) is not a secure protocol, so utilizing it for two-factor authentication, which many websites sadly do, is not a good idea. Moreover, as this study has demonstrated, handing over your phone number just gives the service provider another piece of information about you to sell.

Instead of SMS-based two-factor authentication, websites should at a minimum offer two-factor authentication via apps that implement the Time-based One-time Password (TOTP) or HMAC-based One-time Password (HOTP) algorithms, such as Authy and Google Authenticator. Better yet, websites should offer two-factor authentication that utilizes hardware tokens like YubiKeys.
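The TOTP scheme those apps implement is small enough to sketch with nothing but Python's standard library (per RFC 6238: HMAC-SHA1 over a 30-second time counter, with dynamic truncation; the test secret below is the base32 encoding of the RFC's own test key):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30, now=None) -> str:
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of elapsed time steps since the epoch.
    counter = int((time.time() if now is None else now) // period)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at an offset derived
    # from the low nibble of the last byte, mask the sign bit.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both sides derive the code from a shared secret and the current time, nothing travels over an insecure channel like SMS; the server simply computes the same value and compares.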

Written by Christopher Burg

October 4th, 2018 at 10:30 am

Properly Warning Users About Business Model Changes

without comments

I have an update from my previous article about how the developers of GPGTools botched their changeover from offering a free software suite to a paid software suite. It appears that they listened to those of us who criticized them for not properly notifying their users that the latest update will change the business model because this is the new update notification:

That’s how you properly inform your users about business model changes.

Written by Christopher Burg

October 3rd, 2018 at 10:00 am

Cloudflare Makes Tor Use More Bearable

without comments

One of the biggest annoyances of using the Tor Browser is that so many sites that rely on Cloudflare services throw up CAPTCHA challenges before allowing you to view content. Yesterday Cloudflare announced a change to its service that should make life more bearable for Tor users:

Cloudflare launched today a new service named the “Cloudflare Onion Service” that can distinguish between bots and legitimate Tor traffic. The main advantage of this new service is that Tor users will see far less, or even no CAPTCHAs when accessing a Cloudflare-protected website via the Tor Browser.

The new Cloudflare Onion Service needed the Tor team to make “a small tweak in the Tor binary,” hence it will only work with recent versions of the Tor Browser –the Tor Browser 8.0 and the new Tor Browser for Android, both launched earlier this month.

Hallelujah!

Written by Christopher Burg

September 21st, 2018 at 10:00 am

The Power of Public Shaming

without comments

Every major security breach is followed by calls for politicians to enact more stringent regulations. When I see people demanding additional government regulations I like to point out that there is a list of alternative solutions that can yield far better results (especially since regulations, being a product of government, are extremely rigid and slow to change, which makes them a solution ill-suited to fast moving markets). One of those solutions is public shaming. It turns out that public shaming is often a viable solution to security issues:

See the theme? Crazy statements made by representatives of the companies involved. The last one from Betfair is a great example and the entire thread is worth a read. What it boiled down to was the account arguing with a journalist (pro tip: avoid arguing with those in a position to write publicly about you!) that no, you didn’t just need a username and birth date to reset the account password. Eventually, it got to the point where Betfair advised that providing this information to someone else would be a breach of their terms. Now, keeping in mind that the username is your email address and that many among us like cake and presents and other birthday celebratory patterns, it’s reasonable to say that this was a ludicrous statement. Further, I propose that this is a perfect case where shaming is not only due, but necessary. So I wrote a blog post…

Shortly after that blog post, three things happened and the first was that it got press. The Register wrote about it. Venture Beat wrote about it. Many other discussions were held in the public forum with all concluding the same thing: this process sucked. Secondly, it got fixed. No longer was a mere email address and birthday sufficient to reset the account, you actually had to demonstrate that you controlled the email address! And finally, something else happened that convinced me of the value of shaming in this fashion:

A couple of months later, I delivered the opening keynote at OWASP’s AppSec conference in Amsterdam. After the talk, a bunch of people came up to say g’day and many other nice things. And then, after the crowd died down, a bloke came up and handed me his card – “Betfair Security”. Ah shit. But the hesitation quickly passed as he proceeded to thank me for the coverage. You see, they knew this process sucked – any reasonable person with half an idea about security did – but the internal security team alone telling management this was not cool wasn’t enough to drive change.

As I mentioned above, regulations tend to be rigid and slow to change. Public shaming, on the other hand, is often almost instantaneous. It seldom takes long for a company tweet that makes an outrageous security claim to be bombarded with criticism. Within minutes there are retweets by people mocking the statement, replies from people explaining why the claim is outrageous, and journalists writing about how outrageous the claim is. That public outrage, unlike C-SPAN, quickly reaches the public at large. Once the public becomes aware of the company’s claim and why it’s bad, the company has to begin worrying about losing customers and, by extension, profits.

Written by Christopher Burg

September 19th, 2018 at 10:00 am

Don’t Trust Snoops

without comments

Software that allows family members to spy on one another is big business. But how far can you trust a company that specializes in enabling abusers to keep a constant eye on their victims? Not surprisingly, such companies can’t be trusted very much:

mSpy, the makers of a software-as-a-service product that claims to help more than a million paying customers spy on the mobile devices of their kids and partners, has leaked millions of sensitive records online, including passwords, call logs, text messages, contacts, notes and location data secretly collected from phones running the stealthy spyware.

Less than a week ago, security researcher Nitish Shah directed KrebsOnSecurity to an open database on the Web that allowed anyone to query up-to-the-minute mSpy records for both customer transactions at mSpy’s site and for mobile phone data collected by mSpy’s software. The database required no authentication.

Oops.

I can’t say that I’m terribly surprised by this. Companies that make software aimed at allowing family members to spy on one another already have, at least in my opinion, a pretty flexible moral framework. I wouldn’t be surprised if all of the data collected by mSpy was stored in plaintext in order to make it easily accessible to other buyers.

Written by Christopher Burg

September 11th, 2018 at 11:00 am

You Are Responsible for Your Own Security

without comments

One of the advertised advantages of Apple’s iOS platform is that all software loaded onto iOS devices has to be verified by Apple. This so-called walled garden is meant to keep the bad guys out. However, anybody who studies military history quickly learns that sitting behind a wall is usually a death sentence. Eventually the enemy breaches the wall. Enemies have breached Apple’s walls before and they continue to do so:

In a blog post entitled “Location Monetization in iOS Apps,” the Guardian team detailed 24 applications from the Apple iOS App Store that pushed data to 12 different “location-data monetization firms”—companies that collect precise location data from application users for profit. The 24 identified applications were found in a random sampling of the App Store’s top free applications, so there are likely many more apps for iOS surreptitiously selling user location data. Additionally, the Guardian team confirmed that one data-mining service was connected with apps from over 100 local broadcasters owned by companies such as Sinclair, Tribune Broadcasting, Fox, and Nexstar Media.

iOS has a good permission system, and users can prevent apps from accessing location information, but far too many people are willing to grant access to their location information to any application that asks. If a walled garden were perfectly secure, users wouldn’t have to worry about granting unnecessary permissions because the wall guards wouldn’t allow anything malicious inside. Unfortunately, the wall guards aren’t perfect and malicious stuff does get through, which brings me to my second point.

What happens when a malicious app manages to breach Apple’s walled garden? Ideally it should be immediately removed but the universe isn’t ideal:

Adware Doctor is a top app in Apple’s Mac App Store, sitting at number five in the list of top paid apps and leading the list of top utilities apps, as of writing. It says it’s meant to prevent “malware and malicious files from infecting your Mac” and claims to be one of the best apps to do so, but unbeknownst to its users, it’s also stealing their browser history and downloading it to servers in China.

In fairness to Apple, the company did eventually remove Adware Doctor from its app store. Eventually is the keyword though. How many other malicious apps have breached Apple’s walled garden? How long do they manage to hide inside of the garden until they are discovered and how quickly do the guards remove them once they are discovered? Apparently Apple’s guards can be a bit slow to react.

Even in a walled garden you are responsible for your own security. You need to know how to defend yourself in case a bad guy manages to get inside of the defensive walls.

Written by Christopher Burg

September 11th, 2018 at 10:30 am

Posted in Technology


Another Day, Another Exploit Discovered in Intel Processors

without comments

The last couple of years have not been kind to processor manufacturers. Ever since the Meltdown and Spectre attacks were discovered, the speculative execution feature that is present on most modern processors has opened the door to a world of new exploits. However, Intel has been hit especially hard. The latest attack, given the fancy name Foreshadow, exploits the speculative execution feature on Intel processors to bypass security features meant to keep sensitive data out of the hands of unauthorized processes:

Foreshadow is a speculative execution attack on Intel processors which allows an attacker to steal sensitive information stored inside personal computers or third party clouds. Foreshadow has two versions, the original attack designed to extract data from SGX enclaves and a Next-Generation version which affects Virtual Machines (VMs), hypervisors (VMM), operating system (OS) kernel memory, and System Management Mode (SMM) memory.

It should be noted that, as the site says, this exploit is not known to work against ARM or AMD processors. However, it would be wise to keep an eye on this site. The researchers are still testing other processors, and it may turn out that this attack works on processors not made by Intel as well.

As annoying as these hardware attacks are, I’m glad that the security industry is focusing more heavily on hardware. Software exploits can be devastating but if you can’t trust the hardware that the software is running on, no amount of effort to secure the software matters.

Written by Christopher Burg

August 15th, 2018 at 10:30 am

Posted in Technology


The Body Camera Didn’t Record the Summary Execution Because It Was Hacked

without comments

The aftermath of DEF CON, when the high-profile exploits discussed at the event hit the headlines, is always fun. Most of the headlines have focused on the complete lack of security that exists on electronic voting machines. I haven’t touched on that because it’s an exercise in beating a dead horse at this point. A story that I found far more interesting due to its likely consequences is the news about the exploits found in popular law enforcer body cameras:

At Def Con this weekend, Josh Mitchell, a cybersecurity consultant with Nuix, showed how various models of body cameras can be hacked, tracked and manipulated. Mitchell looked at devices produced by five companies — Vievu, Patrol Eyes, Fire Cam, Digital Ally and CeeSc — and found that they all had major security flaws, Wired reports. In four of the models, the flaws could allow an attacker to download footage, edit it and upload it again without evidence of any of those changes having occurred.

I assume that these exploits are a feature, not a bug.

Law enforcers already have a problem with “malfunctioning” body cameras. There are numerous instances where multiple law enforcers involved in a shooting with highly questionable circumstances all claimed that their body cameras malfunctioned simultaneously. What has been missing up until this point is a justification for those malfunctions. I won’t be surprised if we start seeing law enforcers claim that their body cameras were hacked in the aftermath of these kinds of shootings. Moreover, the ability of unauthorized individuals to download, edit, and upload footage is another great feature because footage that reflects poorly on law enforcers can be edited and if the edit is discovered, officials can claim that it must have been edited by evil hackers.

Written by Christopher Burg

August 14th, 2018 at 11:00 am