A Geek With Guns

Chronicling the depravities of the State.

Archive for the ‘Technology’ Category

Another Bang Up Job

with 2 comments

Legacy cellular protocols contained numerous gaping security holes, which is why attention was paid to security when Long-Term Evolution (LTE) was being designed. Unfortunately, one can pay attention to something and still ignore it or fuck it up:

The attacks work because of weaknesses built into the LTE standard itself. The most crucial weakness is a form of encryption that doesn’t protect the integrity of the data. The lack of data authentication makes it possible for an attacker to surreptitiously manipulate the IP addresses within an encrypted packet. Dubbed aLTEr, the researchers’ attack causes mobile devices to use a malicious domain name system server that, in turn, redirects the user to a malicious server masquerading as Hotmail. The other two weaknesses involve the way LTE maps users across a cellular network and leaks sensitive information about the data passing between base stations and end users.

Encrypting data is only one part of the puzzle. Once data is encrypted the integrity of the data must be protected as well. This is because encrypted data looks like gibberish until it is decrypted. The only way to know whether the encrypted data you’ve received hasn’t been tampered with is if some kind of cryptographic integrity verification has been implemented and used.

How can you protect yourself from this kind of attack? Using a Virtual Private Network (VPN) tunnel is probably your best bet. The OpenVPN protocol is used by numerous VPN providers that offer clients for both iOS and Android (as well as other major operating systems such as Windows, Linux, and macOS). OpenVPN, unlike LTE, verifies the integrity of encrypted data and rejects any data that appears to have been tampered with. While using a VPN tunnel may not prevent a malicious attacker from redirecting your LTE traffic, it will ensure that the attacker can’t see or alter your data: anything the attacker injects will fail your client’s integrity checks, so your client will simply stop receiving or transmitting data rather than accept forged traffic.
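The difference between confidentiality and integrity can be sketched in a few lines of Python. This is a deliberately toy construction (a hash-based stream cipher plus encrypt-then-MAC with HMAC), not LTE’s actual encryption, but it demonstrates the same malleability: an attacker who guesses the plaintext layout can flip ciphertext bits to rewrite, say, a DNS resolver address without ever decrypting anything, and only an integrity check catches the forgery:

```python
import hmac, hashlib, os

def xor_stream(key, data):
    # Toy stream cipher (keystream from SHA-256 in counter mode), used ONLY
    # to illustrate malleability; real systems should use a vetted AEAD cipher.
    keystream = bytearray()
    counter = 0
    while len(keystream) < len(data):
        keystream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, keystream))

enc_key, mac_key = os.urandom(32), os.urandom(32)
plaintext = b"resolver=8.8.8.8"
ciphertext = xor_stream(enc_key, plaintext)
# Encrypt-then-MAC: the tag commits to the exact ciphertext bytes.
tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()

# The attacker never sees the plaintext, but by guessing its layout can XOR
# in the difference between the real and forged address -- the core trick
# behind aLTEr.
guessed, forged = b"resolver=8.8.8.8", b"resolver=6.6.6.6"
tampered = bytes(c ^ (g ^ f) for c, g, f in zip(ciphertext, guessed, forged))

# Without integrity checking, the forgery decrypts "successfully":
print(xor_stream(enc_key, tampered))  # b'resolver=6.6.6.6'

# With integrity checking, the forgery is rejected before use:
print(hmac.compare_digest(tag, hmac.new(mac_key, tampered, hashlib.sha256).digest()))  # False
```

The forged packet is indistinguishable from a valid one to the decryption step alone, which is exactly why encryption without authentication is insufficient.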

Written by Christopher Burg

July 3rd, 2018 at 11:00 am

Welcome to Postliterate America

with 3 comments

In my opinion the United States shows all the signs of a society beginning a descent into postliteracy. One of the biggest signs is rapidly declining interest in recreational reading:

The share of Americans who read for pleasure on a given day has fallen by more than 30 percent since 2004, according to the latest American Time Use Survey from the Bureau of Labor Statistics.

In 2004, roughly 28 percent of Americans age 15 and older read for pleasure on a given day. Last year, the figure was about 19 percent.

That steep drop means that aggregate reading time among Americans has fallen, from an average of 23 minutes per person per day in 2004 to 17 minutes per person per day in 2017.

I can’t say that I’m surprised by these results. The idea behind a postliterate society is that multimedia technology has advanced to the point where the ability to read and write is unnecessary. In our age of cheap data storage, data transmission, and devices capable of rendering high-definition sound and video, many of which fit in a pocket, we are less reliant on written information than we once were. Moreover, voice dictation is advancing rapidly. When I first tried voice dictation on a computer I wrote it off as useless because at the time it was. Today my phone’s voice dictation is actually pretty decent. What’s probably more amazing than the improvement of voice dictation software is the fact that it’s not nearly as important as it once was because I can just send the audio clip itself to somebody.

Will literacy go the way of shorthand and cursive? It very well could. The technology is already at a point where literacy isn’t as important as it once was. In a few more years it will probably advance to the point where literacy is an almost entirely unnecessary skill. Once that happens it may take only one or two generations until literacy is a skill held exclusively by a handful of individuals who have an interest in archaic knowledge.

Written by Christopher Burg

July 3rd, 2018 at 10:30 am

Posted in Technology


Another Processor Vulnerability

without comments

Hardware has received far less scrutiny in the past than software when it comes to security. That has changed in recent times and, not surprisingly, the previous lack of scrutiny has resulted in the discovery of a lot of major vulnerabilities. The latest relates to a feature of Intel processors called Hyper-Threading:

Last week, developers on OpenBSD—the open source operating system that prioritizes security—disabled hyperthreading on Intel processors. Project leader Theo de Raadt said that a research paper due to be presented at Black Hat in August prompted the change, but he would not elaborate further.

The situation has since become a little clearer. The Register reported on Friday that researchers at Vrije Universiteit Amsterdam in the Netherlands have found a new side-channel vulnerability on hyperthreaded processors that’s been dubbed TLBleed. The vulnerability means that processes that share a physical core—but which are using different logical cores—can inadvertently leak information to each other.

In a proof of concept, researchers ran a program calculating cryptographic signatures using the Curve 25519 EdDSA algorithm implemented in libgcrypt on one logical core and their attack program on the other logical core. The attack program could determine the 256-bit encryption key used to calculate the signature with a combination of two milliseconds of observation, followed by 17 seconds of machine-learning-driven guessing and a final fraction of a second of brute-force guessing.

Like the last slew of processor vulnerabilities, the software workaround for this one involves a performance hit. Unfortunately, the long-term fix for these vulnerabilities involves redesigning hardware, which could destroy an assumption on which modern software development relies: that hardware will continue to become faster.

This assumption has been at risk for a while because chip designers are running into transistor size limitations, which could finally do away with Moore’s Law. But designing secure hardware may also require surrendering a bit on the performance front. It’s possible that the next generation of processors won’t have the same raw performance as the current generation. What would this mean? Probably not much for most users. However, it could impact software developers to some extent. Many software development practices are based on the assumption that the next generation of hardware will be faster and that it is therefore unnecessary to focus on writing performant code. If the next generation of processors has the same performance as the current generation or, even worse, less, an investment in performant code could pay dividends.

Obviously this is pure speculation on my part but it’s an interesting scenario to consider.

Written by Christopher Burg

June 27th, 2018 at 11:00 am

The End of Enforceable Prohibitions

with one comment

I’m fond of pointing out to prohibitionists that the era of enforceable prohibitions is over:

In the very near future, governments will lose the ability to keep guns, drones, and other forbidden goods out of the hands of their subjects. They’ll also be rendered impotent to enforce trade and technology embargoes. Power is shifting from the state to individuals and small groups courtesy of additive manufacturing—aka 3D printing—technology.

Additive manufacturing is poised to revolutionize whole industries—destroying some jobs while creating new opportunities. That’s according to a recent report from the prestigious RAND Corporation, and there’s plenty of evidence to support the dynamic and “disruptive” view of the future that the report promises.

Throughout history power has ebbed and flowed. At times centralized authorities are able to wield their significant power to oppress the masses. At other times events weaken those centralized authorities and the average person once again finds themselves holding a great deal of power.

Technological advancements are quickly weakening the power of the centralized nation-states. Encryption technology is making their surveillance apparatus less effective. Cryptocurrencies are making it difficult for nation-states to monitor and block transactions. Manufacturing technology is allowing individuals to make increasingly complex objects from the comfort of their own homes. The Internet has made freely trading information so easy that censorship is quickly becoming impossible.

We live in exciting times.

Written by Christopher Burg

June 12th, 2018 at 11:00 am

How Things Change

without comments

The big news in developer circles this week is that Microsoft acquired GitHub. I admit that the news didn’t fill me with happiness since I’m not a fan of the trend of everything being gobbled up by a handful of big companies. But Microsoft has been making a rather dramatic shift in recent years. The company has become far friendlier toward the open source community and has been releasing a lot of terrific developer tools. This shift has made me hopeful that Microsoft will be a good steward for GitHub. Moreover, things could have turned out far worse:

Microsoft was not alone in chasing GitHub, which it agreed to acquire for $7.5 billion on Monday. Representatives from Alphabet’s Google were also talking to the company about an acquisition in recent weeks, according to people familiar with the deal talks.

Not too long ago if you had told me that both Microsoft and Google were looking to acquire GitHub, I’d have hoped for Google. But today I’m happy that of the two companies Microsoft ended up buying GitHub.

The biggest problem I have with Google, besides its business model based on surveilling users, is its habit of abandoning products. Google Reader, Google Talk, Google Health, Google Wave, and more have been discontinued by Google. Some of the products were discontinued shortly after they were released and/or were discontinued with little notice given to users. Microsoft, on the other hand, is well-known for supporting products for a long time and giving reasonable notice when it does decide to discontinue a product. If Google had acquired GitHub, there’s no telling how long it would have been kept around. Since Microsoft acquired GitHub, it’ll probably be around for a long time.

It’s funny how things can change so rapidly. Google was once the darling child of the technology industry, but now its star is descending. Meanwhile, Microsoft was once the epitome of evil, but it is now rehabilitating its reputation.

Written by Christopher Burg

June 6th, 2018 at 10:30 am

Posted in Technology


Hardware is Cheaper than Developer Time

without comments

Is your application performing poorly? Just throw more hardware at it! This attitude has become mainstream thanks to the widespread availability of cheap hardware and the high cost of developer time. Why pay a team of developers tens or hundreds of thousands of dollars to improve the performance of an application when you can buy a handful of relatively cheap servers and still be able to provide the performance your customers need?

What’s interesting about this equation is that consumers have been mostly shielded from it. However, when this equation does impact consumers, it usually raises some important questions:

Capcom will give Japanese Switch owners a chance to play last year’s Resident Evil 7 on the Switch later this week. But the port will only be playable as an online stream running on Capcom’s own servers, rather than a downloaded version that would run directly on the Switch’s relatively low-powered hardware.


But such a port would have required time and programming resources that Capcom might not have been willing to spare. With cloud streaming, on the other hand, getting the game onto the Switch is likely just a matter of setting up some servers to run the existing PC version, then writing a simple client to stream inputs and video/audio to and from the Switch. Streaming to the Switch means not having to compromise on graphical detail, but it could lead to stuttering and frame rate issues if the Internet connection isn’t absolutely solid.

Nintendo has been at a disadvantage for the last several console generations. Its consoles have been less powerful than its competitors’, which has contributed to developers not porting games to them. When games have been ported, developer time had to be invested in downscaling them enough to run on the less powerful hardware.

With the widespread availability of high-speed Internet connectivity, an alternative strategy to porting a game directly has become possible. Instead of porting the game itself, the game can be run on more powerful hardware and the video can be streamed to the player. This would theoretically allow any game to run on almost any platform. A user could just as easily stream the game on their Switch as their phone.

But the universe abhors perfection, so this strategy naturally has trade-offs. The most obvious is latency. If the game is being run on a remote server, every button press must be transmitted to that server. Even with a high-speed Internet connection that latency can be noticeable, especially in extremely fast-paced games. But the more sinister trade-off, in my opinion, is that players can’t own the game since it exists exclusively on remote servers. At some point Capcom will decide that continuing to operate the Biohazard 7 servers costs more money than the game is making. When that happens, the servers will be turned off and the players who paid for the game will no longer be able to play it.
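The latency trade-off is easy to rough out with back-of-the-envelope arithmetic. The sketch below uses assumed stage timings (none of these numbers are measurements of Capcom’s actual service) to show why streamed input-to-display latency dwarfs a locally rendered frame:

```python
# Rough input-to-photon latency budget for cloud game streaming.
# Every value is an assumption for illustration, in milliseconds.
budget = {
    "input upload (half RTT)": 20,
    "server game/render frame": 17,  # roughly one 60 fps frame server-side
    "video encode": 5,
    "video download (half RTT)": 20,
    "client decode + display": 10,
}

total = sum(budget.values())
local_frame = 1000 / 60  # a locally rendered 60 fps frame

print(f"streamed end-to-end: {total} ms")        # 72 ms with these assumptions
print(f"local 60 fps frame:  {local_frame:.1f} ms")
```

Even under these fairly generous assumptions, the streamed path takes several local frame times before the player sees the result of a button press, which is why fast-paced games suffer the most.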

I’ve lamented the fact that consumers own fewer of the products they “buy.” The idea that paying a producer money for a product resulted in exclusive ownership has been replaced by the idea of licensing. You don’t purchase a tractor, you pay to license the software that runs on it and John Deere just happens to throw in the hardware for free. In the case of Biohazard 7, gamers aren’t buying the game, they’re paying for the privilege to stream the game for as long as Capcom allows.

Written by Christopher Burg

May 22nd, 2018 at 10:30 am

Posted in Technology


Eight Percent of the Time It Works Every Time

without comments

The Transportation Security Administration (TSA) is the embodiment of government incompetence. It has failed 95 percent of red team exercises, which doesn’t bode well for the agency’s ability to detect weapons before air travelers enter the “secure” area of an airport. However, the United States doesn’t have a monopoly on government incompetence. The United Kingdom (UK) has its own program with a false positive rate of 92 percent:

A British police agency is defending (this link is inoperable for the moment) its use of facial recognition technology at the June 2017 Champions League soccer final in Cardiff, Wales—among several other instances—saying that despite the system having a 92-percent false positive rate, “no one” has ever been arrested due to such an error.

Of course nobody has been arrested due to a false positive. When a system has a false positive rate of 92 percent it’s quickly ignored by whoever is monitoring it.

False positives can be just as dangerous as misses. While misses allow a target to avoid a detection system, false positives breed complacency that quickly allows false positives to turn into misses. If a law enforcer relies on a system that constantly reports suspects it hasn’t actually found, he quickly learns to ignore every report it produces. When the system does correctly identify a suspect, there’s a good chance that the law enforcer monitoring it won’t even bother to verify the report. Instead he’ll just assume it’s another false positive and continue sipping his tea or whatever it is that UK law enforcers do most of the time.
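A quick Bayes’ theorem calculation shows why systems like this drown their operators in false alarms. The numbers below are hypothetical (the article doesn’t give the system’s hit rate or the crowd size scanned), but they illustrate the base rate problem: even a classifier that rarely misfires on any individual face produces mostly false positives when genuine suspects are rare:

```python
def alert_precision(prevalence, hit_rate, false_alarm_rate):
    """Fraction of alerts that are genuine, via Bayes' theorem.

    prevalence       -- fraction of scanned faces that are real suspects
    hit_rate         -- P(alert | suspect)
    false_alarm_rate -- P(alert | innocent)
    """
    true_alerts = prevalence * hit_rate
    false_alerts = (1 - prevalence) * false_alarm_rate
    return true_alerts / (true_alerts + false_alerts)

# Hypothetical stadium scenario: 170,000 faces scanned, 30 genuine
# suspects, a system that flags 90% of real suspects but also 1% of
# innocent faces.
precision = alert_precision(30 / 170_000, 0.90, 0.01)
print(f"{precision:.3f}")  # about 0.016: over 98% of alerts are false
```

With those assumptions, fewer than 2 in 100 alerts point at a real suspect, which is roughly the regime the Cardiff deployment reportedly operated in. The lesson is that at low prevalence, per-face accuracy matters far less than the sheer volume of innocent faces scanned.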

Written by Christopher Burg

May 9th, 2018 at 10:00 am

The Subtle Ways Technology Shapes Our Lives

with 3 comments

Some schools in the United Kingdom have announced that they’re removing analog clocks because students are unable to read them:

Some U.K. schools are ditching analog clocks from test rooms because a generation of kids raised on digital clocks can’t read them and are getting stressed about time running out during tests, London’s Telegraph reports.

“The current generation aren’t as good at reading the traditional clock face as older generations,” Malcolm Trobe, deputy general secretary of the U.K.’s Association of School and College Leaders, told The Telegraph.

I, along with many other people, initially scoffed at this announcement. Teaching somebody how to read an analog clock takes a matter of minutes. On the other hand, as a few friends pointed out to me, the skill is almost entirely unnecessary today. Most of us carry a pocket computer that displays the current time, usually in the friendlier digital format. Since most people carry around a time-telling device, public clocks are less important than they once were. People whose pocket computer displays the time in a digital format don’t need to know how to read an analog clock.

This is just another subtle, albeit major, way that technology is shaping our lives. Another example is cursive writing. I learned how to write in cursive around second or third grade and continue the practice today because it’s faster than writing block letters. However, cursive is indecipherable to many younger individuals. Why? Because the ability to write quickly is less important in a world where computers are prevalent. It’s rare for me to be in a situation where I have to write something. Usually I can type it out on a computer or tap it into my phone. The generation that came after mine never knew a world where computers weren’t prevalent and the current generation is growing up with touchscreen devices (a technology I once saw in my youth, although in a very rudimentary form, and thought it was the coolest thing ever) that fit in their pockets and can automatically transform their spoken words into typed text or transmit it directly.

When I was in school, pocket calculators were already prevalent, which caused us students to ask our math teachers why we had to memorize so many mathematical operations. Our teachers responded that we wouldn’t always have a calculator with us. I can’t say that they were wrong. At the time I rarely carried a calculator with me; pocket space was at a premium and I couldn’t carry everything with me. Fast forward to today: I always have a calculator with me because it’s an app on my phone. My teachers’ response to my question, although true back then, is no longer true.

Remember paper maps and compasses? I do because I used to have to use them to navigate in unfamiliar areas. If I was in an unfamiliar city and needed to get somewhere, I had to either get out of my car and ask somebody for directions (which may or may not have resulted in good directions) or pull out a paper map to determine my current location, the location of my destination, and the best route between the two. I then used a compass to keep myself going in the right direction. Now I type my destination into my phone and let it guide me there. In addition to being faster because it already knows where everything is, it can also provide a better route because it knows the current traffic conditions. Navigating with a map and compass is another skill that is largely irrelevant in a world of ubiquitous smartphones and cellular coverage.

Many of the skills that I learned were important at one time but are of little importance today. When I sit down to think about it, it’s fascinating how technology has changed my world in so many subtle ways. My skills of reading an analog clock, cursive writing, performing math in my head, and navigating with a map and compass are pretty much irrelevant. I wonder what other skills that I learned will be made less relevant by technology in the coming years.

Written by Christopher Burg

May 8th, 2018 at 11:00 am

Posted in Technology


Oftentimes Dumb is Better

without comments

The philosophy modern hardware manufacturers seem to predominantly follow is that any product can be improved by putting a chip in it. While it may be convenient to have speakers that can wirelessly connect to your phone and stream music from it, such convenience carries a significant downside: near-future obsolescence:

But more important to me, the Nocs app — which you need to configure to use Wi-Fi networking and update firmware — hasn’t been updated since October 2014, meaning that the iOS app doesn’t work at all anymore, since Nocs never updated it with a 64-bit version. (There’s apparently an Android app, but reviews indicate that it seems to crash more often than not, so that probably isn’t a great solution, either.)

This would all be less of a problem if I had another way to use the speakers, but since I don’t have the Bluetooth model, I’m stuck with either Airplay or a 3.5mm cable (which isn’t super convenient to access, since they’re on a bookshelf). Plus, Airplay itself as a standard is on its way out, so even if the NS2 pair that I have work without any problems, they’ll be obsolete and incompatible with the new wave of speakers that will be out whenever Apple decides to finally release Airplay 2.

In this case the author has the fortune of being able to fall back on a standard 3.5mm headphone jack, but many “smart” devices don’t include legacy support.

Dumb devices tend to have a longer shelf life than their smart brethren because they tend to operate on standards that have been around for decades. Speakers that attach to a receiver with two copper wires have worked the same way for generations and will likely continue to work that way for decades to come. What makes dumb speakers even better is that they’re modular. If a smart speaker becomes obsolete, you have to replace the whole speaker. If the receiver you plug your dumb speakers into becomes obsolete, you can replace the receiver while keeping your bitchin’ speakers.

There are a lot of legitimate reasons to add a chip to old products but there is also a trade off. In many cases, at least in my opinion, dumb devices enjoy enough advantages in shelf life that they remain superior to their smart brethren.

Written by Christopher Burg

April 17th, 2018 at 10:00 am

Posted in Technology


Embracing the Darknet

without comments

Big changes came to the Internet shortly after Congress passed the Stop Enabling Sex Traffickers Act (SESTA). SESTA, like most legislation, has a name that sounds good on the surface but conceals some heinous provisions. One of the major ones holds website owners criminally liable for user generated content. This resulted in some drastic changes to sites like Reddit and Craigslist:

So far, four subreddits related to sex have banned: Escorts, Male Escorts, Hookers, and SugarDaddy. None were what could accurately be described as advertising forums, though (to varying degrees) they may have helped connect some people who wound up in “mutually beneficial relationships.” The escort forums were largely used by sex workers to communicate with one another, according to Partridge. Meanwhile, the “hooker” subreddit “was mostly men being disgusting,” according to Roux, “but also was a place that sometimes had people answering educational questions in good faith.”


Reddit yesterday announced changes to its content policy, now forbidding “transactions for certain goods and services,” including “firearms, ammunition, or explosives” and “paid services involving physical sexual contact.” While some of the prohibited exchanges are illegal, many are not.

Yet they run close enough up against exchanges that could be illegal that it’s hard for a third-party like Reddit to differentiate. And the same goes for forums where sex workers post educational content, news, safety and legal advice. Without broad Section 230 protections, Reddit could be in serious financial and legal trouble if they make the wrong call.

The passage of SESTA set a precedent that will certainly expand. Today Section 230 protections can be revoked for user generated content about sex trafficking. Tomorrow they could be revoked for user generated content involving hate speech, explaining the chemistry and biology behind how prohibited drugs work, showing the mechanics of how a machine gun operates, and so on. User generated content is now a liability and will only become more of one as the precedent expands.

Will this rid the world of content about sex work, drugs, and guns? Of course not. It will merely push that content to anonymized servers, commonly referred to as the “darkweb.” As laws make hosting content on the non-anonymized Internet a legal hazard, Internet users will find that they need tools like I2P and the Tor Browser to access more and more of the content they desire. The upside to this is that it will lead to a tremendous increase in resources available to developers and operators of “darkweb” technologies. Eventually the laws passed to thwart unapproved behavior will again make restricting unapproved behavior all but impossible.

Written by Christopher Burg

March 27th, 2018 at 11:00 am