There Is No Free Web

Ad blockers are wonderful plugins that save bandwidth (and therefore money for people paying by usage) and protect computers against malware. But a lot of people, namely website operators that rely on advertisements for revenue, hate them:

This is an exciting and chaotic time in digital news. Innovators like BuzzFeed and Vox are rising, old stalwarts like The New York Times and The Washington Post are finding massive new audiences online, and global online ad revenue continues to rise, reaching nearly $180 billion last year. But analysts say the rise of ad blocking threatens the entire industry—the free sites that rely exclusively on ads, as well as the paywalled outlets that rely on ads to compensate for the vast majority of internet users who refuse to pay for news.

[…]

Sean Blanchfield certainly doesn’t share Carthy’s views. He worries that ad blocking will decimate the free Web.

As the war between advertisers and ad blockers rages on, there’s something we need to address: the use of the phrase “free web.” There is no “free web.” There has never been a “free web.” Websites have always required servers, network connectivity, developers, content producers, and other costs. This war isn’t between a “free web” and a pay web; it’s between a revenue model where viewers are the product and a revenue model where the content is the product.

If you’re using a service and not paying for it, the content isn’t the product; you are. The content exists only to get you to visit the website, either to increase the number of page views (and therefore give the owners a good argument for why advertisers should advertise on their sites) or to get you to hand over your personal information so it can be sold to advertisers. In exchange for being the product, other costs are also pushed onto you, such as bandwidth and the risk of malware infection.

Ad blockers can’t decimate the “free web” because it doesn’t exist. What they will likely do is force website operators to find alternate means of generating revenue. Several content providers have started experimenting with new revenue models. The Wall Street Journal, for example, puts a lot of articles behind a paywall, and The New York Times gives readers access to a certain number of articles per month for free but expects payment after that. Other content providers, like Netflix, charge a monthly subscription for access to any content. There are a lot of ways to make money off of content without relying on viewers as the product.

As this war continues, always remember TANSTAAFL (there ain’t no such thing as a free lunch); otherwise you might get suckered into believing there is a “free web” and let that color your perception.

Peripherals Are Potentially Dangerous

Some auto insurance companies are exploring programs where customers can receive reduced rates in exchange for attaching a dongle to their vehicle’s on-board diagnostics (OBD) port. The dongles then use the diagnostics information provided by the vehicle to track your driving habits. If you’re a “good” driver you can get a discount (and if you’re a “bad” driver you’ll probably get charged more down the road). It seems like a good deal for drivers who always obey speed limits and such, but the OBD port has access to everything in the vehicle, which means any dongle plugged into it could cause all sorts of havoc. Understandably, auto insurance companies are unlikely to use such dongles for evil, but that doesn’t mean somebody else won’t:

At the Usenix security conference today, a group of researchers from the University of California at San Diego plan to reveal a technique they could have used to wirelessly hack into any of thousands of vehicles through a tiny commercial device: A 2-inch-square gadget that’s designed to be plugged into cars’ and trucks’ dashboards and used by insurance firms and trucking fleets to monitor vehicles’ location, speed and efficiency. By sending carefully crafted SMS messages to one of those cheap dongles connected to the dashboard of a Corvette, the researchers were able to transmit commands to the car’s CAN bus—the internal network that controls its physical driving components—turning on the Corvette’s windshield wipers and even enabling or disabling its brakes.

“We acquired some of these things, reverse engineered them, and along the way found that they had a whole bunch of security deficiencies,” says Stefan Savage, the University of California at San Diego computer security professor who led the project. The result, he says, is that the dongles “provide multiple ways to remotely…control just about anything on the vehicle they were connected to.”
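Part of the reason a compromised dongle is so dangerous is that the CAN bus itself has no concept of sender authentication or encryption: any device that can put a well-formed frame on the bus is trusted by everything else on it. As a rough sketch of how little a “command” on that network actually is (the arbitration ID and payload below are made-up placeholders, not the commands the researchers used, and actually transmitting the frame would require a SocketCAN socket or CAN adapter library, which isn’t shown), here is a Linux SocketCAN-style frame built by hand:

```typescript
// Sketch: build a Linux SocketCAN-style can_frame by hand.
// The arbitration ID (0x244) and payload are hypothetical placeholders, and
// nothing here actually transmits the frame; the point is the format itself.
function buildCanFrame(arbitrationId: number, data: Uint8Array): Buffer {
  if (data.length > 8) {
    throw new Error("classic CAN frames carry at most 8 data bytes");
  }
  // struct can_frame layout: 4-byte CAN ID, 1-byte length, 3 bytes of padding,
  // then up to 8 data bytes. Nothing identifies or authenticates the sender.
  const frame = Buffer.alloc(16);
  frame.writeUInt32LE(arbitrationId & 0x7ff, 0); // 11-bit standard identifier
  frame.writeUInt8(data.length, 4);              // data length code (DLC)
  Buffer.from(data).copy(frame, 8);              // payload
  return frame;
}

// A hypothetical "body control" message: sixteen bytes is all it takes.
console.log(buildCanFrame(0x244, Uint8Array.from([0x01, 0xff])).toString("hex"));
```

The specific bytes don’t matter; what matters is that once an attacker controls any device on the bus, issuing commands is trivial because the protocol assumes every node is trustworthy.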

I guarantee any savings you get from your insurance company for attaching one of these dongles to your OBD port will be dwarfed by the cost of crashing your vehicle because your brakes were suddenly disabled.

This is a perfect example of two entities with little experience in security compounding their failures to create a possible catastrophe. Automotive manufacturers are finally experiencing the consequences of having paid no attention to the security of their on-board systems. Insurance agencies now have a glimpse of what can happen when you fail to understand the technology you’re working with. While a dongle that tracks the driving behavior of customers seems like a really good idea, if that dongle is remotely accessible and insecure it can actually be a far bigger danger than benefit.

I wouldn’t attach such a device to my vehicle because it creates a remote connection to the vehicle (if it didn’t, the insurance companies wouldn’t have any reliable way of acquiring the data from the unit), and that is just asking for trouble, as this story shows.

Oracle. Because You Suck. And We Hate You.

Unpatched vulnerabilities are worth a lot of money to malicious hackers. Hoping to outbid more nefarious types, many large software companies, including Google, Microsoft, and Mozilla, have begun offering cash payments for disclosed vulnerabilities. Companies that don’t have bounty programs will often publicly credit you for the discovery. But Oracle will do neither. In fact, Oracle’s Chief Security Officer went out of her way to describe Oracle’s official policy regarding vulnerability disclosure (the blog post was later, smartly, removed from Oracle’s site, but the Internet is forever so we get to laugh anyway). The post contains some real gems:

If we determine as part of our analysis that scan results could only have come from reverse engineering (in at least one case, because the report said, cleverly enough, “static analysis of Oracle XXXXXX”), we send a letter to the sinning customer, and a different letter to the sinning consultant-acting-on-customer’s behalf – reminding them of the terms of the Oracle license agreement that preclude reverse engineering, So Please Stop It Already. (In legalese, of course. The Oracle license agreement has a provision such as: “Customer may not reverse engineer, disassemble, decompile, or otherwise attempt to derive the source code of the Programs…” which we quote in our missive to the customer.) Oh, and we require customers/consultants to destroy the results of such reverse engineering and confirm they have done so.

It’s good to get this out of the way early. Oracle, upon receiving a report of a vulnerability, will first investigate whether discovering the vulnerability required reverse engineering its code. If it did, Oracle’s way of saying thanks is to send you a legal threat for violating the license agreement. Although I’ve never sold a vulnerability to a malicious hacker, I’m fairly certain their reaction is not to threaten you with legal action. Score one for the “bad guys” (I’m using quotes here because I’m not sure if malicious hackers really are bad guys when compared to Oracle).

Q. What does Oracle do if there is an actual security vulnerability?

Pay the person who disclosed it instead of selling it to malicious hackers, right?

A. I almost hate to answer this question because I want to reiterate that customers Should Not and Must Not reverse engineer our code. However, if there is an actual security vulnerability, we will fix it. We may not like how it was found but we aren’t going to ignore a real problem – that would be a disservice to our customers. We will, however, fix it to protect all our customers, meaning everybody will get the fix at the same time. However, we will not give a customer reporting such an issue (that they found through reverse engineering) a special (one-off) patch for the problem. We will also not provide credit in any advisories we might issue. You can’t really expect us to say “thank you for breaking the license agreement.”

Or not. People kindly disclosing discovered vulnerabilities to Oracle will only receive the legal threat. No payment or even public credit will be given. Meanwhile, malicious hackers will give you cash for unpatched vulnerabilities, so they score another point.

Q. But one of the issues I found was an actual security vulnerability so that justifies reverse engineering, right?

Under these circumstances I’m sure Oracle will forgive you for violating the license agreement since malicious hackers aren’t going to abide by it either, right?

A. Sigh. At the risk of being repetitive, no, it doesn’t, just like you can’t break into a house because someone left a window or door unlocked.

I guess not. Although I’m not sure how breaking into a house is an accurate analogy here. A better analogy would be buying a lock, taking it apart, and discovering a mechanical flaw that makes it easy to bypass. Entering a home uninvited is quite a bit different from being invited into a home (and a customer who paid Oracle for a license was certainly invited to use the company’s software) and then discovering that the locks inside could be easily bypassed due to a design flaw. Most homeowners would probably thank you for pointing out that the locks they purchased are shitty. Regardless of the analogy, a malicious hacker isn’t likely to care that you “broke into a house” or violated a license agreement. Score yet another point for them.

Q. Hey, I’ve got an idea, why not do a bug bounty? Pay third parties to find this stuff!

That’s a good question. Oracle can’t possibly argue that bug bounty programs are a bad idea, right?

A. Bug bounties are the new boy band (nicely alliterative, no?) Many companies are screaming, fainting, and throwing underwear at security researchers**** to find problems in their code and insisting that This Is The Way, Walk In It: if you are not doing bug bounties, your code isn’t secure. Ah, well, we find 87% of security vulnerabilities ourselves, security researchers find about 3% and the rest are found by customers. (Small digression: I was busting my buttons today when I found out that a well-known security researcher in a particular area of technology reported a bunch of alleged security issues to us except – we had already found all of them and we were already working on or had fixes. Woo hoo!)

Jesus Christ. Really? Since Oracle finds 87 percent of vulnerabilities itself, bug bounty programs are useless? I guess the other 13 percent are somehow valueless because they’re the minority? Seriously, what the fuck is Oracle thinking here? Malicious hackers pay per vulnerability. They don’t give a shit whether it falls into the minority of some irrelevant metric kept by Oracle. And it only takes one vulnerability to put your customers at risk. That’s the fourth point for malicious hackers.

Q. Surely the bad guys and some nations do reverse engineer Oracle’s code and don’t care about your licensing agreement, so why would you try to restrict the behavior of customers with good motives?

I’m not even going to waste your time by asking whether Oracle has found some common sense by now. We know it hasn’t.

A. Oracle’s license agreement exists to protect our intellectual property. “Good motives” – and given the errata of third party attempts to scan code the quotation marks are quite apropos – are not an acceptable excuse for violating an agreement willingly entered into. Any more than “but everybody else is cheating on his or her spouse” is an acceptable excuse for violating “forsaking all others” if you said it in front of witnesses.

Oracle seems to have the same mentality as those who put up those wretched “no guns allowed” signs: a belief that words can somehow stop people from acting in a certain fashion. The question of malicious hackers reverse engineering Oracle’s code in violation of its license agreement isn’t one that lends itself to arguing about the moral high ground. They are doing it, so it’s in your best interest to have other people, people who want to help you thwart the malicious hackers, doing the same. Once again we return to the fact that malicious hackers aren’t going to give you a speech on morality; they’re going to pay you. That’s five points to them and zero to Oracle.

Considering what we learned in this blog post, what motivation does anybody have to disclose discovered vulnerabilities in Oracle’s software? At worst you’ll receive a legal threat and at best you’ll receive nothing at all. Meanwhile, malicious hackers will pay you cash for that vulnerability.

The reason companies like Google, Microsoft, and Mozilla established bounty programs is because they realize vulnerabilities are a valuable commodity and they have to outbid the competition.

I’ve long wondered why anybody does business with Oracle considering the company’s history. But this post really confirmed my dislike of the company. There are times where you have to set aside trivial disagreements, like a customer violating a license agreement, for the good of your business (which is also the good of the customers in this case). If somebody discloses a vulnerability to you, you shouldn’t waste time asking a bunch of irrelevant legal questions and you certainly shouldn’t threaten them with legal action. Instead you should verify the bug and pay the person who disclosed it to you, rather than leaving them to sell it to somebody with a vested interest in exploiting your customers. Make it worth somebody’s while to disclose vulnerabilities to you so they don’t disclose them to people who are going to target your customers.

Without Government Who Would Pollute The Rivers

I’ve been told the Environmental Protection Agency (EPA) is the lone barrier that stands between us and the entire country being turned into an uninhabitable wasteland by greedy corporations that want to fill our lakes and rivers with industrial waste. But I’ve also been told that socialism can work, so I don’t put a lot of weight on what others have told me. The EPA, as with most government agencies, doesn’t really do what its name implies. It doesn’t protect the environment so much as license pollution. When somebody is dumping waste into a body of water, the EPA steps in and demands a little piece of the action in exchange for looking the other way. And if nobody is polluting a body of water, the EPA steps in and does it:

DURANGO — A spill that sent 1 million gallons of wastewater from an abandoned mine into the Animas River, turning the river orange, set off warnings Thursday that contaminants threaten water quality for those downstream.

The Environmental Protection Agency confirmed it triggered the spill while using heavy machinery to investigate pollutants at the Gold King Mine, north of Silverton.

I know somebody reading this will feel the need to point out that the EPA didn’t do this on purpose, which I’m sure is true. That’s not the point. The point is the lack of recourse. When an individual or corporation dumps waste into a body of water, people usually sic the EPA on them. But what happens in this case? Who watches the watchmen? Does the EPA sue itself and transfer some of its money to itself? Will another agency, maybe an oversight committee, step in to fine the EPA and thereby transfer some of the state’s wealth from itself to itself?

Herein lies the problem. The government, which is the biggest polluter, is held entirely unaccountable because it has declared a monopoly on environmental protection. As it has declared this monopoly for itself, there is no way to hold it accountable because it’s in its best interest not to enforce its own laws against itself. And if anybody else tries to hold it accountable, it attacks them for breaking the law.

The biggest failure of environmentalism is its reliance on the state. A state has no interest in protecting the environment, its interests lie in polluting it without consequence and getting a piece of any polluting action.

The Dangers Of Centralization

Markets tend to have redundancies. We generally refer to this characteristic as “competition.” When there is demand for a good or service everybody wants a piece of the action, so monopolies are almost nonexistent in free markets. Statism, on the other hand, tends towards centralization. Where markets have competitors trying to provide you with the best good or service possible, states actively try to push out any competition and establish monopolies.

The problem with centralization is that when a system fails, any dependent system necessarily fails along with it. The State Department, which has a monopoly on issuing visas, recently experienced what it referred to as a computer glitch that effectively stopped the issuance of any visas:

The State Department says it is working around the clock on a computer problem that’s having widespread impact on travel into the U.S. The glitch has practically shut down the visa application process.

Of the 50,000 visa applications received every day, only a handful of emergency visas are getting issued.

I’m sure this news made the neocons and neoliberals giddy because it meant foreign workers couldn’t enter the country. But it should give everybody cause for concern because it gives us another glimpse into how fragile statism is. In a free market this kind of failure would be a minor annoyance because customers could simply seek out another provider.

Centralization is the antithesis of robustness, which is one reason statism is so dangerous. Under statism a single failure can hurt millions of people, whereas failures in a free market environment tend to be limited in scope, usually amounting to nothing more than forcing customers to seek alternate providers.

Why You Want Paranoid People To Comment On Features

When discussing security with the average person I’m usually accused of being paranoid. I carry a gun in case I have to defend myself? I must be paranoid! I only allow guests at my dwelling to use a separate network isolated from my own? I must be paranoid! I encrypt my hard drive? I must be paranoid! It probably doesn’t help that I live by the motto: just because you’re paranoid doesn’t mean they’re not out to get you.

Paranoid people aren’t given enough credit. They see things that others fail to see. Consider all of the application programming interface (API) calls the average browser makes available to website developers. To the average person, and even to many engineers, those API calls aren’t particularly threatening to user privacy. After all, what does it matter if a website can see how much charge is left in your battery? But a paranoid person would point out that such information is dangerous because it gives website developers more data to uniquely identify users:

The battery status API is currently supported in the Firefox, Opera and Chrome browsers, and was introduced by the World Wide Web Consortium (W3C, the organisation that oversees the development of the web’s standards) in 2012, with the aim of helping websites conserve users’ energy. Ideally, a website or web-app can notice when the visitor has little battery power left, and switch to a low-power mode by disabling extraneous features to eke out the most usage.

W3C’s specification explicitly frees sites from needing to ask user permission to discover their remaining battery life, arguing that “the information disclosed has minimal impact on privacy or fingerprinting, and therefore is exposed without permission grants”. But in a new paper from four French and Belgian security researchers, that assertion is questioned.

The researchers point out that the information a website receives is surprisingly specific, containing the estimated time in seconds that the battery will take to fully discharge, as well the remaining battery capacity expressed as a percentage. Those two numbers, taken together, can be in any one of around 14 million combinations, meaning that they operate as a potential ID number. What’s more, those values only update around every 30 seconds, however, meaning that for half a minute, the battery status API can be used to identify users across websites.

The people who developed the W3C specification weren’t paranoid enough. It was ignorant to claim that reporting battery information to websites would have only a minimal impact on privacy, especially when you combine it with all of the other uniquely identifiable data websites can obtain about users.

Uniquely identifying users becomes easier with each piece of data you can obtain. Being able to obtain battery information alone may not be terribly useful, but combining it with other seemingly harmless data can quickly give a website enough data points to identify a specific user. Although that alone may not be enough to reveal their real identity, it is enough to start following them around on the web until enough personal information has been tied to them to reveal who they are.
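To see how little code it takes to turn this into a tracking data point, here is a minimal browser sketch of the sort of thing a site could do. The navigator.getBattery() call is the real battery status API described above, but the hash and the choice of extra attributes to mix in are my own illustration, not the researchers’ method:

```typescript
// Minimal sketch: fold battery readings into a short-lived visitor identifier.
// navigator.getBattery() is the real battery status API; the FNV-1a hash and
// the extra attributes mixed in below are illustrative, not the paper's method.
async function batteryFingerprint(): Promise<string> {
  const battery = await (navigator as any).getBattery(); // not in default TS DOM typings
  const readings = [
    Math.round(battery.level * 100), // remaining capacity as a percentage
    battery.dischargingTime,         // estimated seconds until the battery dies
    navigator.userAgent,             // a couple of the many other data points a
    screen.width,                    // site can combine to narrow users down
    screen.height,
  ].join("|");

  // Tiny FNV-1a hash, just to show the readings collapsing into one ID.
  let hash = 0x811c9dc5;
  for (let i = 0; i < readings.length; i++) {
    hash ^= readings.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash.toString(16);
}

batteryFingerprint().then((id) => console.log("visitor id for ~30 seconds:", id));
```

Because the battery values only change about every 30 seconds, two sites that both compute something like this within that window can conclude they are looking at the same visitor, even if the user cleared cookies between visits.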

The moral of this story is that paranoia isn’t properly appreciated.

Security Is A Growing Threat To Security

Where a person stands on the subject of effective cryptography is a good litmus test for how technically knowledgeable they are. Although any litmus test is limited, you can tell immediately that an individual doesn’t understand cryptography if they in any way support state-mandated weaknesses. Mike Rogers, a former Michigan politician, expressed his ignorance of cryptography in an editorial that should demonstrate to everybody why his opinion on this matter can be safely discarded:

Back in the 1970s and ’80s, Americans asked private companies to divest from business dealings with the apartheid government of South Africa. In more recent years, federal and state law enforcement officials have asked — and required — Internet service providers to crack down on the production and distribution of child pornography.

You know where this is going when the magic words “child pornography” are mentioned in the first paragraph.

Take another example: Many communities implement landlord responsibility ordinances to hold them liable for criminal activity on their properties. This means that landlords have certain obligations to protect nearby property owners and renters to ensure there isn’t illicit activity occurring on their property. Property management companies are typically required to screen prospective tenants.

Because of the title of the editorial I know this is supposed to be about encryption. By using the words “child pornography” I know this article is meant to argue against effective cryptography. However, I have no bloody clue how landlords play into this mess.

The point of all these examples?

There’s a point?

That state and federal laws routinely act in the interest of public safety at home and abroad. Yet now, an emerging technology poses a serious threat to Americans — and Congress and our government have failed to address it.

Oh boy, this exercise in mental gymnastics is going to be good. Rogers could be going for the gold!

Technology companies are creating encrypted communication that protects their users’ privacy in a way that prevents law enforcement, or even the companies themselves, from accessing the content. With this technology, a known ISIS bomb maker would be able to send an email from a tracked computer to a suspected radicalized individual under investigation in New York, and U.S. federal law enforcement agencies would not be able to see ISIS’s attack plans.

Child pornography and terrorism in the same editorial? He’s pulling out all the stops! Do note, however, that he was unable to cite a single instance where a terrorist attack would have been thwarted if only effective encryption hadn’t been in the picture. If you’re going to opt for fear mongering, it’s best not to create hypothetical scenarios that can be shot down. Just drop the boogeyman’s name and move on; otherwise you look like an even bigger fool than you would have.

What could a solution look like? The most obvious one is that U.S. tech companies keep a key to that encrypted communication for legitimate law enforcement purposes. In fact, they should feel a responsibility and a moral obligation to do so, or else they risk upending the balance between privacy and safety that we have so carefully cultivated in this country.

Here is where his entire argument falls apart. First he claims “state and federal laws routinely act in the interest of public safety” and now he’s claiming that state and federal laws should work against public safety.

Let’s analyze what a hypothetical golden key would do. According to Rogers it would allow law enforcement agents to gain access to a suspect’s encrypted data. This is true. In fact it would allow anybody with a copy of that key to gain access to the encrypted data of anybody using that company’s products. Remember when Target and Home Depot’s networks were breached and all of their customers’ credit card data was compromised? Or that time Sony’s PlayStation Network was breached and its customers’ credit card data was compromised? How about the recent case of that affair website getting breached and its customers’ personal information ending up in unknown hands? And then there was the breach that exposed all of Hacking Team’s dirty secrets and many of its private keys to the Internet. These are not hypothetical scenarios cooked up by somebody trying to scare you into submission but real world examples of company networks being breached and customer data being compromised.

Imagine the same thing happening to a company that held a golden key that could decrypt any customer’s encrypted data. Suddenly a single breach would compromise not only personal information but also every device owned by every one of the company’s customers. If Apple, for example, were to implement Rogers’ proposed plan and its golden key were compromised, every iOS user (which includes government employees, I might add) would be vulnerable to having their encrypted data decrypted by anybody who acquired a copy of the key (and let’s not lie to ourselves, in the case of such a compromise the key would be posted publicly on the Internet).

Network breaches aren’t the only risk. Any employee with access to the golden key would be able to decrypt any customer’s device. Even if you trust law enforcement do you trust one or more random employees at a company to protect your data? A key with that sort of power would be worth a lot of money to a foreign government. Do you trust somebody to not hand a copy of the key over to the Chinese government for a few billion dollars?
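To make the risk concrete, here is a toy sketch of what “keeping a key for legitimate law enforcement purposes” amounts to structurally: each message is encrypted under a fresh per-message key, and that key is then wrapped both for the recipient and for the vendor’s escrow (“golden”) key. The construction below (Node’s crypto module, RSA key wrapping, AES-GCM) is my own illustration rather than any company’s actual design, but the structural problem it shows is the one described above: whoever holds the escrow private key can unwrap every message key ever produced.

```typescript
import { generateKeyPairSync, publicEncrypt, privateDecrypt,
         randomBytes, createCipheriv, createDecipheriv } from "crypto";

// Toy key-escrow sketch (not any vendor's real design): the recipient AND the
// vendor's "golden key" each receive a wrapped copy of the per-message key.
const recipient = generateKeyPairSync("rsa", { modulusLength: 2048 });
const goldenKey = generateKeyPairSync("rsa", { modulusLength: 2048 });

function encryptWithEscrow(plaintext: string) {
  const messageKey = randomBytes(32); // fresh AES-256 key for this message
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", messageKey, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return {
    iv, ciphertext, tag: cipher.getAuthTag(),
    forRecipient: publicEncrypt(recipient.publicKey, messageKey),
    forEscrow: publicEncrypt(goldenKey.publicKey, messageKey), // the liability
  };
}

// Anyone holding the escrow private key (an employee, a network intruder, a
// foreign government that bought a copy) can recover EVERY message key.
function escrowDecrypt(msg: ReturnType<typeof encryptWithEscrow>): string {
  const messageKey = privateDecrypt(goldenKey.privateKey, msg.forEscrow);
  const decipher = createDecipheriv("aes-256-gcm", messageKey, msg.iv);
  decipher.setAuthTag(msg.tag);
  return Buffer.concat([decipher.update(msg.ciphertext), decipher.final()]).toString("utf8");
}

console.log(escrowDecrypt(encryptWithEscrow("suspect's attack plans")));
```

The golden key doesn’t add a second lock so much as a second door, and unlike the user’s own key it is a single point of failure shared across every customer.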

There is no way a scenario involving a golden key can end well, which brings us to our next point.

Unfortunately, the tech industry argues that Americans have an absolute right to absolute privacy.

How is that unfortunate? More to the point, based on what I wrote above, we can see that the reason companies don’t implement cryptographic backdoors isn’t that they believe in some absolute right to privacy but that the risks of doing so are too great a liability.

The only thing Rogers demonstrated in his editorial was his complete ignorance of cryptography. Generally the opinions of people who are entirely ignorant of a topic are discarded, and this should be no exception.

Without Government Who Would Arm The Terrorists

What’s the most effective way to reduce gun violence in the United States? According to those who oppose self-defense, mandating background checks for every firearm transfer would do it. It’s an idea that sounds good to a lot of people on paper, but only because they haven’t stopped to think about what it entails. Background checks require government approval for firearm transfers. Mandating background checks for every firearm transfer would, according to opponents of self-defense, ensure bad guys couldn’t acquire firearms. The biggest flaw in this plan is that it relies on the government, which is more than happy to provide firearms to violent individuals:

Five years before he was shot to death in the failed terrorist attack in Garland, Texas, Nadir Soofi walked into a suburban Phoenix gun shop to buy a 9-millimeter pistol.

At the time, Lone Wolf Trading Co. was known among gun smugglers for selling illegal firearms. And with Soofi’s history of misdemeanor drug and assault charges, there was a chance his purchase might raise red flags in the federal screening process.

Inside the store, he fudged some facts on the form required of would-be gun buyers.

What Soofi could not have known was that Lone Wolf was at the center of a federal sting operation known as Fast and Furious, targeting Mexican drug lords and traffickers. The idea of the secret program was to allow Lone Wolf to sell illegal weapons to criminals and straw purchasers, and track the guns back to large smuggling networks and drug cartels.

Instead, federal agents lost track of the weapons and the operation became a fiasco, particularly after several of the missing guns were linked to shootings in Mexico and the 2010 killing of U.S. Border Patrol Agent Brian Terry in Arizona.

This is actually the same flaw every plan that relies on government suffers from. How can you rely on an entity that steals, kidnaps, assaults, and murders people to stop theft, kidnappings, assaults, and murders? Do you really think an entity that drops bombs on children in foreign countries and pardons the violent actions of its law enforcers is going to have any moral opposition to handing a firearm to a person known to have a history of violence? Fast and Furious was a Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF) operation that involved selling firearms to people suspected of being involved with violent drug cartels. Supposedly the operation was meant to track where the firearms went. Those firearms did end up in the hands of drug cartels, but the ATF didn’t do a very good job of tracking them.

A background check system can’t work if it relies on an entity that is motivated to provide firearms to violent people. Since the government is motivated to do exactly that, the background check system supported by opponents of self-defense can’t decrease gun violence.

The White House Is Still Pissed At Edward Snowden

Since Edward Snowden aired the National Security Agency’s (NSA) dirty laundry, the United States government has wanted his head. Meanwhile, far saner individuals have been begging the White House to pardon him. This begging came in the form of a petition posted on the White House website that had been ignored since 2013. After two long years, the White House has finally given its answer; Edward Snowden will not be pardoned:

Unsurprisingly, the White House formally announced Tuesday that it will not be granting a pardon to Edward Snowden anytime soon.

Immediately after Snowden was formally charged in 2013 with espionage, theft, and conversion of government property, supporters began petitioning the White House to pardon the famed former National Security Agency contractor.

I don’t think anybody is surprised. Snowden’s actions made the Internet a safer place for everybody, and that directly conflicts with the White House’s desire to spy on everybody. Any decent nation would give somebody like Snowden, who revealed unlawful activities being perpetrated by a government agency, a medal and declare a national holiday in his honor.

Adding further insult to injury, Lisa Monaco, who is apparently the president’s adviser on homeland security and counterterrorism, made this laughable statement to justify the White House’s decision not to grant a pardon:

Instead of constructively addressing these [civil liberties] issues, Mr. Snowden’s dangerous decision to steal and disclose classified information had severe consequences for the security of our country and the people who work day in and day out to protect it.

If he felt his actions were consistent with civil disobedience, then he should do what those who have taken issue with their own government do: Challenge it, speak out, engage in a constructive act of protest, and—importantly—accept the consequences of his actions. He should come home to the United States, and be judged by a jury of his peers—not hide behind the cover of an authoritarian regime. Right now, he’s running away from the consequences of his actions.

I say the statement is laughable because the last time a whistleblower tried to “constructively address” the NSA’s unlawful activities the state sicced the Federal Bureau of Investigation (FBI) on him. Back in 2001 William Binney tried going through the appropriate channels to get the NSA’s domestic spying activities addressed. He ended up looking down the barrel of several FBI agents’ guns as they raided his home in an attempt to intimidate him into shutting up. That was one of several good stories he told on the panel discussion I was on with him.

When you threaten somebody at gunpoint for trying to get the NSA’s domestic spying addressed through proper channels, you can’t expect the next person to do the same.

Bernie Sanders The National Socialist

The candidates running for the 2016 presidential election truly are the bottom of the barrel. None of them are qualified to lead a herd of cattle into a slaughterhouse, let alone a nation. Although the playing field will likely change between now and the actual election, the current darling child of the Democratic Party is Bernie Sanders. The interesting thing about Sanders is that he, unlike most of those wishy-washy ninnies in the Democratic Party, outright admits he’s a socialist. There are two major types of socialists, national and international, and the question has been which of the two schools Sanders belongs to. Now we know:

Ezra Klein: You said being a democratic socialist means a more international view. I think if you take global poverty that seriously, it leads you to conclusions that in the US are considered out of political bounds. Things like sharply raising the level of immigration we permit, even up to a level of open borders. About sharply increasing …

Bernie Sanders: Open borders? No, that’s a Koch brothers proposal.

Ezra Klein: Really?

Bernie Sanders: Of course. That’s a right-wing proposal, which says essentially there is no United States. …

Ezra Klein: But it would make …

Bernie Sanders: Excuse me …

Ezra Klein: It would make a lot of global poor richer, wouldn’t it?

Bernie Sanders: It would make everybody in America poorer —you’re doing away with the concept of a nation state, and I don’t think there’s any country in the world that believes in that. If you believe in a nation state or in a country called the United States or UK or Denmark or any other country, you have an obligation in my view to do everything we can to help poor people. What right-wing people in this country would love is an open-border policy. Bring in all kinds of people, work for $2 or $3 an hour, that would be great for them. I don’t believe in that. I think we have to raise wages in this country, I think we have to do everything we can to create millions of jobs.

He wants to keep all of the “benefits” of socialism within the United States, so he’s firmly in the national socialist camp. It’s also hilarious to hear him claim that open borders is a Koch brothers conspiracy, err, proposal. The Koch brothers are to the left-wing statists what George Soros is to the right-wing statists, a boogeyman responsible for all that is wrong in the world.

Sanders also subscribes to the camp that believes open borders would hamper the creation of millions of jobs. Apparently he thinks the government should protect the jobs of individuals who are legitimately challenged by individuals from foreign lands who have no formal education and can barely speak English. Personally I disagree (because if you suck at your job that much you deserve to be replaced), but I also don’t acknowledge the nation state as a legitimate thing, unlike national socialist Sanders.