A Geek With Guns

Chronicling the depravities of the State.

Archive for the ‘Technology’ tag

Who Needs Copy and Paste Anyways


WordPress 5.0 was rolled out on Friday and with it came the new Gutenberg Editor. I’m not a curmudgeon who’s unwilling to give new features a chance. However, I found myself wanting to disable Gutenberg within seconds of trying to use it. Why? Because I couldn’t get the stupid thing to accept pasted text.

Most of my posts involve linking to a story and posting an excerpt of the part on which I want to comment. Needless to say, copy and paste is pretty bloody important for what I do. Moreover, copy and paste are two of the most basic operations for an editor. It turns out that I'm not the only one unhappy with Gutenberg. During my quick search for a way to revert to WordPress's previous editor, I came across a WordPress plugin called Disable Gutenberg. It has over 20,000 active installations and a five-star rating, which indicates that it does its job well and that the job it does is in high demand.

My setup isn't anything special. I use Firefox with a few basic add-ons (HTTPS Everywhere, Privacy Badger, uBlock Origin, Multi-Account Containers, Auto Tab Discard, and Bitwarden). This setup worked well with the previous WordPress editor, which leads me to believe that WordPress's developers didn't thoroughly test Gutenberg before releasing it. Failing to perform thorough testing before releasing a major update isn't unique to WordPress, though; it has become standard operating procedure for technology companies.

When I see a new update for any piece of software I use, I become a bit wary. When I see that the update includes new features, I become downright nervous. More often than not, new features are released half-baked. The weeks (or months) following the release of a new feature are usually spent making it work properly, or at least making it provide the same functionality as the feature it replaced. This is annoying to say the least. I would much rather see the technology industry develop an attitude that treats reliability as a critical feature instead of an afterthought. But I doubt this will happen. Reliability is a difficult feature to sell to most consumers, and the work needed to make a product reliable is boring.

Written by Christopher Burg

December 11th, 2018 at 10:00 am

Never Trust a Surveillance Company


The parliament of the United Kingdom (UK) decided to pull a Facebook on Facebook by collecting the company's personal information. Not only did the parliament collect Facebook's personal information, but it's now airing the company's dirty laundry. There are a lot of interesting tidbits to be found within the documents posted by the parliament, but one in particular shows Facebook's ruthlessness when it comes to collecting your personal information:

The emails show Facebook’s growth team looking to call log data as a way to improve Facebook’s algorithms as well as to locate new contacts through the “People You May Know” feature. Notably, the project manager recognized it as “a pretty high-risk thing to do from a PR perspective,” but that risk seems to have been overwhelmed by the potential user growth.

Initially, the feature was intended to require users to opt in, typically through an in-app pop-up dialog box. But as developers looked for ways to get users signed up, it became clear that Android’s data permissions could be manipulated to automatically enroll users if the new feature was deployed in a certain way.

In another email chain, the group developing the feature seems to see the Android permissions screen as a point of unnecessary friction, to be avoided if possible. When testing revealed that call logs could be collected without a permissions dialog, that option seems to have been obviously preferable to developers.

“Based on our initial testing,” one developer wrote, “it seems that this would allow us to upgrade users without subjecting them to an Android permissions dialog at all.”

If you’re using Facebook on a Google operating system, you’re in the center of a surveillance Eiffel Tower, and I’m not talking about the monument!

The history of Android's permission system has not been a happy one. Until fairly recently, Android had an all-or-nothing model: you either had to grant an application all the permissions it asked for or you couldn't use it. Not surprisingly, this resulted in almost every app requesting every possible permission, which turned the permissions dialog into a formality. Android 6.0 changed the permission system to mirror iOS's: when an app running on Android 6.0 or later wants to access a protected feature such as text messages, the user is presented with a dialog alerting them to the attempted access and asking whether they want to allow it.

If you read the excerpts, you'll see that Facebook was concerned about the kind of public relations nightmare that asking for permission to access call and text message logs could bring. At first, the company planned to request permission to access only call logs, hoping that wouldn't cause a ruckus. However, once somebody figured out a way to add the additional capabilities without triggering any new permission requests, Facebook moved forward with the plan. So we know for a fact that Facebook knew what it was doing was likely to piss off its users and was willing to use underhanded tactics to do it without getting caught.

You should never trust a company that profits by collecting your personal information to respect your privacy. In light of the information released by the UK’s parliament, this goes double for Facebook.

Written by Christopher Burg

December 7th, 2018 at 11:00 am

This Neopuritan Internet Is Weird


Just days after Tumblr announced that it would be committing corporate seppuku, Facebook has announced that it too is joining the neopuritan revolution:

Facebook will now “restrict sexually explicit language”—because “some audiences within our global community may be sensitive to this type of content”—as well as talk about “partners who share sexual interests,” art featuring people posed provocatively, “sexualized slang,” and any “hints” or mentions of sexual “positions or fetish scenarios.”

[…]

The new Sexual Solicitation policy starts by stating that while Facebook wants to facilitate discussion “and draw attention to sexual violence and exploitation,” it “draw[s] the line…when content facilitates, encourages, or coordinates sexual encounters between adults.” Can we pause a moment to appreciate how weird it is that they lump those things together in the first place? Whatever the intent, it reads as if only content coding sex as exploitative, violent, and negative will be tolerated on the site, while even “encouraging” consensual adult sex is forbidden.

This is a rather odd attitude for a website that recently rolled out a dating service. Does Facebook seriously believe its dating service isn’t being used to facilitate, encourage, and coordinate sexual encounters between adults?

This neopuritan Internet is getting weird. Both Tumblr and Facebook have mechanisms that allow content to be walled off from the general public. These mechanisms serve as a good middle ground, allowing users to post controversial content while protecting random passersby from seeing it. But instead of utilizing them, these two services are opting for a scorched-earth policy. It seems like a waste of money to pay developers to create mechanisms that hide controversial content from the public and then not utilize them.

Written by Christopher Burg

December 7th, 2018 at 10:30 am


Shooting Yourself in the Foot… with a Machine Gun


Tumblr has been known for two things: pornography and social justice blogs. After December 17th it will only be known for social justice blogs. The service announced that it was going to commit corporate suicide by removing all pornography from the site. But Tumblr isn't taking the easy way out. Instead it has opted to prolong its misery, to commit corporate seppuku if you will, by using machine learning to remove pornography from its site:

For some reason, the blogging site hopes that people running porn blogs will continue to use the site after the December 17 ban but restrict their postings to the non-pornographic. As such, the company isn’t just banning or closing blogs that are currently used for porn; instead, it’s analyzing each image and marking those it deems to be pornographic as “explicit.” The display of explicit content will be suppressed, leaving behind a wasteland of effectively empty porn blogs.

This would be bad enough for Tumblr users if it were being done effectively, but naturally, it isn’t. No doubt using the wonderful power of machine learning—a thing companies often do to distance themselves from any responsibility for the actions taken by their algorithms—Tumblr is flagging non-adult content as adult content, and vice versa. Twitter is filling with complaints about the poor job the algorithm is doing.

Machine learning has become the go-to solution for companies that want to make it appear as though they're "doing something" without taking on the responsibility. We're already seeing the benefits of this decision. A lot of non-porn material is being removed by whatever algorithms Tumblr is using, and when users complain, Tumblr can say, "Don't blame us! The machine screwed up!" Thus Tumblr absolves itself of responsibility. Of course, the three people who post non-pornographic content to Tumblr are likely to flee after tiring of playing Russian roulette with the porn-scanning algorithm, but I'm fairly certain Verizon, which owns Tumblr now, just wants to shut down the service without listening to a bunch of people who still use the platform whine.
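Tumblr hasn't published anything about how its classifier works, so the following is purely illustrative: a minimal sketch of the score-and-threshold pipeline that any system like this boils down to, with every post name and score invented. It shows why the whole thing is Russian roulette: wherever the threshold goes, something gets misclassified.

    # Purely hypothetical sketch of threshold-based content flagging.
    # Tumblr hasn't published its model; these scores are invented to
    # illustrate why any threshold misclassifies something.

    # Pretend a trained image classifier produced these probabilities
    # that each post is "explicit" (0.0 = safe, 1.0 = explicit).
    post_scores = {
        "classical_nude_painting": 0.74,  # art: a classic false positive
        "unambiguous_porn": 0.97,
        "cat_photo": 0.03,
        "beach_vacation_photo": 0.55,     # borderline either way
    }

    THRESHOLD = 0.5  # raise it and porn slips through; lower it and art gets flagged

    for post, score in post_scores.items():
        verdict = "flagged explicit" if score >= THRESHOLD else "allowed"
        print(f"{post}: {verdict} (score={score:.2f})")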

Written by Christopher Burg

December 6th, 2018 at 10:30 am

Changing the Rules Way After the Sale


Nintendo believes it can use its intellectual property claims to prevent you from monetizing any footage you make of its video games. Restrictions like this are generally only presented in the end user license agreement (EULA) after you’ve purchased the game. But what happens when the restriction is implemented retroactively?

Today it's understood that when you purchase a software package, you will be presented with pages and pages of legalese when you first attempt to use it. That wasn't always the case. When you purchased old Nintendo Entertainment System (NES) or Super Nintendo Entertainment System (SNES) games, the boxes didn't include contracts that you had to sign and send off to Nintendo before receiving an actual copy of the game, nor did the games themselves present you with a EULA to which you had to agree before playing.

Nintendo is a notoriously litigious company, and a few years ago it was using the Digital Millennium Copyright Act (DMCA) to have footage of things like altered Super Mario World levels removed from YouTube. Because of the conditions I mentioned above, nobody who purchased a copy of Super Mario World for the SNES agreed not to alter the contents of the cartridge. They didn't agree to any restrictions whatsoever. But through the magical process of intellectual property, namely the copyrights granted to Nintendo by the government over the characters that appear in Super Mario World as well as the software itself, Nintendo is able to change the rules way after the sales occurred.

This absurdity is compounded by the fact that copyrights can remain valid for the life of the creator plus 70 years, or for 95 or 120 years in the case of works made for hire [PDF], depending on the type of work. Compounding the absurdity even more is the fact that these already ridiculously long terms were extended whenever the copyright on Mickey Mouse was about to expire (which is why the most recent extension is often called the Mickey Mouse Law). If we go by precedent, the stupidly long terms we're currently suffering under will likely be extended again and again. That means Nintendo could continue adding new restrictions to old NES and SNES games for decades to come.

Imagine if this characteristic of copyright law were applied to physical property. Let's say you purchased a Ford F-150 today. Now let's fast forward two decades. You still own the F-150 and have had to resort to having new parts custom fabricated because all of the major replacement parts manufacturers stopped producing them. One day you receive a letter in the mail from Ford: a cease and desist order for installing custom fabricated parts in the truck. Ford has decided to pull a John Deere by claiming that its copyrights on the truck's software grant it the right to restrict you from maintaining your 20-year-old truck. It sounds pretty absurd, doesn't it? But that's the reality people are facing with NES and SNES games that they purchased two decades ago.

Written by Christopher Burg

December 5th, 2018 at 11:00 am

Changing the Rules After the Sale


As I noted last week, the concept of intellectual property is an oxymoron. Today I want to expand on that by pointing out another absurdity of intellectual property.

Let’s consider a hypothetical situation where I own an electronics store and you just purchased a laptop from me. There was nothing unusual about the transaction. You didn’t have to read any contracts or sign any papers. You handed me cash and I handed you a laptop. The laptop is yours, right? Not so fast.

When you get home and power up your new laptop for the first time, you are presented with a legal contract that says you can’t make any modifications to the laptop’s hardware, install any operating system other than the one that came with the laptop, or install any software not distributed by the manufacturer’s app store. If you don’t agree with the contract, you can’t use the computer.

What I just described is a slightly hyperbolic version of a shrink wrap license. When you purchase a piece of software, you usually aren't presented with the end user license agreement (EULA), the document that lays out what you can and can't do with the software, until after the sale. No big deal, you may think, because if you don't agree with the post-sale EULA, you can just return the software, right? You may find that easier said than done. Most stores won't take back copies of software that have been opened, and if you read the EULAs for online app stores, there are often severe restrictions regarding returning purchases. But even if you can return the software, why should that be considered your only form of recourse? Why should you be bound to any terms presented after the transaction has been concluded?

This is yet another characteristic of intellectual property that I doubt most people would so willingly accept if it were applied to physical property. If you purchased a car and the dealer decided to foist a bunch of restrictions on you after you paid for the vehicle but before you drove it off of the lot (i.e., it's your property but you haven't gotten into the car since it became your property), would you take them seriously? Most people probably wouldn't. I certainly wouldn't. So why is such a practice considered acceptable for intellectual property?

Written by Christopher Burg

December 4th, 2018 at 11:00 am

Unexpected Microsoft


Microsoft has been making all sorts of unexpected moves in the last few years. The company released Visual Studio Code, which is not only an excellent code editing environment but is also available under the open source MIT License. In addition to that, Microsoft released an open source version of its .NET framework and the Windows Subsystem for Linux. Needless to say, it's becoming more difficult to hate the company lately.

Now, to top it all off, it sounds like Microsoft is going to abandon its custom HTML rendering engine and replace it with Chromium:

Because of this, I’m told that Microsoft is throwing in the towel with EdgeHTML and is instead building a new web browser powered by Chromium, which uses a similar rendering engine first popularized by Google’s Chrome browser. Codenamed “Anaheim,” this new browser for Windows 10 will replace Edge as the default browser on the platform, according to my sources, who wish to remain anonymous. It’s unknown at this time if Anaheim will use the Edge brand or a new brand, or if the user interface (UI) between Edge and Anaheim is different. One thing is for sure, however; EdgeHTML in Windows 10’s default browser is dead.

I have mixed feelings about this. On the one hand, it's good to see Microsoft moving towards an open source rendering engine. On the other hand, I don't enjoy seeing the rendering engine market turn into a duopoly (with Firefox's Gecko, the only major non-Chromium engine, holding a paltry percentage of market share).

Microsoft's about-face from satanic figure of the open source community has been fun to watch. It is probably the greatest testament to the viability of open source software out there.

Written by Christopher Burg

December 4th, 2018 at 10:00 am

Designer Babies


A lot of people are up in arms after news broke that a Chinese scientist has claimed to have created the first genetically edited babies:

Speaking at a genome summit in Hong Kong, He Jiankui said he was “proud” of altering the genes of twin girls so they cannot contract HIV.

His work, which he announced earlier this week, has not been verified.

Many scientists have condemned his announcement. Such gene-editing work is banned in most countries, including China.

Assuming Jiankui’s announcement is true, which is an assumption that takes a great deal of liberty since his announcement hasn’t been verified, I can say that if I were a parent planning to conceive a child and a scientist came to me with a proven track record of genetically editing out susceptibility to diseases and genetic disorders, my checkbook coming out of my pocket would be the first thing in the universe to knowingly exceed the speed of light. But I’d also try to provide my child the best education possible because I’d be one of those asshole parents who care not one bit about what’s “fair.”

As a parent, I would want to provide every advantage I could to my child, so I understand why parents would allow Jiankui to perform an experiment that might remove susceptibility to HIV from their child.

Written by Christopher Burg

November 28th, 2018 at 11:00 am


That New Car Smell


I’m always interested in cultural differences. For example, here in the United States people generally love the smell of a new car. It's easy to think that since people here love that smell, the love of it is universal, but that isn't the case. Chinese customers, in general, apparently hate that smell. In fact, they hate it so much that Ford developed a method of getting it out of new cars:

In the US, “new car smell” is a beloved scent. People even try to make their cars smell new with after-market cleaning products. But in China, customers find the same odor repulsive. As the Chinese auto market grows, car makers are looking for a way to make the aroma of their new vehicles more amenable to Chinese taste.

Early this month, Ford filed a patent to reduce the odor of some of the adhesive, leather, and other materials that produce Volatile Organic Compounds (VOCs) that contribute to new car smell. The patent appears to include software that senses the car’s location and the weather it’s experiencing, then it possibly detects whether the owner has “requested volatile organic compound removal from the vehicle.” Next, on a sunny day, the car will roll down a window and turn on the engine, the heater, and a fan in order to bake off the VOCs and their accompanying smell.

Often individuals make the mistake of believing that since they like something, it is universally liked. I learned at a young age that even smell, which is nothing more than a neurological response to stimuli and thus would seem to be a good candidate for being common amongst most humans, differs from person to person. My grandfather introduced me to sardines, which I enjoy to this day. I don’t find their smell repulsive but most people I know do. Likewise, I don’t find the smell of sauerkraut repulsive but most of the people I know do. Meanwhile, many of the body sprays and perfumes that people claim to like are repulsive to me.

Written by Christopher Burg

November 23rd, 2018 at 11:00 am

Some Thoughts After Moving from macOS to Linux


It has been two weeks and change since I moved from my MacBook Pro to a ThinkPad P52s running Linux. Now that I have some real use time under my belt, I thought it would be appropriate to share some of my thoughts.

The first thing I'd like to note is that I have no regrets about moving to Linux. My normal routine is to use my laptop at work and whenever I'm away from home, and to use another computer at home (because I'm too lazy to pull my laptop out of my laptop bag every night). The computer I was using at home was a 2010 Mac Mini. I replaced it with my old MacBook Pro when I got my ThinkPad. I realized the other day that I haven't once booted up my MacBook Pro since I got my ThinkPad. Instead I have been pulling my ThinkPad out of its bag and using it when I get home. At no point have I felt that I need macOS to get something done. That's the best testament to the transition that I can give. That's not to say Linux can do everything that macOS can; I'm merely fortunate in that the tools I need are either available on Linux or have a viable alternative.

I'm still impressed with the ThinkPad's keyboard. One of my biggest gripes about the new MacBooks is their ultra slim keyboards. I am admittedly a bit of a barbarian when it comes to typing. I don't so much type as bombard my keyboard from orbit. Because of this, I like keys with a decent amount of resistance and depth. The keyboard on my 2012 MacBook Pro was good, but I'm finding the keyboard on this ThinkPad to be a step up. The keys offer enough resistance that I'm not accidentally pressing them (a problem I have with keyboards offering little resistance) and enough depth to feel comfortable.

With that said, the trackpad is still garbage compared to the trackpad on any MacBook. My external trackball has enough buttons that I can replicate the gestures I actually used on the MacBook, though, and I still like the TrackPoint enough to use it when I don't have an external mouse connected.

Linux has proven to be a solid choice on this ThinkPad as well. I bought it with Linux in mind, which means I didn't get features that aren't supported in Linux, such as the fingerprint reader or the infrared camera for facial recognition (which is technically supported in Linux but tends to show up as the first camera, so apps default to it rather than the 720p webcam). My only gripe is the Nvidia graphics card. The P52s includes both an integrated Intel graphics card and a discrete Nvidia Quadro P500, which isn't supported by the open source Nouveau driver. In order to make it work properly, you need to install Nvidia's proprietary driver. Once that's installed, everything works… except secure boot. In order to make the P52s boot after installing the Nvidia driver, you need to go into the BIOS and disable secure boot. I really wish there were a laptop with a discrete AMD graphics card that fit my needs on the market.

One thing I've learned from my move from macOS to Linux is just how well macOS handled external monitors. My P52s has a 4k display but all of the external monitors I work with are 1080p. Having screens with different resolutions was never a problem with macOS. On Linux it can lead to some rather funky scaling issues. If I leave the built-in monitor's resolution at 4k, any app that opens on that display looks friggin' huge when moved to an external 1080p display. This is because Linux scales up apps on 4k displays by a factor of two by default. Unfortunately, scaling isn't done per monitor by default, so when the app is moved to the 1080p display, it's still scaled by two. Fortunately, a 4k display is exactly twice the resolution of a 1080p display in each dimension (3840x2160 versus 1920x1080), so changing the built-in monitor's resolution to 1080p when using an external display is an easy fix that doesn't necessitate everything on the built-in display looking blurry.
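To make the arithmetic concrete, here is a toy sketch (not anything Gnome actually runs) of why a single global scale factor misbehaves: a window's logical size gets doubled on every display, including the 1080p one.

    # Toy illustration of a global (non-per-monitor) HiDPI scale factor.
    # The display modes are real; the scaling model is deliberately simplified.

    EXTERNAL_1080P = (1920, 1080)

    GLOBAL_SCALE = 2  # picked to suit the 4k built-in panel, applied everywhere

    def rendered_size(logical_w, logical_h, scale=GLOBAL_SCALE):
        # Under a global factor, a window laid out at a logical size is
        # rendered at logical * scale physical pixels on *every* display.
        return logical_w * scale, logical_h * scale

    w, h = rendered_size(800, 600)
    print(f"{w}x{h} physical pixels")  # 1600x1200
    print(f"{w / EXTERNAL_1080P[0]:.0%} of a 1080p monitor's width")  # 83%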

I've been using Gnome for my graphical environment. KDE seems to be the generally accepted “best” desktop environment amongst much of the Linux community these days. While I do like KDE in general, I find that its application interfaces are inconsistent, whereas Gnome applications tend to have fairly consistent interfaces. I like consistency. I also like that Gnome applications tend to avoid burying features in menus. The choice of desktop environment is entirely subjective, but so far my experience using Gnome has been positive (although knowing that I have a ship to which I can jump if that changes is reassuring).

As far as applications go, I used Firefox and Visual Studio Code on macOS, and they're both available on Linux, so I didn't have to make a change in either case. I was using Mail.app on macOS, so I had to find a replacement e-mail client. I settled on Geary. My experience with Geary has been mostly positive, although I really hate that there is no way, at least that I've found, to quickly mark all e-mails as read. I used iCal on macOS for calendaring, and Gnome's Calendar application has been a viable replacement for it. My search for a replacement for my macOS task manager, 2Do, hasn't been as positive. I'm primarily using Gnome's To Do application, but it lacks a feature that is very important to me: repeating tasks. I use my task manager to remind me to pay bills. When I mark a bill as paid, I want my task manager to automatically create a task for next month. 2Do does this beautifully. I haven't found a Linux task manager that can do this (and in all fairness, Apple's Reminders.app doesn't do this well either). I was using Reeder on macOS to read my RSS feeds. On Linux I'm using FeedReader. Both work with Feedbin and both crash at about the same rate. I probably shouldn't qualify that as a win, but at least it isn't a loss.

The biggest change for me has probably been moving from VMware Fusion to Virtual Machine Manager, which utilizes libvirt (and thus KVM and QEMU). Virtualizing Linux with libvirt is straightforward. Virtualizing Windows 10 wasn't straightforward until I found the SPICE Windows guest tools. Once I installed that guest tool package, the niceties that I came to love about VMware Fusion, such as shared pasteboards and automatically changing the guest's resolution when the virtual machine window is resized, worked. libvirt also makes it dead simple to set a virtual machine to start automatically when the system boots.
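To give a sense of how simple, here is a minimal sketch of that autostart toggle done through the libvirt Python bindings (the equivalent of virsh autostart on the command line). The domain name "win10" is a placeholder for whatever your virtual machine is actually called:

    # Minimal sketch: mark a libvirt domain to start when libvirtd does.
    # Assumes the libvirt-python bindings are installed; "win10" is a
    # placeholder domain name, so substitute your own VM's.
    import libvirt

    conn = libvirt.open("qemu:///system")  # connect to the system libvirt daemon
    try:
        dom = conn.lookupByName("win10")   # look the VM up by name
        dom.setAutostart(1)                # 1 = start at boot, 0 = don't
        print(f"{dom.name()} autostart: {bool(dom.autostart())}")
    finally:
        conn.close()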

One major win for Linux over macOS is software installation. Installing software from the Mac App Store is dead simple, but installing software from other sources isn't as nice an experience. Applications installed from other sources have to include their own update mechanism. Most have taken the road of including their own embedded update capabilities. While these work, they can usually only run when the application is running, so if you haven't used an application for some time, the first thing you end up having to do is update it. Lots of packages still don't include automatic update capabilities, so you have to manually check for new releases. Oftentimes these applications are available via MacPorts or Homebrew. On the Linux side of things, almost every software package is available via the distro's package manager, which means installation and updates are handled automatically. I prefer this over the hodgepodge of update mechanisms available on macOS.

So in closing, I'm happy with this switch, especially since I didn't have to drop over $3,000 on a laptop to get what I wanted.

Written by Christopher Burg

November 21st, 2018 at 11:00 am
