Technobabbles
https://technobabbl.es
I try to sound like I know what I'm talking about. Don't be fooled.

Where's My LED Bulb Mesh Network?
https://technobabbl.es/2016/02/wheres-my-led-bulb-mesh-network/
Tue, 02 Feb 2016
On the heels of this week's news that GE will wind down CFL bulb manufacturing for the American market and focus on LEDs, I had a thought.

LED bulbs replace a typical incandescent lamp (drawing 60W) with a low-power (around 10–12W, from the few I looked at) light source that often contains a microprocessor controller and wireless hardware. The more advanced models connect to an existing WiFi network to allow controlling their brightness and color from smartphone apps. Why don’t we take this as an opportunity to build mesh networking into homes bit by bit as old incandescent and CFL bulbs fail and get replaced by LED units?

This is kind of what I'm picturing, or at least it's the closest thing I could find in a reasonable time frame on Google Images. (Source)

Of course, not all LED bulbs need to have the capability. Some people just won't have a use for it, at least in the short term. But there's something to be said for expanding the range of your home wireless network just by screwing in a new light bulb or two—which needed replacing anyway. Forget buying range extenders and figuring out where to put them. Every room needs at least one light bulb, and typical wireless range should be enough, on average, to form a pretty good mesh.

There are some short-term issues, I know. Light switches are the big one—since if you turn off the lights with a wall switch designed for “dumb” incandescent lamps, there will be no power going to the socket. That part of the mesh will die until the lights are turned back on. It’s debatable whether that’s actually bad, because if the lights are off there’s a lower chance of someone being in the room using the network. (And maybe it would work as motivation to get the hell off the Internet when you turn out the lights to sleep.)

But smart LED bulbs with color/temperature and brightness controls will probably warrant new wall switches eventually: switches that don't cut power to the socket. Instead, the smart bulb is told to turn off its LEDs while the radio remains on. Because the radio needs to reach only as far as its nearest mesh neighbors, its power requirements would be much lower than a typical range extender's. The controller could also put the radio in a low-power mode when no device is connected, and there are other power-saving tricks, available or yet to be developed, that would help even when devices are connected.
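
To make the idea concrete, here's a toy sketch of the power logic I'm imagining (pure speculation on my part, with made-up names; no real bulb exposes an API like this):

    # Hypothetical smart-bulb power logic; all names are illustrative.

    class MeshBulbNode:
        def __init__(self):
            self.leds_on = True
            self.clients = 0            # devices using this node as their AP
            self.radio_mode = "active"  # the mesh radio never fully powers off

        def wall_switch_off(self):
            # A "smart" switch tells the bulb to go dark but leaves mains
            # power connected, so the mesh node keeps running.
            self.leds_on = False

        def client_joined(self):
            self.clients += 1
            self._update_radio()

        def client_left(self):
            self.clients = max(0, self.clients - 1)
            self._update_radio()

        def _update_radio(self):
            # Drop to low power when no device is attached directly; the
            # radio still relays mesh traffic, just with longer sleep cycles.
            self.radio_mode = "active" if self.clients else "low_power"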

I also envision the bulbs possibly using a form of powerline networking (Ethernet over the home's electrical wiring) to connect with each other and/or a compatible router, so the wireless network can be made available even in parts of the home that the base station's signal cannot reach.

Basically, I see potential.


Protocol-Relative URLs (and why not to use them)
https://technobabbl.es/2016/01/protocol-relative-urls-and-why-not-to-use-them/
Tue, 12 Jan 2016

Back in October 2010 (that long?!) I noticed a commit to Paul Irish’s (awesome) HTML5 Boilerplate project on GitHub that piqued my curiosity. I hadn’t really noticed the trick of linking to a resource in a protocol-independent manner before. So I drafted this post and then promptly forgot about publishing it. It’s still cool five years later—but not quite as cool, for reasons I’ll explain in a sec.

For the longest time, I thought links had to have a protocol specified, no matter what. I thought that was why Google Analytics used a kind of ugly detection hack to check document.location.protocol and switch the script src accordingly. Turns out that Google used that hack not because of the protocol itself, but because Analytics offered HTTPS on a different subdomain.
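
For reference, that detection hack looked something like this in the old ga.js embed code (reproduced from memory, so treat the details loosely):

    <script type="text/javascript">
      // Pick the SSL subdomain when the page itself is secure
      var gaJsHost = (("https:" == document.location.protocol)
          ? "https://ssl." : "http://www.");
      document.write(unescape("%3Cscript src='" + gaJsHost
          + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
    </script>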

My mistake.

Cool

That commit got me to look it up, and sure enough, protocol-independent links are a thing. Until then I had no idea the protocol could be implied—though I knew the domain name is implied if the URI starts with /, and the entire base path is implied if there’s no initial slash at all.

So, adding a script that will load securely if the page is secured, and over normal HTTP if not, is as easy as removing the http: or https: from the src attribute, leaving a URI that looks like //domain.name/path/to/script.js.
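
In markup, the difference is just the missing scheme (domain.name being a placeholder, as above):

    <!-- Explicit scheme: always loads over HTTPS -->
    <script src="https://domain.name/path/to/script.js"></script>

    <!-- Protocol-relative: inherits the scheme of the page that includes it -->
    <script src="//domain.name/path/to/script.js"></script>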

Not So Cool

But there’s a big caveat these days. In a 2014 update to that same post, Paul points out that as cool as this is, it’s an anti-pattern now. Why? Because if an attacker can trick the parent page into loading over an insecure connection, she can also get the user’s browser to load fake JavaScript. Long story short, China used insecure JavaScript to attack GitHub and it wasn’t pretty.1

So basically, I learned a trick five years ago that is now frowned upon in actual use. Use HTTPS to load resources if the origin server supports it, period. Both server hardware and server software have gotten so good at encryption that the old argument—"It adds too much overhead"—no longer applies, and the upsides far outweigh the downsides.

Speaking of learning things: I looked today, and Google Analytics uses this trick now.2 I guess at some point they decided a separate subdomain for SSL was silly.3 Now it’ll take them who knows how long to decide that serving insecure JavaScript is silly, and just load over HTTPS all the time.


Notes:

  1. It’s ironic that NETRESEC’s site doesn’t even load over HTTPS. You’d think a network security blog would implement that, at the very least.
  2. I don’t know for how long, because the last time I actually touched tracking code myself must have been 2011 at the latest.
  3. It always was silly to put secure traffic on a separate subdomain, though I still see sites do it. I still don’t know why—especially in cases like Google Analytics where the hostname already load balances across dozens or hundreds of machines.

Tempera Painting Over LightWord: Theme Switch
https://technobabbl.es/2016/01/tempera-painting-over-lightword-theme-switch/
Sat, 09 Jan 2016

Since 2009, the venerable LightWord theme has provided this site’s look and feel. However, LightWord’s developer sold it to a third party years ago after losing interest in maintaining it,1 and the last update was pushed to WordPress Trac nearly 4 years ago, on January 11, 2012.2

WordPress has come a long, long way in that time—and LightWord already lacked support for a few of the newest WordPress features back in 2012. While setting up a new site this week for a college friend, and helping her choose a theme, I realized just how much I've been missing out on since my (attempted) return to blogging.

So, I’m painting over LightWord with Tempera, which was last updated less than a month ago. The team, judging by the tone of the settings pages, has a good, wry sense of humor. (They say something about the colors in Tempera all being mixed from just two things: coffee and their own blood.) Hopefully I’ll be motivated to post more often now that I have a theme that supports post formats, and not everything has to be a standard-post-with-a-title. If nothing else, I’ll have to keep coming back for a while to finish tweaking the layout.

Oh, and if you happen to notice something that looks odd, it would be really helpful if you could let me know. Thanks!


Notes:

  1. My source for this is personal communication with the developer. I’ve had commit access to LightWord on GitHub for years, but that repository is not used for updates and I’m not particularly interested in rewriting the whole thing.
  2. I have no idea where the 2015 date on the WordPress Theme Directory page comes from. Trac’s log clearly shows the last commit in 2012.

It's not a cap, but it is Comcapstic
https://technobabbl.es/2015/11/its-not-a-cap-but-it-is-comcapstic/
Sun, 08 Nov 2015

A snippet of the internal documents leaked from Comcast on Reddit.

Comcast, what are you on about?

Apparently using more than 300 GB of broadband at home in a month is too much. Comcast is expanding the markets in which it’s “trialling” a 300 GB soft-cap past which users will be charged $10 per 50 GB of usage.

And it’s telling customer service not to call it a “data cap”.

Well, technically, it’s “not a cap”. The customer service training materials originally leaked via Reddit aren’t lying when they say, “We do not limit a customer’s use of the Internet in any way at or above 300 GB.” But there’s a fine line between a cap and price discrimination. In either case, customers who use more data every month end up paying more.

Thing is, it probably doesn't cost Comcast (or any other ISP) much, if anything, to provide 3 TB of bandwidth over the course of a month instead of 300 GB. It certainly doesn't cost them $10 / 50 GB = 20¢ per GB. A decade ago, the cost per GB for an ISP was around a penny. There's no way the price hasn't gone way down since then, but even if it hasn't, the markup is around 20x. (Read this Reddit comment thread for more on the numbers.)

And while it’s technically not a cap, it is a form of limit. For an extra $30 or $35 per month, the 300 GB threshold goes away. It’s ridiculous, as -jackschitt- explains.

I’ve seen arguments that this is intended to reduce streaming, because Comcast is also a cable company and they don’t like cord-cutters using Internet bandwidth to watch video they could be buying from the cable TV service. I’ve seen arguments that they’re trying to discourage torrenting, or watching YouTube, because those entertainment options also compete with Comcast’s cable business in a way.

But I see this as simple corporate greed. Does Comcast have a right to make a profit? America hasn’t decreed (yet) that ISPs are public utilities, so I’d say they’re entitled to some healthy profit margins. Thing is, the extra usage costs Comcast practically nothing. They have to maintain their network no matter how much data flows through. It’s not like there’s extra strain on the equipment. Routers and cables are engineered to be used at or near peak capacity as much as possible.

Let's put it this way: Remember when cellular text messages were 25¢ each? That was also back in the era when bandwidth cost ISPs roughly 1¢ per GB. And those text messages cost essentially nothing to deliver, because SMS was built on inter-node signaling channels that were already part of the cellular network—built in because they're an integral part of how the network functions. American cellular carriers figured out that they could use idle time on those channels to make an obscene amount of money on text messages.

Need more convincing that Comcast is just in it for the money? There's allegedly a policy of forgiving the first three overages. Yet according to at least one customer in a trial area, Comcast actually charges for those overages in advance. If that's true, and Comcast still bills that way in the trial service areas, even customers who never go over the limit end up paying for it.

Internet service in America is already far more expensive than comparable service in other countries. PBS NewsHour published a report earlier this year showing a comparison between a number of cities both within the US and internationally, rating the speed and cost of Internet access.

Even though the Internet was invented in the United States, Americans pay the most in the world for broadband access. And it’s not exactly blazing fast.

For an Internet connection of 25 megabits per second, New Yorkers pay about $55 — nearly double that of what residents in London, Seoul, and Bucharest, Romania, pay. And residents in cities such as Hong Kong, Seoul, Tokyo and Paris get connections nearly eight times faster.

PBS NewsHour, April 26, 2015

If this pisses you off—and it should—submit complaints to the FCC describing how anti-consumer Comcast’s trial is. If enough people write in, the FCC will be able to investigate and—hopefully—step in to regulate it.

But at least Comcast isn’t claiming that they need to experiment with caps to “manage congestion” any more.


RIP Ingress 2012–2015
https://technobabbl.es/2015/11/rip-ingress-2012-2015/
Thu, 05 Nov 2015

Well, Ingress, you’ve had a good run.

At launch, in November 2012, you motivated me to start putting money aside for a new phone that could run heavy games like you.

Throughout 2013 and much of 2014, you got me out and about, wherever I was, whomever I was with. You connected me with new people when I went to California for the summer, and brought my friend group closer together at college by getting us to roam campus together for hours late at night.

But, Niantic, you’re losing your way.

Cheating is still widespread. Your lackadaisical response to reports of location spoofing and multi-account users makes us legitimate players wonder if you’re really in this to make a good game.

We wonder how much you really value those of us who submit new Portals, when you decree that submissions will no longer count toward the Seer badge, making it nothing more than a relic lingering on our profiles. I no longer bother submitting portals. Rumor has it that overwhelming portal submission volume motivated the change—but you should have just disabled submissions until you could catch up. Instead, you killed a medal that many of us wanted to eventually turn Onyx.

Since the Seer fiasco, not much has happened. The occasional promotional item was introduced, but gameplay remained pretty stagnant. My friends lost interest. I lost interest. I got back into the game on occasion. This summer I finally earned the Onyx Guardian medal after happening to capture a lucky portal in the middle of nowhere. But I don't really care about the game any more.

And now, you’re adding a new currency to the game. Unlike XM, the new Chaotic Matter Units (CMU) cannot be gathered in-game.

They can only be bought.

With real money.

Every game with this sort of exclusive currency system has lost my interest in a matter of weeks, or even just days. Games that let you earn currency through gameplay, instead of paying for it, have a much better chance at retaining my interest. Even if I don’t want to buy the currency, it’s still motivating to play for the rewards. It takes longer to earn in-game purchases, but there’s still satisfaction at having gotten to that point—at having had the dedication to get there.

Gameloft’s Asphalt 8, back when it still had only one currency, was motivating. Could credits be bought? Sure. But they could also be “bought” by racing a lot.

Future Games of London's Hungry Shark Evolution has a coin store, yes. But the game also allows earning enough coins to progress simply by playing. (Sometimes it feels slow, but any game can feel that way at times, even games without in-app purchases, even PC games.)

Twodots’ Dots let me earn enough dots to never worry about buying power-ups or themes when I wanted them—and I never bought the Dot Doubler. Again, just by playing enough, I can earn whatever I want.

But your new Ingress store, Niantic, doesn’t allow that. It uses a currency that can’t be earned—not in any way you’ve documented beyond a vague FAQ answer that mentions “promotions”. CMU can only be bought. And that feels shitty.

I’m sure Key Lockers were created in response to requests from players. Since before I even started playing, players have been asking for “keyrings” to save souvenir keys and the like. And now our requests have been answered, finally. But not for free.

If I want any Key Lockers, it looks like paying $10—none of this $9.99 crap; just call it $10—to get enough CMU for the five-pack is the only option. And again, that’s shitty.

There’s still no motivation to keep playing. Leveling up past Level 10 won’t change what I can do in-game. If I could earn CMU by playing, the way I can earn coins or credits in other games by playing, there would be motivation—at least for a while.

But I can’t, Niantic. You made that choice.

So I’ll probably keep playing just enough every day to keep my Guardian charged, and then shut off the app. Why bother doing anything more, when it seems that all you want from me these days is money?


Windows 10 and Microsoft's Bullying
https://technobabbl.es/2015/11/windows-10-microsoft-bullying/
Tue, 03 Nov 2015

There’s been a lot of buzz around the internet lately about the tactics Microsoft is using to get users to update to Windows 10. For much of the summer, it’s been “automatic downloads” this and “automatic installation” that.

But Windows 10 is better than Windows 7 or Windows 8(.1), isn’t it? True, I’ve heard plenty of testimonials from people who’ve gotten the new OS and love it. It has a completely rebuilt browser, Microsoft Edge, that allegedly blows Internet Explorer out of the water (not that Microsoft is above scaring novice computer users when they try to switch to Chrome or Firefox). Personalized, voice-activated search is built in thanks to deep Cortana integration (leaving aside the fact that some of us prefer Google Now). There should be lots of good reasons to upgrade.

Whoa, Silver!

Trouble is, it’s probably not a good idea to upgrade—not if you have any sense of privacy.

Windows 10 is worse than the town gossip. Windows 10 is the friend who can’t keep anything secret. Windows 10, simply put, is an uncontrollable blabbermouth.

Investigations have shown that Windows 10’s privacy settings do not actually provide full control over what Microsoft calls “usage information”. Turning off every option to send data back to Microsoft still lets some information get sent back. According to Ars Technica UK, certain requests are still made to Microsoft servers even with every possible setting disabled. That’s creepy, sleazy, and probably illegal in the European Union. (If it’s not, it should be.)

The Competition

Do other operating systems “phone home” like this? Sure they do.

The last time I set up Ubuntu 14.04 on a desktop machine, I had to jump through some hoops to disable sending local search queries to Amazon, because Canonical makes money partly through commissions on sales generated when users click on products within the Unity UI’s search function. But that’s one of the very few places Ubuntu actually sends out information, to my knowledge—and more importantly, it obeys a single setting that turns off all Web results within the Unity launcher. Flip that one switch off and BAM! no more searches for your own files and apps get sent off to the internet.
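
(For the terminal-inclined: that one switch corresponds to a single GSettings key on 14.04. I'm quoting the schema from memory, so verify it on your own system:)

    # Turn off online results (including Amazon products) in the Unity Dash
    gsettings set com.canonical.Unity.Lenses remote-content-search none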

Apple’s Mac OS X has included online search results in Spotlight for a couple years now, too. But, again, it’s easily disabled. The other times OS X wants to send information to Apple, so far as I know, it prompts the user (usually after an app crashes or the system experiences a severe problem).

Under Windows XP and 7 (I have essentially zero personal experience with Vista or 8), Microsoft offered the same kind of optional feedback mechanism as Mac OS X. If an app crashed, the OS gave the option to send an error report. If Windows itself failed, it offered to send crash details to Microsoft the next time it booted successfully. If the user clicked “Don’t Send”, in either case, the information never left the local machine.

None of these are creepy, because they respect the user’s choice.

Me, Myself, and Windows

Over the last several years, I accumulated a veritable collection of (pre-owned) computers from other students at college. My physical holdings include a 2009 Sager gaming notebook, a 2013 Sony Vaio, and a 2006 Acer desktop that’s undergone numerous upgrades.1 All of these machines run some edition of Windows 7.

I object to Windows 10 because of the combination of privacy concerns and strong-arming of users into upgrading. And I’ll have to take swift geek action to block the upgrade before Microsoft tries to force Windows 10 onto all of these machines. Consumers like me are unfortunately in the minority, but we do not like being pushed around by our operating systems.

When I tell my computer that I don’t want it to send data back to its maker, it should respect that choice.

When I tell my computer that I don’t want it to upgrade to the next version of its operating system, it should respect that choice.

When I tell my computer not to do anything, it should respect that choice.

If the choice is ill-advised, the computer may show the user a warning explaining why—but only if there is actual risk. (And no, Windows 10, switching to Chrome or Firefox is not actually risky.)

The user must have control. The computer is a tool. It should not contradict the user’s commands. Asimov got this one very, very right.

If Microsoft really does go through with this upgrade push, I doubt there will be anything they could possibly do to regain my trust. I’ll never buy a Windows license—even pre-installed on a new computer—again.


Notes:

  1. These are just the Windows machines. I also have a 2011 MacBook Pro, and a 2007-era white polycarbonate MacBook rebuilt with spare parts (which proved too underpowered for my intended uses). Feel free to get in touch if you have use for the old MacBook; I’m open to offers, though it’s currently in storage halfway across the country.

Blast from the Past: Offline Foursquare
https://technobabbl.es/2015/11/blast-from-the-past-offline-foursquare/
Sun, 01 Nov 2015

This is a revamp of a post originally drafted in June 2011. The Foursquare universe has changed a lot since then, as have smartphones in general, but it was worth partially rewriting as a window into how things used to be.

For a number of months after getting my first Android phone (the LG Optimus V), I spent a lot of time thinking about ways to check in on Foursquare without an internet connection. At the time, I used the phone on Wi-Fi only, for Internet things. I didn’t start on a service plan, because I had an older prepaid phone with a lot of accumulated airtime that I wanted to sell first.

Note: In this article, “Foursquare” and “Swarm” mean basically the same thing—the app from Foursquare-the-company that features check-ins. I’m temporarily ignoring the recommendation engine, because it’s irrelevant to this idea.

Foursquare could be frustrating to play under such conditions. It expected to be always connected, like most Android apps (of the time) tied to an online service, and wouldn’t work at all without Internet access. The modern replacement doesn’t take kindly to being offline, either.

Back in 2011, when I was still living under such conditions of limited network access, I thought up some ideas that Foursquare’s engineers might use as a basis for enabling greater use of the app.

Reasoning

The Internet connection’s primary use during the check-in process, then as now, is to retrieve the list of venues, and then submit the user’s choice (with annotations1). But ultimately, the app needs to know what’s nearby, so the user can choose a venue and “check in”.

The thing is, all that’s really needed to look up nearby venues is the device’s location, which Foursquare retrieves on each launch.

Back in 2011, I solved the problem somewhat by installing a developer tool to provide "mock locations" to the OS, so I could check into venues I'd visited earlier in the day when I had a connection. Needless to say, I never did anything like spoof my location onto another continent. The entire purpose was to maintain as complete a history as possible for my own future use—and I've enjoyed the historical data surfacing after checking into a location I'd not been to for years.

But I hated dealing with the mock location app. Setting my location was cumbersome, and the check-in timestamps were wrong, of course. Why couldn’t Foursquare integrate this?

Details

My idea was that, if no connection was available, Foursquare could offer an option to “check in later”, a button or menu option that would store the device’s GPS coordinates and a timestamp. The user could call up these delayed check-ins and select a venue for each one at a later time, when the network became available again.
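
A rough sketch of that queueing idea, with my own names and structure (nothing here comes from Foursquare's actual app or API):

    # Hypothetical offline check-in queue; all names are illustrative.
    import time
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class PendingCheckin:
        lat: float
        lon: float
        timestamp: float = field(default_factory=time.time)
        venue_id: Optional[str] = None  # chosen later, once back online

    queue = []

    def check_in_later(lat, lon):
        """Offline path: record only where and when; no network needed."""
        queue.append(PendingCheckin(lat, lon))

    def complete_pending(choose_venue):
        """Back online: the user picks a venue for each stored fix, and each
        check-in is submitted with its original timestamp preserved."""
        for pending in queue:
            pending.venue_id = choose_venue(pending.lat, pending.lon)
            # submit to the server here, using pending.timestamp
        queue.clear()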

The only casualty of this system would be the real-time notification feature that lets the user know that friends are nearby, and lets friends know where the user is—not the biggest of deals, since being offline precludes sending or receiving such notifications anyway.

The beauty of this concept is that the device knows where it was when the check-in was logged, precluding the need for any location-mocking apps and preserving the original time of check-in.

Expanded Use Cases

The ability to check in without a connection to Foursquare’s servers could also come in handy when those servers are down. It hasn’t happened for quite a while now, but in the early days of Foursquare it wasn’t unheard of for a maintenance window or server overload to take the servers down and prevent checking in at all.

This method could also help level the playing field between users of Wi-Fi–only devices and users with cellular data connections—or help users with limited cellular data plans conserve bandwidth for the essentials. Tablet or limited-data users could queue a check-in anywhere they go, and then complete their check-ins when they have a Wi-Fi connection, for example.

Fast-Forward to the Future

It would have been nice to have offline check-ins over this past summer. I traveled through some places that had almost no cell signal, and checking in was an absolute pain. Checking in was only possible in some places because I’d planned ahead and enabled SMS check-ins, and those are hard to do because they rely so much on ZIP codes.

What’s interesting is that Foursquare implemented GPS history–based belated check-ins back in September of this year. Though it doesn’t appear to allow actively choosing to check in retroactively—the setting is buried in notifications, and merely places “suggestions” in your history—I’m tickled that Foursquare is starting to support this sort of thing now that I have an unlimited data plan.


Notes:

  1. Swarm check-ins can have a text comment, photos, a “sticker” badge attached to the user’s avatar, and tagged friends who may be checked into the venue with the user at the same time.

Getting back in the blogging game
https://technobabbl.es/2015/10/getting-back-in-the-blogging-game/
Thu, 29 Oct 2015

My poor WordPress site didn’t get much attention during college. I used it to cross-post a few things for my Digital Journalism course back in 2011, and banged out a post or two in the interim—but there really isn’t much here that wasn’t around in 2010!

It's time to change that. Frankly, I haven't wanted to do much in the way of long-form writing online because my writing energy was sapped by essays and papers. Twitter and Facebook have seen orders of magnitude more use over the last four years than this server I actually pay for.1 It's just been easier to find the energy to write a few dozen words at once on social media. Never mind that Facebook is a black box and Twitter is so ephemeral2 that I might as well be talking into the wind—they were easy. Easy enough for me to just post two or three tweets/statuses in a row instead of properly writing something longer and just linking to it.3

But every time I pushed out a tweet string, I got frustrated. Twitter makes it hard to link multiple tweets into a coherent sequence, especially when posting (as I do) from an external app. The constraint chafed when I had more to say than could fit in 140 characters—and it's not like I could just start tweeting in Chinese to fit in more words. But blogging felt so intimidating because of all the writing I was already doing for class. So it practically never happened.

There’s a lot of stuff I could write about now that I’ve graduated, though. I should start on that while I can still remember details. With that in mind, the first step is getting this site running better, after being neglected for years.

The nerd in me (90% of my personality, obviously) insisted that it be available over HTTPS now that CloudFlare offers free SSL. It took a solid 4–6 hours to get that working properly, because anything included in a secure page should ideally be delivered securely as well.4 I have several secondary domains from which the site loads resources, and they all needed to be reconfigured too.

As long as I was reconfiguring parts of the blog software to make it work properly with HTTPS, I also addressed some of the sluggishness in loading pages. An old plugin was using an absurd amount of time on every page load. Just getting rid of that cut the time (unscientifically) by at least 60%. Other tweaks have been (and continue to be) made to improve load times, which have bothered me since that old shared server I started using 6 years ago.

FeedBurner, which I'd used since switching to Blogger way back in 2007, was already a dying service when I stopped paying attention to most things blog-related four years ago. Now, it's barely on life support. It no longer has an API, and no features have been added for… Actually, when was the last FeedBurner update? That's the problem. So my site feed no longer redirects to FeedBurner—and the subscriber count is "down" 75% because the redirect is gone. Email subscriptions will still run through FeedBurner for now, but I'm working on running those from my own server too. It's already set up to send mail properly, so it's just a matter of finding software to run the actual newsletter.

About an hour ago, I upgraded nginx (the web server application that runs almost everything on this particular host) from the outdated version shipped with Ubuntu 12.04 LTS. Got a bit scared when all the PHP applications started returning blank pages, but found the reason (a change in how nginx passes parameters to back-end scripts) and corrected the configuration in short order.5 Something as important as a web server needs to be kept updated. (And now I have to do the same upgrade on my other servers. Fun!)

Some time in the next year I'll need to upgrade Ubuntu to avoid being caught on 12.04 when Canonical ends support in 2017, but I'm waiting to see what happens with Ubuntu 16.04 (the next LTS release). Then I can pick between hopping to 14.04 LTS and waiting for 18.04, or hopping to 16.04 and being set until 2020. Upgrading the operating system is potentially a long process, and could break things, so there are good reasons to wait, and wait some more, and plan, and review the plan, and triple-check the plan, and… before committing to an upgrade.

Technobabbles will be in flux for a while, as I continue working to clean up things that broke over the last few years. Anything broken in old content needs fixing, so I’ll definitely read any messages sent in by readers to point out such things. At some point I might go through my old drafts and try to publish some of them—slightly modified or maybe as-is. There’s some good stuff there that just fell off the back of the stove when other things got shoved to the back burners. But first, the house needs to be put to rights.


Notes:

  1. Actually, the same server runs other services, like a URL shortener for my tweets and an IRC bouncer with logging. But this blog was the primary reason I got it in the first place.
  2. Because finding old stuff on Twitter can be so difficult, I’ve been running a ThinkUp instance since my site was on a borrowed shared hosting account. ThinkUp archives things like Twitter, Facebook, and Foursquare posts for me and generates interesting reports using the data.
  3. Twitter’s really the only service I use that has such a short length limit, but it wouldn’t be easy to write a longer, single-post version of a status for Facebook/Google+/LinkedIn/etc. and then break it up into a separate series of tweets, especially because I use Buffer to post everywhere at once. So everyone got the tweet version. Yes, it’s the lazy way, not the reader-friendly way.
  4. Secure pages that include insecure content are called “Mixed content”. Modern browsers often show warnings for mixed content, or even block the non-secure content from loading.
  5. Tip for anyone else upgrading nginx, especially from really old releases like 1.1.x: If your PHP apps stop working, see if you're trying to include fastcgi_params. In newer versions, the file to include is fastcgi.conf (see the config sketch below). I fixed my config files all at once with sudo find . -type f -exec cp '{}' '{}'.nginx-1.1 \; and sudo find /etc/nginx/sites-available -type f ! -name '*.nginx-1.1' -exec sed -i 's/fastcgi_params/fastcgi.conf/' {} \;
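
In config terms, the change from note 5 amounts to this (the PHP-FPM socket path here is just an example):

    location ~ \.php$ {
        # fastcgi.conf is fastcgi_params plus a SCRIPT_FILENAME definition
        include fastcgi.conf;                      # was: include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;  # example socket path
    }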

Force Dropbox on Mac OS X to Respect Your Disk Space
https://technobabbl.es/2014/05/force-dropbox-on-mac-os-x-to-respect-your-disk-space/
Mon, 26 May 2014

Dropbox is one of those tools without which it’s probably no longer possible for me to live. It just syncs files between my computers, and it makes those same files accessible through any Web browser. It’s not like my love for it is any secret.

The Problem

But it so happens that over the last year or so, I've increasingly run into a pretty major issue: Dropbox on my MacBook Pro will happily fill its .dropbox.cache folder with gigabytes upon gigabytes of files, sometimes filling 50+ GB of disk space before the system finally chokes up and apps start crashing because, as Finder says, the hard drive has "Zero bytes free". The system will pop up warnings that "Your startup disk is almost full", but if the machine is unattended for several hours straight (overnight, during the day when I haven't taken it to class, etc.), those warnings don't help.

It seems the issue happens when a program makes lots of small updates to big files tracked by Dropbox. Other computers start accumulating several copies of each file in the .dropbox.cache folder, and they start eating up disk space pretty fast. Apparently the Dropbox client doesn’t have any logic built into it to prune the cache based on some reasonable size constraint—at least, not on Mac OS X; I’ve never encountered this issue on a Windows machine. So it’ll happily keep sticking previous versions of big changed files into its cache until the drive is completely full, then complain that it “Can’t sync; not enough free disk space”—even though that’s its own fault.

The Fix

Let’s just say I got tired of waiting for the Dropboxers to fix this. The solution I came up with is simple, taking just one line in my user crontab:

*/20 * * * * find -E /Users/dgw/Dropbox/.dropbox.cache -type f -regex '.*/[0-9]{4}-[0-9]{2}-[0-9]{2}/.*' -cmin +60 -exec rm {} \;

If you're unfamiliar with crontab syntax, let me explain. Each line in a crontab file on OS X (and most other Unix-like systems) consists of 6 values, separated by tab characters: which minute to run the job; which hour; which day of the month; which month; which weekday; and the command to run. Setting the first field to */20 and the next four to * has the effect of running the command every 20 minutes. The last value, the command itself, uses the find utility to locate any file (-type f) in Dropbox's cache (/Users/dgw/Dropbox/.dropbox.cache and -regex '.*/[0-9]{4}-[0-9]{2}-[0-9]{2}/.*') that is more than an hour old (-cmin +60) and delete each file it finds with rm (-exec rm {} \;).1
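
To preview what the job would delete before trusting it, run the same find without the -exec clause; it just prints the matching paths:

    find -E /Users/dgw/Dropbox/.dropbox.cache -type f \
        -regex '.*/[0-9]{4}-[0-9]{2}-[0-9]{2}/.*' -cmin +60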

To set this up for yourself, do the following:2

  1. Open Terminal.app
  2. Run crontab -e
  3. Press i to enter vim’s insert mode
  4. Type */20 and four * characters, each followed by Tab (these five values should line up under the first five column headings, #min, hour, mday, month, & wday)
  5. Copy and paste (using Cmd+V) the command shown above, replacing /Users/dgw/Dropbox with the path to your Dropbox folder3
  6. Hit Esc to exit vim’s insert mode
  7. Type ZZ (two capital Zs) to save the file and quit vim
  8. vim will close and you will see the message crontab: installing new crontab in Terminal

Now, every 20 minutes, your Mac will automatically delete any of Dropbox’s cached files older than an hour. This will make it extremely hard (if not impossible) for Dropbox to single-handedly fill your hard drive to bursting.

Note that the numeric values are just what works best for me. */20 just means every 20 minutes; you could make it */15 for every 15 minutes, 0 for on the hour, 30 for half-past every hour, etc. -cmin +60 means files that were created (added to the cache) more than an hour ago, but this could be -cmin +180 for files that are more than three hours old or -cmin +30 for files more than half an hour old, etc. I initially had -cmin +120 until I ran into a particularly “productive” day on Dropbox’s part and had to cut down on its cache more aggressively.
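
For instance, combining two of those variants (run on the half hour, prune files older than three hours), the line would read:

    30	*	*	*	*	find -E /Users/dgw/Dropbox/.dropbox.cache -type f -regex '.*/[0-9]{4}-[0-9]{2}-[0-9]{2}/.*' -cmin +180 -exec rm {} \;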

The Caveats

The only potential downside I can think of is that, because you’re deleting things from Dropbox’s cache of changed and deleted file versions, restoring one of those previous versions or a deleted file will almost always mean having to download it again from Dropbox’s servers. If you regularly edit or delete files, only to restore a previous version within a day or two, this might not be something you want to do unless Dropbox is unrelenting in filling up your hard drive with cache files.

Again, the ideal fix would be for the Dropboxers to code in some logic that keeps Dropbox’s cache from filling your Mac’s hard drive in the first place. But until and unless they do, this is the little hack around the problem that I’ll be using.


Notes:

  1. Yes, these are standard GNU/Linux commands. I linked to the OS X manpages because this tip targets OS X, but I don’t believe there are any huge differences between the Mac and Linux implementations.
  2. I’ve had it in place for about six weeks and never had any problems, but <insert standard disclaimers about implementing this at your own risk>.
  3. If you don’t know this path, it’s easy to find: Right-click your Dropbox folder and select “Get Info”. In the info dialog, the path is under “General”, labeled “Where:”, and it’s probably something like /Users/yourusername/Dropbox unless you picked a custom Dropbox folder location.

Did Virgin Mobile USA cut an anti-Android deal with Apple?
https://technobabbl.es/2012/06/did-virgin-mobile-usa-cut-an-anti-android-deal-with-apple/
Sat, 09 Jun 2012

All right, so the big news is, Virgin Mobile USA will soon carry the Apple iPhone 4S. Which is to say, my pre-paid, Sprint-owned cellular telephone carrier may have cut a deal with Apple to make all their Android devices suddenly look unattractive.

Why do I think that? Oh, no reason, just the plan prices. My long-time Web contact Zoli Erdos asked Virgin Mobile's Twitter customer service account about them, and got an interesting (but not entirely clear) answer:

[embedded tweet]

Wait, "Auto top-up" just means letting them charge for monthly service automatically. I let them do that for my Motorola Triumph.1 Can I get that discount, too? Zoli had already gotten an answer to that question as well:

[embedded tweet]

Huh? Yep, exclusively for iPhone customers, Virgin Mobile USA will take $5/month off of your service plan if you let them charge you automatically every month. Want Android instead? Sucks to be you, you get to pay more.

This story gets even better. I asked, specifically, if there was some kind of deal going on between Virgin Mobile USA and Apple. The answer was surprising, but I'm not entirely sure the responding CSR actually read and understood what I asked:

[embedded tweet]

Let me get this straight. I asked if Virgin Mobile and Apple decided to make Android less appealing, and the answer was “Yes!”? With an exclamation mark?

Wat.

Needless to say, I've been less than amused by the changes to Virgin Mobile's policies over the last year. First they jacked up prices for new customers right before launching the Motorola Triumph in June 2011.2,3 Virgin Mobile then started throttling 3G data after a 2.5GB monthly usage threshold.4 Then they ended grandfathered plan rates for users who upgrade their devices, meaning that if (when) I eventually upgrade away from the Triumph, my monthly fee will jump from $25/month to $35/month, just because I'm changing phones.5

What started as a great deal for cell phone service is still a good deal compared with contract carriers, to be sure, but the policy changes and new competitors like republic wireless entering the market make it much less sweet. ($19/month for unlimited everything? Tanj, republic, launch something newer than the LG Optimus already!) Ting and NET10 also offer lower-cost smartphone service compared to contract plans, but for my level of usage both are more expensive even than Virgin Mobile’s current pricing.

I really don’t like this iPhone policy. The one change over the past year that I was actually happy to see Virgin Mobile make was dropping their $10 monthly surcharge for Blackberry devices. Though RIM and its Blackberry devices are all but dead, it was nice to see Virgin Mobile start treating all smartphones equally, pricing-wise. Now, we’re back to favoring one platform over the others, and I really don’t like that. All smartphone platforms have roughly equal potential for using network capacity, so charging less for one of them makes absolutely no logical sense.

Whether or not there’s some behind-the-scenes deal between Virgin Mobile—actually, let’s be honest, if it exists the deal is with Sprint—and Apple that’s responsible for this price discrepancy, it sure seems like a very anti-Android thing to do. Virgin Mobile, please treat all smartphone plans equally—no platform favoritism. It’s the customer-friendly thing to do. Extend the $5/month “Auto top-up” discount to all Beyond Talk plan subscribers (you don’t have to include grandfathered users, that’s totally understandable) and maybe I won’t jump ship to republic wireless as soon as they launch a more powerful device.


Notes:

  1. I’ve had it since December, but haven’t felt the need to review it as I did the LG Optimus V. Pretty much all the bug reports and battery life problems are absolutely true. If I feel like a writing project, though, I’ll do a full review of my own, just for completeness.
  2. Virgin Mobile have since remained unwilling to push Motorola to fix the software problems with said Triumph. Motorola, for its part, pretty much ignores/dismisses all bug reports. They keep offering “Factory Data Reset” as the solution to everything, and haven’t said a peep about whether or not there will be a software update. As far as I’m concerned, Motorola’s reputation as a phone maker is completely shot.
  3. Again, I should do a full review of this phone. It’s been out almost a year. I also have a really, really ridiculous story about how I got mine. Plus, I need to rant about the whole “Motorola isn’t supporting its devices” thing.
  4. At least there aren’t any overage fees. It’s slower, but it’s still “unlimited”.
  5. As I understand it, this new policy would also affect an emergency switch back to my LG Optimus V, if my Triumph fails someday. That’s one of the major reasons that I don’t like the policy change.