Web dev in DC · http://ross.karchner.com
618 stories · 15 followers

Oh God Not This Again


Every now and again there’s a thing about the tragedy of RSS. Ugh.

I want to make a few points.

One is that, as Greg Reinacker once said, RSS is plumbing.

Another is that millions of mainstream users rely on it — for podcasting, especially, but also because it powers other things that they use. They don’t know that there’s RSS under the hood, and that’s totally fine.

Another is that it’s not necessary for RSS readers to become mass-market, mainstream apps. I’m sure I never said they would be, and I don’t remember anyone else from the early days of RSS saying they would be, either.

It’s totally fine if RSS readers are just used by journalists, bloggers, researchers, and people who like to read. Yes! It’s a-okay.

But note that everyone who uses Twitter and whatever else, and who follows those people, is benefiting indirectly from RSS. RSS is often where the links come from in the first place, before they show up on social networks.

(Sometimes it’s even automated. For instance, posts to my @NetNewsWire account on Twitter come from the blog’s feed. In other words: Twitter itself is an RSS reader.)
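As a rough sketch of that kind of feed-driven automation (the feed URL below is a placeholder, and it assumes curl and jq are available), this is roughly what a bot that turns the newest feed item into post text looks like:

    #!/bin/sh
    # Fetch a JSON Feed and print its newest item as "title: url",
    # the sort of text a bot could hand off to a social-network API.
    FEED_URL="https://example.com/feed.json"   # placeholder feed URL

    curl -s "$FEED_URL" \
      | jq -r '.items[0] | "\(.title): \(.url)"'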

In a nutshell: to judge RSS itself because RSS readers are not mainstream is to miss everything that RSS does. And to judge RSS readers for not being mainstream is to judge them against expectations set by some hype artists more than a decade ago — but not by me or anybody else actually doing the work.

I don’t expect to see RSS readers running on every Mac and iOS device. This does not make it a failure.

It’s 2018, and I think by now we’re allowed to have things that some people like, but that not everybody uses.

* * *

From 2011: What we talk about when we talk about RSS

From 2013: Why I love RSS and You Do Too

From 2018: Some Hope

Read the whole story
rosskarchner
1 day ago
I thought the linked article was a good encapsulation of RSS's history
DC-ish
2 public comments
fxer
21 hours ago
“I think by now we’re allowed to have things that some people like, but that not everybody uses.”

If it's being built on VC then no: grow your MAU or die. If we want nice things like NewsBlur, we have to support creators who are comfortable not being the next unicorn.
Bend, Oregon
sirshannon
23 hours ago
Story title: "The Rise and Demise of RSS"
Story text: "RSS is not dead."

I mean, there is a definition of "demise" that isn't "death" but...

U.S. Mobile Giants Want to be Your Online Identity


The four major U.S. wireless carriers today detailed a new initiative that may soon let Web sites eschew passwords and instead authenticate visitors by leveraging data elements unique to each customer’s phone and mobile subscriber account, such as location, customer reputation, and physical attributes of the device. Here’s a look at what’s coming, and the potential security and privacy trade-offs of trusting the carriers to handle online authentication on your behalf.

Tentatively dubbed “Project Verify” and still in the private beta testing phase, the new authentication initiative is being pitched as a way to give consumers both a more streamlined method of proving one’s identity when creating a new account at a given Web site and a replacement for passwords and one-time codes when logging in to existing accounts at participating sites.

Here’s a promotional and explanatory video about Project Verify produced by the Mobile Authentication Task Force, whose members include AT&T, Sprint, T-Mobile and Verizon:

The mobile companies say Project Verify can improve online authentication because they alone have access to several unique signals and capabilities that can be used to validate each customer and their mobile device(s). This includes knowing the approximate real-time location of the customer; how long they have been a customer and used the device in question; and information about components inside the customer’s phone that are only accessible to the carriers themselves, such as cryptographic signatures tied to the device’s SIM card.

The Task Force currently is working on building its Project Verify app into the software that gets pre-loaded onto mobile devices sold by the four major carriers. The basic idea is that third-party Web sites could let the app (and, by extension, the user’s mobile provider) handle the process of authenticating the user’s identity, at which point the app would interactively log the user in without the need of a username and password.

In another example, participating sites could use Project Verify to supplement or replace existing authentication processes, such as two-factor methods that currently rely on sending the user a one-time passcode via SMS/text messages, which can be intercepted by cybercrooks.

The carriers also are pitching their offering as a way for consumers to pre-populate data fields on a Web site — such as name, address, credit card number and other information typically entered when someone wants to sign up for a new user account at a Web site or make purchases online.

Johannes Jaskolski, general manager for the Mobile Authentication Task Force and assistant vice president of identity security at AT&T, said the group is betting that Project Verify will be attractive to online retailers partly because it can help them capture more sign-ups and sales from users who might otherwise balk at having to manually provide lots of data via a mobile device.

“We can be a primary authenticator where, just by authenticating to our app, you can then use that service,” Jaskolski said. “That can be on your mobile, but it could also be on another device. With subscriber consent, we can populate that information and make it much more effortless to sign up for or sign into services online. In other markets, we have found this type of approach reduced [customer] fall-out rates, so it can make third-party businesses more successful in capturing that.”

Jaskolski said customers who take advantage of Project Verify will be able to choose what types of data get shared between their wireless provider and a Web site on a per-site basis, or opt to share certain data elements across the board with sites that leverage the app for authentication and e-commerce.

“Many companies already rely on the mobile device today in their customer authentication flows, but what we’re saying is there’s going to be a better way to do this in a method that is intended from the start to serve authentication use cases,” Jaskolski said. “This is what everyone has been seeking from us already in co-opting other mobile features that were simply never designed for authentication.”

‘A DISMAL TRACK RECORD’

A key question about adoption of this fledgling initiative will be how much trust consumers place with the wireless companies, which have struggled mightily over the past several years to validate that their own customers are who they say they are.

All four major mobile providers currently are struggling to protect customers against scams designed to seize control over a target’s mobile phone number. In an increasingly common scenario, attackers impersonate the customer over the phone or in mobile retail stores in a bid to get the target’s number transferred to a device they control. When successful, these attacks — known as SIM swaps and mobile number port-out scams — allow thieves to intercept one-time authentication codes sent to a customer’s mobile device via text message or automated phone call.

Nicholas Weaver, a researcher at the International Computer Science Institute and lecturer at UC Berkeley, said this new solution could make mobile phones and their associated numbers even more of an attractive target for cyber thieves.

Weaver said after he became a victim of a SIM swapping attack a few years back, he was blown away when he learned how simple it was for thieves to impersonate him to his mobile provider.

“SIM swapping is very much in the news now, but it’s been a big problem for at least the last half-decade,” he said. “In my case, someone went into a Verizon store, took over the account, and added themselves as an authorized user under their name — not even under my name — and told the store he needed a replacement phone because his broke. It took me three days to regain control of the account in a way that the person wasn’t able to take it back away from me.”

Weaver said Project Verify could become an extremely useful way for Web sites to onboard new users. But he said he’s skeptical of the idea that the solution would be much of an improvement for multi-factor authentication on third-party Web sites.

“The carriers have a dismal track record of authenticating the user,” he said. “If the carriers were trustworthy, I think this would be unequivocally a good idea. The problem is I don’t trust the carriers.”

It probably doesn’t help that all of the carriers participating in this effort were recently caught selling the real-time location data of their customers’ mobile devices to a host of third-party companies that utterly failed to secure online access to that sensitive data.

On May 10, The New York Times broke the news that a cell phone location tracking company called Securus Technologies had been selling or giving away location data on customers of virtually any major mobile network provider to local police forces across the United States.

A few weeks after the NYT scoop, KrebsOnSecurity broke the story that LocationSmart — a wireless data aggregator — hosted a public demo page on its Web site that would let anyone look up the real-time location data on virtually any U.S. mobile subscriber.

In response, all of the major mobile companies said they had terminated location data sharing agreements with LocationSmart and several other companies that were buying the information. The carriers each insisted that they only shared this data with customer consent, although it soon emerged that the mobile giants were instead counting on these data aggregators to obtain customer consent before sharing this location data with third parties, a sort of transitive trust relationship that appears to have been completely flawed from the get-go.

AT&T’s Jaskolski said the mobile giants are planning to use their new solution to further protect customers against SIM swaps.

“We are planning to use this as an additional preventative control,” Jaskolski said. “For example, just because you swap in a new SIM, that doesn’t mean the mobile authentication profile we’ve created is ported as well. In this case, porting your SIM won’t necessarily port your mobile authentication profile.”

Jaskolski emphasized that Project Verify would not seek to centralize subscriber data into some new giant cross-carrier database.

“We’re not going to be aggregating and centralizing this subscriber data, which will remain with each carrier separately,” he said. “And this is very much a pro-competition solution, because it will be portable by design and is not designed to keep a subscriber stuck to one specific carrier. More importantly, the user will be in control of whatever gets shared with third parties.”

My take? The carriers can make whatever claims they wish about the security and trustworthiness of this new offering, but it’s difficult to gauge the sincerity and accuracy of those claims until the program is broadly available for beta testing and use — which is currently slated for sometime in 2019.

As with most things related to cybersecurity and identity online, much will depend on the default settings the carriers decide to stitch into their apps, and more importantly the default settings of third-party Web site apps designed to interact with Project Verify.

Jaskolski said the coalition is hoping to kick off the program next year in collaboration with some major online e-commerce platforms that have expressed interest in the initiative, although he declined to talk specifics on that front. He added that the mobile providers are currently working through exactly what those defaults might look like, but also acknowledged that some of those platforms have expressed an interest in forcing users to opt out of sharing specific subscriber data elements.

“Users will be able to see exactly what attributes will be shared, and they can say yes or no to those,” he said. “In some cases, the [third-party site] can say here are some things I absolutely need, and here are some things we’d like to have. Those are some of the things we’re working through now.”

Read the whole story
rosskarchner
6 days ago
Nope
DC-ish

The Commons Clause doesn't help the commons

The Commons Clause was announced recently, along with several projects moving portions of their codebase under it. It's an additional restriction intended to be applied to existing open source licenses with the effect of preventing the work from being sold[1], where the definition of being sold includes being used as a component of an online pay-for service. As described in the FAQ, this changes the effective license of the work from an open source license to a source-available license. However, the site doesn't go into a great deal of detail as to why you'd want to do that.

Fortunately one of the VCs behind this move wrote an opinion article that goes into more detail. The central argument is that Amazon make use of a great deal of open source software and integrate it into commercial products that are incredibly lucrative, but give little back to the community in return. If projects adopt the commons clause, Amazon will be forced to negotiate with them before being able to use covered versions of the software. This will, apparently, prevent behaviour that is not conducive to sustainable open-source communities.

But this is where things get somewhat confusing. The author continues:

Our view is that open-source software was never intended for cloud infrastructure companies to take and sell. That is not the original ethos of open source.

which is a pretty astonishingly unsupported argument. Open source code has been incorporated into proprietary applications without giving back to the originating community since before the term open source even existed. MIT-licensed X11 became part of not only multiple Unixes, but also a variety of proprietary commercial products for non-Unix platforms. Large portions of BSD ended up in a whole range of proprietary operating systems (including older versions of Windows). The only argument in favour of this assertion is that cloud infrastructure companies didn't exist at that point in time, so they weren't taken into consideration[2] - but no argument is made as to why cloud infrastructure companies are fundamentally different to proprietary operating system companies in this respect. Both took open source code, incorporated it into other products and sold them on without (in most cases) giving anything back.

There's one counter-argument. When companies sold products based on open source code, they distributed it. Copyleft licenses like the GPL trigger on distribution, and as a result selling products based on copyleft code meant that the community would gain access to any modifications the vendor had made - improvements could be incorporated back into the original work, and everyone benefited. Incorporating open source code into a cloud product generally doesn't count as distribution, and so the source code disclosure requirements don't trigger. So perhaps that's the distinction being made?

Well, no. The GNU Affero GPL has a clause that covers this case - if you provide a network service based on AGPLed code then you must provide the source code in a similar way to if you distributed it under a more traditional copyleft license. But the article's author goes on to say:

AGPL makes it inconvenient but does not prevent cloud infrastructure providers from engaging in the abusive behavior described above. It simply says that they must release any modifications they make while engaging in such behavior.

I.e., the problem isn't that cloud providers aren't giving back code; it's that they're using the code without contributing financially. There's no difference between what cloud providers are doing now and what proprietary operating system vendors were doing 30 years ago. The argument that "open source" was never intended to permit this sort of behaviour is simply untrue. The use of permissive licenses has always allowed large companies to benefit disproportionately when compared to the authors of said code. There's nothing new to see here.

But that doesn't mean that the status quo is good - the argument for why the commons clause is required may be specious, but that doesn't mean it's bad. We've seen multiple cases of open source projects struggling to obtain the resources required to make a project sustainable, even as many large companies make significant amounts of money off that work. Does the commons clause help us here?

As hinted at in the title, the answer's no. The commons clause attempts to change the power dynamic of the author/user role, but it does so in a way that's fundamentally tied to a business model and in a way that prevents many of the things that make open source software interesting to begin with. Let's talk about some problems.

The power dynamic still doesn't favour contributors

The commons clause only really works if there's a single copyright holder - if not, selling the code requires you to get permission from multiple people. But the clause does nothing to guarantee that the people who actually write the code benefit, merely that whoever holds the copyright does. If I rewrite a large part of a covered work and that code is merged (presumably after I've signed a CLA that assigns a copyright grant to the project owners), I have no power in any negotiations with any cloud providers. There's no guarantee that the project stewards will choose to reward me in any way. I contribute to them but get nothing back in return - instead, my improved code allows the project owners to charge more and provide stronger returns for the VCs. The inequity has shifted, but individual contributors still lose out.

It discourages use of covered projects

One of the benefits of being able to use open source software is that you don't need to fill out purchase orders or start commercial negotiations before you're able to deploy. Turns out the project doesn't actually fill your needs? Revert it, and all you've lost is some development time. Adding additional barriers is going to reduce uptake of covered projects, and that does nothing to benefit the contributors.

You can no longer meaningfully fork a project

One of the strengths of open source projects is that if the original project stewards turn out to violate the trust of their community, someone can fork it and provide a reasonable alternative. But if the project is released with the commons clause, it's impossible to sell any forked versions - anyone who wishes to do so would still need the permission of the original copyright holder, and they can refuse that in order to prevent a fork from gaining any significant uptake.

It doesn't inherently benefit the commons

The entire argument here is that the cloud providers are exploiting the commons, and by forcing them to pay for a license that allows them to make use of that software the commons will benefit. But there's no obvious link between these things. Maybe extra money will result in more development work being done and the commons benefiting, but maybe extra money will instead just result in greater payout to shareholders. Forcing cloud providers to release their modifications to the wider world would be of benefit to the commons, but this is explicitly ruled out as a goal. The clause isn't inherently incompatible with this - the negotiations between a vendor and a project to obtain a license to be permitted to sell the code could include a commitment to provide patches rather than money, for instance, but the focus on money makes it clear that this wasn't the authors' priority.

What we're left with is a license condition that does nothing to benefit individual contributors or other users, and costs us the opportunity to fork projects in response to disagreements over design decisions or governance. What it does is ensure that a range of VC-backed projects are in a better position to improve their returns, without any guarantee that the commons will be left better off. It's an attempt to solve a problem that's existed since before the term "open source" was even coined, by simply layering on a business model that's also existed since before the term "open source" was even coined[3]. It's not anything new, and open source derives from an explicit rejection of this sort of business model.

That's not to say we're in a good place at the moment. It's clear that there is a giant level of power disparity between many projects and the consumers of those projects. But we're not going to fix that by simply discarding many of the benefits of open source and going back to an older way of doing things. Companies like Tidelift[4] are trying to identify ways of making this sustainable without losing the things that make open source a better way of doing software development in the first place, and that's what we should be focusing on rather than just admitting defeat to satisfy a small number of VC-backed firms that have otherwise failed to develop a sustainable business model.

[1] It is unclear how this interacts with licenses that include clauses that assert you can remove any additional restrictions that have been applied
[2] Although companies like Hotmail were making money from running open source software before the open source definition existed, so this still seems like a reach
[3] "Source available" predates my existence, let alone any existing open source licenses
[4] Disclosure: I know several people involved in Tidelift, but have no financial involvement in the company

Read the whole story
rosskarchner
7 days ago
DC-ish

JQ is a symptom


jq is a great tool. It does what bash cannot – work with structured data. I use it. I would like not to use it.

In my opinion, working with structured data is such a basic thing that it makes much more sense to be handled by the language itself. I want my shell to be capable and I strongly disagree with the view that a shell “is not supposed to do that”. Shell is supposed to do whatever is needed to make my life easier. Handling structured data is one of these things.

If “shell is not supposed to do that”, by that logic, bash is not supposed to do anything except for running external commands and routing the data between them. Doesn’t it seem odd that bash does have builtin string manipulation then? Maybe bash shouldn’t have added associative arrays in version 4? … or arrays in version 2? How about if and while? Maybe bash shouldn’t have them either?
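For what it’s worth, those builtins are real and handy. Here is a minimal illustration of the string manipulation and associative arrays mentioned above (associative arrays need bash 4 or later):

    #!/usr/bin/env bash
    # Builtin string manipulation: no external commands involved.
    path="/var/log/syslog.1"
    echo "${path##*/}"   # syslog.1        (strip the directory part)
    echo "${path%.*}"    # /var/log/syslog (strip the extension)

    # Associative arrays, added in bash 4.
    declare -A owner=( [web]="alice" [db]="bob" )
    for svc in "${!owner[@]}"; do
      echo "$svc is owned by ${owner[$svc]}"
    done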


jq is a symptom that bash can’t handle today’s reality: structured data. The world is increasingly about APIs. APIs consume and return structured data. I do work with APIs from the shell. Don’t you guys use the AWS CLI or any other API that returns JSON?
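To make that concrete, here is the sort of pipeline the post is complaining about, sketched with the AWS CLI (assuming it is installed and configured; any JSON-returning command would do). bash has no way to index into the JSON itself, so jq does all the structured work:

    #!/usr/bin/env bash
    # List the IDs of running EC2 instances. The AWS CLI emits JSON,
    # and bash needs jq to reach into the nested structure.
    aws ec2 describe-instances \
      --filters "Name=instance-state-name,Values=running" \
      | jq -r '.Reservations[].Instances[].InstanceId'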

The reality has changed. bash hasn’t. I’m working on a bash alternative. Please help me with it. Or at least spread the word.

If you don’t like my project, join Elvish. Elvish is another shell that supports structured data.


Happy coding! Hope it’s not in bash.



Read the whole story
rosskarchner
9 days ago
DC-ish

What do you believe now that you didn't five years ago?


Decentralized systems will continue to lose to centralized systems until there's a driver requiring decentralization to deliver a clearly superior consumer experience. Unfortunately, that may not happen for quite some time.

I say unfortunately because ten years ago, even five years ago, I still believed decentralization would win. Why? For all the idealistic technical reasons I laid out long ago in Building Super Scalable Systems: Blade Runner Meets Autonomic Computing In The Ambient Cloud.

While the internet and the web are inherently decentralized, mainstream applications built on top do not have to be. Typically, applications today—Facebook, Salesforce, Google, Spotify, etc.—are all centralized.

That wasn’t always the case. In the early days of the internet, the internet was protocol driven, decentralized, and often distributed—FTP (1971), Telnet (<1973), FINGER (1971/1977), TCP/IP (1974), UUCP (late 1970s), NNTP (1986), DNS (1983), SMTP (1982), IRC (1988), HTTP (1990), Tor (mid-1990s), Napster (1999), and XMPP (1999).

We do have new decentralized services: Bitcoin (2009), Minecraft (2009), Ethereum (2015), IPFS (2015), Mastodon (2016), and PeerTube (2018). We're still waiting on Pied Piper to deliver the decentralized internet.

On an evolutionary timeline decentralized systems are neanderthals; centralized systems are the humans. Neanderthals came first. Humans may have interbred with neanderthals, humans may have even killed off the neanderthals, but there's no doubt humans outlasted the neanderthals.

The reason why decentralization came first is clear from a picture of the very first ARPA (Advanced Research Projects Agency) network, which later evolved into the internet we know and sometimes love today:

Read the whole story
rosskarchner
36 days ago
might get that first paragraph as a tattoo
DC-ish

IT is urbanizing, McDonald’s gets it, but Woonsocket doesn’t (yet)


My favorite UK TV producer once had to sell his house in Wimbledon and move to an apartment in Central London just to get his two adult sons to finally leave home. Now something similar seems to be happening in American IT. Some people are calling it age discrimination. I’m not sure I’d go that far, but the strategy is clear: IT is urbanizing — moving to city centers where the labor force is perceived as being younger and more agile.

The poster child for this tactic is McDonald’s, which was based for 47 years in Oak Brook, Illinois, but just this summer moved to a new Intergalactic HQ downtown in the Chicago Loop. Not everybody has left the old digs. McDonald’s has opened a software division at the new HQ specifically working on McDonald’s cloud offerings, which is to say working on the future of McDonald’s IT.

The old guys and gals are generally back in the burbs, while the new Dev/Ops Cloud folks are in the city.  This is likely by design. McDonald’s techies can get their Cloud training online in either location, but if you are in the suburbs you can’t get the Cloud charge codes because you are not going to the meetings downtown.  Move to the city if you want to work on Cloud.  Charge codes are the way they starve old practices at McDonald’s.

I don’t see this change in IT structure as an accounting function. It is a sunk-cost problem. The corporations are seeing the issue: they can’t get their older staff to adapt to the innovations. Clayton Christensen covered this in a number of books.

What has changed at the board level is that companies like Ford now realize they are software companies that make cars. Liberty Mutual said as much. The Internet of Things, the Cloud, and IT innovation have all struck a chord, and CEOs are listening.

The issue is figuring out how to get your staff to adapt to change. The new answer is to move. Starve the older teams with fewer charge codes, and give them all the training they want, since training is now cheap and all online. Reward them for training and make it a part of their MBO (Management by Objectives). The training courses are now free across many corporations (SafariBooks, PluralSight, INE, etc.).

As an old IT guy, it is possible for me to see this as age discrimination and a sneaky (if expensive) workaround for laws against that practice. But it’s really more a matter of innovation discrimination, since fogies like me who are willing to do the classes and make the accompanying physical move — that is, older workers who are eager for new experiences of all types — are generally free to join the future.

Still, the COBOL crowd will be pissed when they figure out what’s happening.

When I started work on this column the poster child wasn’t McDonald’s, it was Aetna, the giant health insurer, which earlier this year announced it was moving from Hartford, Connecticut, where it had lived for more than a century, to New York, NY. Though the $9 million in subsidies Aetna was to receive for this $85 million move got a lot of press, I really doubt they were doing it for the tax savings.

Here’s the rationale I was hearing from inside Aetna:

Aetna needed to move into the 21st Century.

The existing Hartford staff is mainly baby boomers stuck in 1990.

The move discharges the baby boomers who will not take the move package.

The IT department got to move into a city that is the capital of FinTech.

The transition would transform Aetna tech with less risk than if they stayed in Hartford with old IT thinking.

Then something funny happened: CVS offered to buy Aetna for $69 billion (the deal has yet to close) and strongly suggested Aetna remain in Hartford, closer to CVS HQ in Woonsocket — yes, Woonsocket — Rhode Island. So Aetna cancelled its move to Manhattan, giving back the city’s $9 million, also forcing me to flip this column on its head.

Here’s where I am going to make a bold prediction. IF the deal closes (it looks likely) and CVS buys Aetna sometime in the next few months, a come-to-Jesus meeting will be held in Woonsocket after which the combined company will resume that move into New York City.

Here is why. I have been noticing that many developers my age don’t want to learn something new. Docker is new. Cloud Foundry changes everything in software development. That knocks down multiple monolithic structures. When you look at IT performance you see some baby boomers and some Generation Xers, but more and more Generation Y and Z workers who have adopted new ways of seeing IT. This problem is at the core of IBM, HP, Dell, and others. Baby boomers can’t quit: they are broke if they do. But they also can’t adjust, so they strangle innovation to stay on top.

Ultimately companies sell their homes in Wimbledon, so to speak, moving into the big urban tech centers, urbanizing to join the biggest corporate headquarters real estate boom in history — a real estate boom ironically driven by virtualization.


My old friend Adam Dorrell’s company, CustomerGauge, is having an event on September 14th at the Computer History Museum in Mountain View, CA. I’ll be there as host, moderator and valet parking attendant. CustomerGauge is a Software-as-a-Service (SaaS) platform that measures and reports on customer feedback in real time.


Read the whole story
rosskarchner
41 days ago
DC-ish