Web dev in DC http://ross.karchner.com

Things We Read This Week

Illustration by Kevin Zweerink for The New York Times

Welcome to Things We Read This Week, a weekly post featuring articles from around the internet recommended by New York Times team members. This is where we share articles we read and liked, things that made us think and things we couldn’t stop talking about. Check back every Friday for a new post.

How We Tracked Cable News Chyrons

Newsroom-style cowboy coding at its finest. Washington Post graphics editor Kevin Schaul explains how he and his colleagues tracked cable news Chyrons during the James Comey hearing, using a mix of hastily written code, open source software and old-fashioned manual labor. - Recommended by Chase Davis, Editor, Interactive News

Accessibility and Virtual Reality (Video)

Virtual reality is everywhere, including in New York Times stories. VR is a very physical experience and it requires users to make use of their sensory systems and physical bodies. Thomas Logan presented “Accessibility and Virtual Reality” at a recent Accessibility NYC Meetup; his talk follows the Web Accessibility guidelines and describes the current state of inclusive VR, highlighting several products that are doing it well. VR is happening now, not in the future, and we shouldn’t wait to make it accessible. As Logan says, “Be accessible ASAP”. - Recommended by John Schimmel, Senior Integration Engineer, NYT Beta

Art at the Edge of Tomorrow

In the 1960s, Lillian Schwartz was experimenting with art and technology, and creating sculptures that combined the two. After one of her sculptures was featured in an exhibit at MoMA, Schwartz was hired at Bell Labs, which was one of the premier centers for computer innovation at the time (we can thank them for the transistor, information theory and the laser). She went on to spend three decades at Bell, exploring the intersection of science, art and film. This is a great piece (and accompanying video) about a woman who pushed the boundaries of how computers can transform cinema. - Recommended by Sarah Bures, Web Developer, News Products

This is What a Brutalist World Would Look Like on Your Phone

Brutalist design applied to digital products. Enough said. — Recommended by Nick Rockwell, Chief Technology Officer


Things We Read This Week was originally published in Times Open on Medium.

rosskarchner (DC-ish), 9 days ago: The Chyron thing is pretty cool

Brianna Wu (@Spacekatgal) shows how a candidate can tweet like a person


Brianna Wu is running for Congress. She’s using Twitter like a real person. The result is an authenticity and humanity that’s unique in the political sphere. Yesterday I explained to a Twitter idealist from a decade ago how political Twitter had succeeded, but not in the way any of us expected. Now every politician has … Continued

The post Brianna Wu (@Spacekatgal) shows how a candidate can tweet like a person appeared first on without bullshit.

Shared by rosskarchner (DC-ish), 32 days ago

Debian 9.0 Stretch has been released!


[Banner: Stretch has been released]

Let yourself be embraced by the purple rubber toy octopus! We're happy to announce the release of Debian 9.0, codenamed Stretch.

Want to install it? Choose your favourite installation media among Blu-ray Discs, DVDs, CDs and USB sticks. Then read the installation manual.

Already a happy Debian user and only want to upgrade? You can easily upgrade from your current Debian 8 Jessie installation; please read the release notes.
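
The release notes spell out the full procedure; the core step is pointing APT's sources at stretch instead of jessie before running the usual apt-get update / upgrade / dist-upgrade sequence. As a minimal sketch, here is a hypothetical Python helper (not part of any Debian tooling) for just that rewrite step, assuming the standard /etc/apt/sources.list location:

    # Hypothetical helper: rewrite /etc/apt/sources.list to point at
    # stretch instead of jessie. The rest of the documented upgrade
    # procedure (backups, apt-get update / upgrade / dist-upgrade)
    # still has to be run afterwards, as root.
    from pathlib import Path

    SOURCES = Path("/etc/apt/sources.list")

    def retarget(old="jessie", new="stretch"):
        text = SOURCES.read_text()
        SOURCES.with_name(SOURCES.name + ".bak").write_text(text)  # keep a backup
        SOURCES.write_text(text.replace(old, new))

    if __name__ == "__main__":
        retarget()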

Do you want to celebrate the release? Share the banner from this blog in your blog or your website!

Shared by rosskarchner (DC-ish), 35 days ago

Zeta’s JITterpreter


About six weeks ago, I made ZetaVM open source and announced it on this blog. This is a compiler/VM project that I had been quietly working on for about 18 months. The project now has 273 stars on GitHub. This is both exciting and scary, because so much of what I want to do with this project is not yet built. I really want to show this project to people, but I also find myself scared that people may come, see how immature/incomplete the project is at this stage, and never come back. In any case the project is open source now, and I think it would be good for me to write about it and explain some of the design ideas behind it, if only to document it for current and potential future collaborators.

One of the main goals of ZetaVM is to be very approachable, and enable more people to create their own programming languages, or to experiment with language design. With that goal in mind, I’ve designed a textual, stack-based bytecode format that resembles JSON, but allows for cycles (objects can reference one-another). Functions, basic blocks, and instructions in this bytecode format are all described as a graph of JS-like objects. This is very user-friendly. Anyone can write a Python script that outputs code in this format and run the output in ZetaVM. It’s also very powerful, in a LISP-y code-is-data kind of way: you can generate and introspect bytecode at run time, it’s just made of plain objects and values that the VM can manipulate. ZetaVM has first-class bytecode.
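
To make the idea concrete, here is a hypothetical Python sketch of what such a graph-of-objects bytecode could look like. This is not ZetaVM's actual textual syntax, just an illustration: a function, its basic blocks and its instructions are all plain objects, and the loop's back edge makes the graph cyclic.

    # Hypothetical illustration (not ZetaVM's real syntax): a function that
    # counts to ten, built as plain dicts referencing one another. The back
    # edge in "body" makes this a cyclic object graph.
    entry = {"instrs": []}
    body = {"instrs": []}
    done = {"instrs": [{"op": "ret"}]}

    entry["instrs"] = [
        {"op": "push", "val": 0},                 # i = 0
        {"op": "jump", "targets": {"to": body}},
    ]
    body["instrs"] = [
        {"op": "push", "val": 1},
        {"op": "add"},                            # i = i + 1
        {"op": "dup"},
        {"op": "push", "val": 10},
        {"op": "lt"},                             # i < 10 ?
        {"op": "if_true", "targets": {"then": body, "else": done}},  # cycle
    ]

    count_to_ten = {"num_params": 0, "entry": entry}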

The downside of all this, as you can imagine, is that it inherently has to be absolutely dog slow. Or does it? The first version of the Zeta interpreter traversed the graph of bytecode objects the naive way, and it was indeed dog slow. I’ve since written a new interpreter which removes the overhead of the object-based bytecode by dynamically translating it to an internal representation (think dynamic binary translation). The internal IR is compact and flat; executing it involves no pointer hopping.
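
A minimal sketch of that translation step, assuming the hypothetical object format above (Zeta's real internal IR is certainly more involved): walk the block graph once, append instructions to one flat list, and patch branch operands into integer offsets so the hot loop never chases object pointers.

    # Hypothetical sketch: flatten a graph of block objects into one linear
    # instruction list. Branch targets become integer offsets into the list.
    def flatten(entry_block):
        code = []       # flat stream: {"op", "val", "branch": {name: offset}}
        offsets = {}    # id(block) -> offset of the block's first instruction
        patches = []    # (flat index, operand name, target block)
        worklist, seen = [entry_block], set()

        while worklist:
            blk = worklist.pop()
            if id(blk) in seen:
                continue
            seen.add(id(blk))
            offsets[id(blk)] = len(code)
            for ins in blk["instrs"]:
                for name, target in ins.get("targets", {}).items():
                    patches.append((len(code), name, target))
                    worklist.append(target)
                code.append({"op": ins["op"], "val": ins.get("val"), "branch": {}})

        for index, name, target in patches:   # resolve branches at the end,
            code[index]["branch"][name] = offsets[id(target)]  # once offsets exist
        return code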

The new interpreter generates code on the fly, which means it is, by definition, a Just-In-Time (JIT) compiler. Its architecture is based on Basic Block Versioning (BBV), a compilation technique I developed during my PhD (talks and papers on the topic are linked here if you’re curious). BBV has the nice property that it generates code lazily, and the generated code naturally ends up compact and fairly linear. This is not done yet, but BBV also makes it possible to specialize code to eliminate dynamic type checks very effectively, and to perform various optimizations on the fly.

You might be wondering why I’m bothering with an interpreter, instead of just writing a JIT that generates machine code. One of the motivating factors is that Zeta is still at a very early stage, and I think that an interpreter is a better choice for prototyping things. Another factor is that it occurred to me that I could potentially make Zeta more portable by having the interpreter do much of the compilation and optimization work. The interpreter can do type-specialization, inlining and various code simplifications.

The interpreter will be designed in such a way that the internal IR it produces is optimized and in many ways very close to machine code. It should then be possible to add a thin JIT layer on top to generate actual machine code. The resulting JIT will hopefully be much simpler and easier to maintain than if one were compiling directly from the raw object-based bytecode format. Another benefit of this design is that all of the optimizations that the interpreter performs will not be tied to the specifics of x86 or other architectures; they will remain portable.

At the moment, the new interpreter is at a point where it lazily compiles code into the flat internal format, but performs no other optimization. This was enough to get a 7x performance improvement over the naive interpreter, but the current system is still quite a bit below the performance level of the Python interpreter, and there is definite room for improvement. Some of the first optimizations I would like to introduce are the elimination of redundant branching instructions, and the use of inline caching to speed up function calls.

The elimination of redundant branches is fairly easy to do with BBV. Code is lazily compiled, and appended linearly into a buffer. When generating code for a branch, if the block we are jumping to is just about to be compiled, then the branch is redundant. BBV will naturally tend to generate code that flows linearly along hot code paths and branches out-of-line for infrequent code paths. That is, the default path often comes right next and requires no branching.
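
A rough Python sketch of that rule, again against a hypothetical toy format (real BBV triggers compilation lazily from run-time stubs, and conditional branches get out-of-line stubs, both of which this glosses over): when a block ends in a jump whose target has not been compiled yet, compile the target right there and emit no jump at all.

    # Hypothetical sketch of fall-through branch elimination. A jump to a
    # not-yet-compiled block emits nothing: the target block is simply
    # compiled next, right where execution would land anyway.
    def compile_block(code, offsets, block):
        offsets[id(block)] = len(code)
        for ins in block["instrs"]:
            if ins["op"] == "jump":
                target = ins["targets"]["to"]
                if id(target) not in offsets:
                    compile_block(code, offsets, target)  # fall through, no jump
                else:
                    code.append(("jump", offsets[id(target)]))  # backward branch
            else:
                code.append((ins["op"], ins.get("val")))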

Inline caching is a classic technique that was pioneered by the Smalltalk and SELF VMs. It’s used, among other things, to eliminate dynamic property lookups when polymorphic function calls are performed (see this excellent blog post for more information). Currently, ZetaVM, when performing a function call, needs to read multiple properties on function objects. For instance, it needs to find out how many arguments the function has, and what its entry basic block is. These property lookups are dynamic, and relatively slow. The end result is that the call instruction is very slow compared to other instructions.

Most call instructions will end up always calling the same function. Hence, dynamic overhead on function calls can largely be eliminated by caching the identity of the function being called by a given call instruction. That is, one can cache the number of arguments and entry basic block associated with a function object the first time a call instruction is run, and then reuse this information when calling the function again, provided we keep calling the same function. This information will be cached in line, in the instruction stream, right after the call instruction opcode, hence the name inline caching. I anticipate that with inline caching, function calls in Zeta can be made several times faster.
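
Here is what that could look like, sketched in Python against the toy flat instruction stream from earlier. The placement of the cache slots right after the call opcode matches the paragraph above; everything else (slot layout, object shapes) is my own assumption for illustration.

    # Hypothetical sketch of an inline cache for the call instruction,
    # assuming the flattened stream reserves two operand slots after the
    # call opcode. Repeat calls to the same function skip the dynamic
    # property lookups entirely.
    def exec_call(code, pc, stack):
        fn = stack.pop()                      # callee object on top of the stack
        if code[pc + 1] is fn:                # cache hit: same function as last time
            num_args, entry = code[pc + 2]
        else:                                 # slow path: dynamic property reads...
            num_args, entry = fn["num_params"], fn["entry"]
            code[pc + 1] = fn                 # ...then fill the inline cache
            code[pc + 2] = (num_args, entry)
        return num_args, entry                # caller pops args and jumps to entry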




Shared by rosskarchner (DC-ish), 42 days ago

History and future of open source at CFPB


At CFPB, we develop nearly all of our software publicly on GitHub and release it under a Public Domain/CC0 license. We adopted this policy in order for taxpayers to benefit from the products they pay for, to provide a window into how our agency conducts business, and to increase the quality of our source code through contributions from the wider development community.

Since 2012, we have seen the benefits of an open source by default policy.

Open source software development has been critical at CFPB since we opened our doors. In April 2012, just nine months after the Bureau opened, we released our Source Code Policy, based on the work of the Department of Defense, along with our first two open source projects. Just six days later, we accepted our first pull request from a member of the public.

Our approach to open source in these early days was to publicly release small bits of source code that had potential for wider application. In November of 2012 and January of 2013, our first class of design and development fellows began working at the Bureau. Two of the projects that were developed by this initial class of fellows, the open data platform Qu and eRegulations, set a new precedent: they were developed from day one entirely in the open on GitHub. Shortly afterwards, we formed an open source working group, which worked to put procedures in place for open sourcing code that was only available internally. As more code was made public, the team moved towards a policy of open source by default, where source code is developed in private only in certain well-defined cases, including security risks or when the license is restricted.

With an open source by default policy in place, we’ve been able to collaborate with other government agencies and the public at hackathon events where designers and developers come together to work on open source software:

  • CFPB staff have been present at National Day of Civic Hacking events in 2013, 2014, and 2015, working with developers to create data visualizations using the Consumer Complaint Database and Home Mortgage Disclosure Act data.
  • Software developers from 18F’s eRegs team attended the General Services Administration’s GovTechHack in 2015, where they focused on improving the software’s local setup process. They succeeded in decreasing the setup time for eRegs from 2 hours to 15 minutes.
  • CFPB Developer Catherine Farman gathered a team of designers and developers to contribute to the Owning a Home website at LadyHacks.

These experiences led us to create internal guidelines for attending these events and sharing our work to get developer contributions at them. Though we’ve come a long way in the way we approach open source, we are continually seeking to improve by:

  • Ensuring our project setup documentation is up to date by holding regular README Refresh Days.
  • Sharing our experiences with our fellow federal agencies and learning from them too, so we can incorporate their efforts into our own process.
  • Educating developers about CFPB’s open source projects and data sets, inspired by 18F’s data visualization workshop.
  • Increasing our community outreach by presenting our work at conferences and meetups.

In March 2016, the White House released a draft federal open source software policy for public comment. Members of the public gave feedback and suggested changes via GitHub.com. To date, the public has contributed 20 merged pull requests and over 200 discussion threads.

If you have thoughts about how we or other federal agencies might continue to evolve our approach to open source software, we’d encourage you to discuss the Federal Open Source Policy on GitHub.

Shared by acdha (Washington, DC), 44 days ago, and rosskarchner (DC-ish), 47 days ago

Chrome won


Disclaimer: I worked for 7 years at Mozilla and was Mozilla’s Chief Technology Officer before leaving 2 years ago to found an embedded AI startup.

Mozilla published a blog post two days ago highlighting its efforts to make the Desktop Firefox browser competitive again. I used to closely follow the browser market but haven’t looked in a few years, so I figured it’s time to look at some numbers:

[Chart: market share of the four major browsers across all devices, 2011–2017 (StatCounter)]

The chart above shows the percentage market share of the 4 major browsers over the last 6 years, across all devices. The data is from StatCounter and you can argue that the data is biased in a bunch of different ways, but at the macro level it’s safe to say that Chrome is eating the browser market, and everyone else except Safari is getting obliterated.

Trend

I tried a couple different ways to plot a trendline and an exponential fit seems to work best. This aligns pretty well with theories around the explosive diffusion of innovation, and the slow decline of legacy technologies. If the 6 year trend holds, IE should be pretty much dead in 2 or 3 years. Firefox is not faring much better, unfortunately, and is headed towards a 2-3% market share. For both IE and Firefox these low market share numbers further accelerate the decline because Web authors don’t test for browsers with a small market share. Broken content makes users switch browsers, which causes more users to depart. A vicious cycle.
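
The post doesn't include its fitting code, but a minimal sketch of that kind of exponential fit, with made-up placeholder numbers standing in for the StatCounter series, could look like this in Python:

    # Illustrative only: fit an exponential decline to a monthly market-share
    # series. The data below is synthetic placeholder noise, not StatCounter's.
    import numpy as np
    from scipy.optimize import curve_fit

    months = np.arange(72)                            # six years, monthly samples
    share = 30 * np.exp(-0.02 * months)               # placeholder series...
    share += np.random.normal(0, 0.5, months.size)    # ...plus some noise

    def exp_decline(t, a, k):
        return a * np.exp(-k * t)

    (a, k), _ = curve_fit(exp_decline, months, share, p0=(30.0, 0.01))
    print(f"share(t) = {a:.1f} * exp(-{k:.3f} * t)")  # extrapolate via exp_decline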

Chrome and Safari don’t fit as well as IE and Firefox. The explanation for Chrome is likely that the market share is so large that Chrome is running out of users to acquire. Some people are stuck on old operating systems that don’t support Chrome. Safari’s recent growth is underperforming its trend most likely because iOS device growth has slowed.

Desktop market share

Looking at all devices blends mobile and desktop market shares, which can be misleading. Safari/iOS is dominant on mobile whereas on Desktop Safari has a very small share. Firefox in turn is essentially not present on mobile. So let’s look at the Desktop numbers only.

[Chart: desktop-only browser market share, 2011–2017 (StatCounter)]

The Desktop-only graph unfortunately doesn’t predict a different fate for IE and Firefox either. The overall desktop PC market is growing slightly (most sales are replacement PCs, but new users are added as well). Despite an expanding market both IE and Firefox are declining unsustainably.

Adding users?

Eric mentioned in the blog post that Firefox added users last year. The relative Firefox market share declined from 16% to 14.85% during that period. For comparison, Safari Desktop is relatively flat, which likely means Safari market share is keeping up with the (slow) growth of the PC/laptop market. Two theories could explain the discrepancy. One is that Eric meant that browser installs were added: people often re-install the browser on a new machine, which could be called an “added user”, but it usually comes at the expense of the previous machine becoming disused. The other is that the absolute daily active user count has indeed increased due to the growth of the PC/laptop market, despite the steep decline in relative market share. Firefox ADUs aren’t public, so it’s hard to tell.

From these graphs it’s pretty clear that Firefox is not going anywhere. That means that the esteemed Fox will be around for many many years, albeit with an ever diminishing market share. It also, unfortunately, means that a turnaround is all but impossible.

With a CEO transition about 3 years ago there was a major strategic shift at Mozilla to re-focus efforts on Firefox and thus the Desktop. Prior to 2014 Mozilla heavily invested in building a Mobile OS to compete with Android: Firefox OS. I started the Firefox OS project and brought it to scale. While we made quite a splash and sold several million devices, in the end we were a bit too late and we didn’t manage to catch up with Android’s explosive growth. Mozilla’s strategic rationale for building Firefox OS was often misunderstood. Mozilla’s founding mission was to build the Web by building a browser. Mobile thoroughly disrupted this mission. On mobile, browsers are much less relevant, and third-party mobile browsers even more so. On mobile, a browser is a feature of the Facebook and Twitter apps, not a product. To influence the Web on mobile, Mozilla had to build a whole stack with the Web at its core. Building mobile browsers (Firefox Android) or browser-like apps (Firefox Focus) is unlikely to capture a meaningful share of use cases. Both Firefox for Android and Firefox Focus have a market share close to 0%.

The strategic shift in 2014, back to Firefox, and with that back to Desktop, was significant for Mozilla. As Eric describes in his article, a lot of amazing technical work has gone into Firefox for Desktop over the last few years. The Desktop-focused teams were expanded, and mobile-focused efforts curtailed. Firefox Desktop today is technically competitive with Chrome Desktop in many areas, and even better than Chrome in some. Unfortunately, looking at the graphs, none of this has had any effect on market trends. Browsers are a commodity product. They all pretty much look the same and feel the same. All browsers work pretty well, and being slightly faster or using slightly less memory is unlikely to sway users. If even Eric, who heads Mozilla’s marketing team, uses Chrome every day as he mentioned in the first sentence, it’s not surprising that almost 65% of desktop users are doing the same.

What does this mean for the Web?

I started Firefox OS in 2011 because already back then I was convinced that desktops and browsers were dead. Not immediately–here we are 6 years later and both are still around–but both are legacy technologies that are not particularly influential going forward. I don’t think there will be a new browser war where Firefox or some other competitor re-captures market share from Chrome. It’s like launching a new and improved horse in the year 2017. We all drive cars now. Some people still use horses, and there is value to horses, but technology has moved on when it comes to transportation.

Does this mean Google owns the Web if they own Chrome? No. Absolutely not. Browsers are what the Web looked like in the first decades of the Internet. Mobile disrupted the Web, but the Web embraced mobile and at the heart of most apps beats a lot of JavaScript and HTTPS and REST these days. The future Web will look yet again completely different. Much will survive, and some parts of it will get disrupted. I left Mozilla because I became curious what the Web looks like once it consists predominantly of devices instead of desktops and mobile phones. At Silk we created an IoT platform built around open Web technologies such as JavaScript, and we do a lot of work around democratizing data ownership through embedding AI in devices instead of sending everything to the cloud.

So while Google won the browser wars, they haven’t won the Web. To stick with the transportation metaphor: Google makes the best horses in the world and they clearly won the horse race. I just don’t think that race matters much going forward.







acdha (Washington, DC), 59 days ago: I hope this isn't true but Google has put a lot of money and top-tier talent into Chrome and shows no signs of stopping
Shared by rosskarchner (DC-ish), 59 days ago