Web dev in DC (http://ross.karchner.com) · 989 stories · 16 followers

Technologists wanted


The CFPB is hiring product managers, designers, engineers, data scientists, and more to help detect and prevent unfair, deceptive, and abusive practices in financial markets.

Shared by rosskarchner · 28 days ago

AWS achieves the first OSCAL format system security plan submission to FedRAMP


Amazon Web Services (AWS) is the first cloud service provider to produce an Open Security Control Assessment Language (OSCAL)–formatted system security plan (SSP) for the FedRAMP Project Management Office (PMO). This submission is the first step in the AWS effort to automate security documentation, simplifying our customers’ journey through cloud adoption and accelerating the authorization to operate (ATO) process.

AWS continues its commitment to innovation and customer obsession. Our incorporation of the OSCAL format will improve the customer experience of reviewing and assessing security documentation. It can take an estimated 4,200 workforce hours for companies to receive an ATO, with much of the effort due to manual review and transcription of documentation. Automating this process through a machine-readable language gives our customers the ability to ingest security documentation into a governance, risk management, and compliance (GRC) tool, automating much of this time-consuming task. AWS worked with an AWS Partner to ingest the AWS SSP through the partner’s tool, Xacta.
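To make the idea of machine-readable security documentation concrete, here is a minimal sketch of ingesting an OSCAL-style SSP with Python. The JSON fragment is modeled loosely on the NIST OSCAL SSP layout but is heavily simplified and illustrative, not a complete or schema-validated document; the `implemented_controls` helper is hypothetical, standing in for what a GRC tool might do.

```python
import json

# Simplified fragment modeled on the OSCAL SSP JSON layout; the structure
# is illustrative, not a complete or validated OSCAL document.
ssp_json = """
{
  "system-security-plan": {
    "metadata": {
      "title": "Example SSP",
      "oscal-version": "1.0.0"
    },
    "control-implementation": {
      "implemented-requirements": [
        {"control-id": "ac-2", "description": "Account management is automated."},
        {"control-id": "au-2", "description": "Event logging is enabled."}
      ]
    }
  }
}
"""

def implemented_controls(doc: str) -> list[str]:
    """Extract the implemented control IDs, as a GRC tool might for review."""
    ssp = json.loads(doc)["system-security-plan"]
    reqs = ssp["control-implementation"]["implemented-requirements"]
    return [r["control-id"] for r in reqs]

print(implemented_controls(ssp_json))  # ['ac-2', 'au-2']
```

The point of the format is exactly this: a reviewer’s tooling can walk the document programmatically instead of transcribing a PDF by hand.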

This is a first step in several initiatives AWS has planned to automate the security assurance process across multiple compliance frameworks. We continue to look for ways to earn trust with our customers, and over the next year we will continue to release new solutions that customers can use to rapidly deploy secure and innovative services.

“Providing the SSP packages in OSCAL is a great milestone in security automation, marking the beginning of a new era in cybersecurity. We appreciate the leadership in this area and look forward to working with all cyber professionals, in particular with the visionary cloud service providers, to help deliver secure innovation faster to the people they serve.”

– Dr. Michaela Iorga, OSCAL Strategic Outreach Director, NIST

To learn more about OSCAL, visit the NIST OSCAL website. To learn more about FedRAMP’s plans for OSCAL, visit the FedRAMP Blog.

To learn what other public sector customers are doing on AWS, see our Government, Education, and Nonprofits case studies and customer success stories. Stay tuned for future updates on our Services in Scope by Compliance Program page. Let us know how this post will help your mission by reaching out to your AWS account team. Lastly, if you have feedback about this blog post, let us know in the Comments section.

Want more AWS Security news? Follow us on Twitter.

Matthew Donkin

Matthew Donkin, AWS Security Compliance Lead, provides direction and guidance for security documentation automation and physical security compliance, and assists customers in navigating compliance in the cloud. He is leading the development of the industry’s first Open Security Control Assessment Language (OSCAL) artifacts, enabling a faster and more reliable way to process resource-intensive documentation within the authorization process.

Shared by rosskarchner · 42 days ago, with the comment: “OSCAL or GTFO”

Rethinking the approach to regulations


Markets work best when rules are simple, easy to understand, and easy to enforce. The CFPB is seeking to move away from highly complicated rules that have long been a staple of consumer financial regulation and towards simpler and clearer rules. In addition, the CFPB is dramatically increasing the amount of guidance it is providing to the marketplace, in accordance with the same principles.

Shared by rosskarchner · 54 days ago

Attacking the Performance of Machine Learning Systems


Interesting research: “Sponge Examples: Energy-Latency Attacks on Neural Networks“:

Abstract: The high energy costs of neural network training and inference led to the use of acceleration hardware such as GPUs and TPUs. While such devices enable us to train large-scale neural networks in datacenters and deploy them on edge devices, their designers’ focus so far is on average-case performance. In this work, we introduce a novel threat vector against neural networks whose energy consumption or decision latency are critical. We show how adversaries can exploit carefully-crafted sponge examples, which are inputs designed to maximise energy consumption and latency, to drive machine learning (ML) systems towards their worst-case performance. Sponge examples are, to our knowledge, the first denial-of-service attack against the ML components of such systems. We mount two variants of our sponge attack on a wide range of state-of-the-art neural network models, and find that language models are surprisingly vulnerable. Sponge examples frequently increase both latency and energy consumption of these models by a factor of 30×. Extensive experiments show that our new attack is effective across different hardware platforms (CPU, GPU and an ASIC simulator) on a wide range of different language tasks. On vision tasks, we show that sponge examples can be produced and a latency degradation observed, but the effect is less pronounced. To demonstrate the effectiveness of sponge examples in the real world, we mount an attack against Microsoft Azure’s translator and show an increase of response time from 1ms to 6s (6000×). We conclude by proposing a defense strategy: shifting the analysis of energy consumption in hardware from an average-case to a worst-case perspective.

Attackers were able to degrade the performance so much, and force the system to waste so many cycles, that some hardware would shut down due to overheating. Definitely a “novel threat vector.”
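The core of the attack is a black-box search: mutate an input, measure the cost it induces, and keep mutations that make the target work harder. The sketch below illustrates that loop under stated assumptions: `toy_model_cost` is a stand-in for a measured energy or latency signal (here, cost grows with distinct-token length, loosely mimicking how unusual tokens inflate sequence length in language models), and `sponge_search` is a simple hill climb, not the genetic algorithm used in the paper.

```python
import random
import string

def toy_model_cost(text: str) -> int:
    # Stand-in for measured energy/latency: cost grows with the number and
    # length of distinct "tokens", loosely mimicking how rare-token inputs
    # inflate sequence length (and therefore work) in real language models.
    return sum(len(tok) ** 2 for tok in set(text.split()))

def sponge_search(seed: str, steps: int = 200, rng=random.Random(0)) -> str:
    """Black-box hill climb: mutate one word at a time, keep any mutation
    that raises the measured cost."""
    best = seed
    for _ in range(steps):
        words = best.split()
        i = rng.randrange(len(words))
        mutated = words[:]
        mutated[i] = "".join(
            rng.choices(string.ascii_lowercase, k=len(words[i]) + 1)
        )
        candidate = " ".join(mutated)
        if toy_model_cost(candidate) > toy_model_cost(best):
            best = candidate
    return best

seed = "the cat sat on the mat"
adv = sponge_search(seed)
print(toy_model_cost(seed), toy_model_cost(adv))  # cost rises after the search
```

Against a real model the cost function would be wall-clock latency or hardware energy counters, but the search structure is the same: the attacker never needs to see the model’s weights.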

Shared by rosskarchner · 56 days ago

Specifying Spring '83

Protocol as investigation and critique.
Shared by rosskarchner · 62 days ago

A New Definition of HTTP


Seven and a half years ago, I wrote that RFC2616 is dead, replaced by RFCs 7230-5.

Now, it’s those specifications’ turn to be retired. The RFC is dead; long live the RFC!

Why?

The emergence of HTTP/2 and now HTTP/3 has made it clear that HTTP’s “core” semantics don’t change between protocol versions. For example, methods and status codes mean the same thing no matter what version of the protocol you use; with a few exceptions, the same can be said about header fields.

However, RFC7231 entangled the definition of these core semantics with the specifics of HTTP/1.1. Given the progression of new protocol versions, the HTTP Working Group decided that it would be better to have a clear, generic definition of the versionless semantics of HTTP, separated from the individual wire protocols that people use.

That led us to rearrange HTTP into three documents:

  • HTTP Semantics - those core, versionless semantics
  • HTTP Caching - split into a separate document for convenience, but also versionless
  • HTTP/1.1 - everything that’s specific to that textual wire protocol
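The split can be seen in code: what a status code means does not depend on which wire protocol delivered it. A small illustrative sketch (the function and its mapping are mine, not from the specifications, though the five status-code classes are as HTTP Semantics defines them):

```python
def status_class(code: int) -> str:
    """Classify a status code by its versionless semantics. The mapping is
    identical whether the response arrived over HTTP/1.1, HTTP/2, or HTTP/3 —
    that is exactly what the refactoring into "HTTP Semantics" captures."""
    classes = {
        1: "informational",
        2: "successful",
        3: "redirection",
        4: "client error",
        5: "server error",
    }
    if 100 <= code <= 599:
        return classes[code // 100]
    raise ValueError(f"not a valid status code: {code}")

print(status_class(404))  # client error
```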

The revision of HTTP/2 and the brand-new HTTP/3 (based upon QUIC) are being simultaneously published and both rely upon the first two documents.

What’s changed?

Beyond the refactoring described above, we also were able to address over 475 issues. Most of these were yet more clarifications of the text, but changes were also made to mitigate security and interoperability issues.

There were simplifications in terminology across the board (for example, we have a whole new set of terms to describe header and trailer fields), and a lot of rearrangement to make it easier to find relevant information.

HTTP field names now have their own registry to make them easier to track down. The operation of HTTP trailer fields was refined to make them more functional; time will tell if they get used more than they currently do.

In Caching, we made a number of updates to improve interoperability, including giving more nuanced guidelines about how stored headers should be updated and refining how cache invalidation operates. Many of these changes were based on data we gathered from the HTTP caching tests.
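One invalidation rule the caching document describes is that a non-error response to an unsafe request (e.g. POST, PUT, DELETE) invalidates any stored response for the target URI. A toy sketch, assuming a dict-backed cache of my own invention rather than anything from the specification:

```python
UNSAFE_METHODS = {"POST", "PUT", "DELETE", "PATCH"}

class TinyCache:
    """Toy response cache illustrating unsafe-method invalidation."""

    def __init__(self):
        self.stored = {}  # uri -> cached response body

    def handle(self, method: str, uri: str, status: int, body: str = ""):
        if method in UNSAFE_METHODS and 200 <= status < 400:
            # A non-error response to an unsafe request invalidates
            # any stored response for the target URI.
            self.stored.pop(uri, None)
        elif method == "GET" and status == 200:
            self.stored[uri] = body

cache = TinyCache()
cache.handle("GET", "/doc", 200, "v1")   # response stored
cache.handle("POST", "/doc", 201)        # unsafe method succeeds -> invalidate
print("/doc" in cache.stored)            # False
```

A real cache also has to deal with freshness, validators, and `Vary`; this only shows the invalidation rule in isolation.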

All three documents also have summaries of the most important differences – all of the major changes are listed in HTTP Semantics, HTTP Caching and HTTP/1.1.

Will there be another revision?

Maybe. Both the Working Group and the editorial team really need a break.

However, I believe that when a protocol is as important as HTTP, it needs constant maintenance and documentation improvements. Consider DNS in comparison; while there is interop in that community, that’s largely because there’s a group of “insiders” who know how the protocol really works, rather than because of a good specification (although there are efforts to fix that, if only informally). Also, HTTP is implemented and directly used by a much broader community of developers.

History also indicates that if HTTP continues to serve people’s needs, we’ll need to refine and better document the corner cases as the protocol gets used for even more things and in new, interesting ways.

That means that you shouldn’t bother remembering the RFC numbers of the documents above; they only identify a version of the documents. The latest and most relevant HTTP specifications will always be listed on the HTTP Working Group Specifications page; I’d recommend using that as a starting point.

Thanks to my co-editors Roy and Julian, our Working Group chair Tommy, our Area Directors Francesca and Barry, and everyone who reviewed and commented on the documents; it’s been a lot of fun.

Shared by rosskarchner · 65 days ago