Digressions on Continuous Security


Monday, August 6 2018

Running with Runscribe Plus


Six months ago, I broke the bank and bought a pair of Runscribe Plus running pods. I was recovering from a nasty sprain of the deltoid ligaments, acquired a year earlier while running on a forest trail, and was looking for ways to reduce the risk of further damage during my runs.

Also, I love gadgets, so that made for a nice excuse! :)

After reading various articles about the value of increasing step rate to decrease the risk of injury, I looked into various footpod options, the leading one being the Stryd, but I wanted to monitor both feet, which only the Runscribe Plus can do. So I did something I almost never do: order a gadget that hasn't been heavily reviewed by many others (with the exception of the5krunner).

Runscribe Plus, which I'll abbreviate RS, is a sensor that monitors the movement and impact of your feet while you run. It measures:

  • pronation: rolling of the foot, particularly useful to prevent sprains
  • footstrike: where your foot hits the ground (heel, middle or front)
  • shock: how much force your feet hit the ground with
  • step rate
  • stride length
  • contact time
  • and a bunch of calculations based on these raw metrics
My RS arrived less than a week after ordering them, but I couldn't use them right away. After several hours of investigation and back-and-forth emails with the founder, Tim Clark, we figured out that my pods had shipped with a bogus firmware. He remotely pushed a new version, which I installed using the Android app, and the pods started working as intended.
Usability is great. RS starts recording automatically when the step rate goes above 140 (I usually run around 165spm), and also stops automatically at the end of a run. The Android app then downloads running data from each pod and uploads it to the online dashboard. Both the app and the web UI can be used to go through the data, and while the app is fine for visualizing data, I find the web UI to be a lot more powerful and convenient to use.
The cool thing about RS is being able to compare left and right foot, because each foot is measured separately. This is useful to detect and correct balance issues. In my case, I noticed after a few runs that my injured foot, the left one, was a lot less powerful than the right one. It was still painful, I simply couldn't push on it as much, and the right foot was compensating and taking a lot more shock. I made a conscious effort to reduce this imbalance over the following month, and it seems to have paid off in the end.

The RunScribe Dashboard displays shock data for each foot recorded during a 5k. The dark red line represents the right foot and is taking a lot more shock than the light red one representing the left foot.

It's possible to use the RS to measure distance, but a lot of users on the forum have been complaining about distance accuracy issues. I've run into some of those, even after calibrating the pods to my stride length over a dozen runs. I would go for a 5-mile run with my GPS watch, and RS would measure anything between 4 and 6 miles. RS doesn't have a GPS, so it bases those calculations on your stride length and step count. Those inaccuracies didn't really bother me, because you can always update the distance in the app or web UI after the fact, which also helps train the pod, and I am more interested in other metrics anyway.
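To illustrate how distance falls out of those two numbers alone, here is a minimal sketch in Python; the step count and stride length below are made-up values, not anything RS actually reports:

    # Rough illustration: estimating run distance from step count and a
    # calibrated stride length, the way a GPS-less footpod has to do it.
    # Both numbers are made up for the example.
    steps = 9000                # steps counted by the pod during the run
    stride_length_m = 1.50      # calibrated stride length, in meters
    distance_miles = steps * stride_length_m / 1609.34
    print(f"estimated distance: {distance_miles:.1f} miles")  # ~8.4 miles

Any error in the calibrated stride length scales the distance estimate directly, which is why the numbers drift so much before the pods are well trained.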
That being said, the distance inaccuracy is now completely gone. According to Garmin, this morning's run was 8.6 miles, which RS recorded as 8.5 miles. That's a 1% margin of error, and I honestly can't tell which one is right between RS and Garmin.
So what changed? I was previously placing the pods on the heels of my shoes but recently moved to the laces, which may have helped. I also joined the beta program to get early firmware updates, and I think Tim has been tweaking distance calculation quite a bit lately. At any rate, this is now working great.
RS can also broadcast live metrics to your running watch, which can then display them on a dedicated screen. I don't find those very useful, so I don't make use of it, but it does provide real-time shock, step rate and so on.

What about Power?

I'll be honest, I have no idea. Running power doesn't seem extremely useful to me, or maybe I need to spend more time studying its value. RS does expose a Power value, so if this is your thing, you may find it interesting.

Take Away

RS is definitely not for everyone. It has its rough edges and exposes data you need to spend time researching to understand and make good use of. That said, whether you're a professional athlete or, like me, just a geek who likes gadgets and data, it's a fantastic little tool to measure your progress and tweak your effort in areas you wouldn't be able to identify on your own. I like it a lot, and I think more people will adopt this type of tool in the future.
Did it help with my ankle recovery? I think so. Tracking pronation and shock metrics was useful to make sure I wasn't putting myself at risk again. The imbalance data is probably the most useful information I got out of the RS that I couldn't get before, and it definitely justifies going with a system of two pods instead of one. And, if nothing else, it helped me regain confidence in my ability to do long runs without hurting myself.

Footstrike metrics for the left and right foot recorded during a half marathon show I mostly run on the middle and back of my feet. How to use that data is left as an exercise to the runner.

Last, but most certainly not least, Tim Clark and the Runscribe team are awesome. Even with the resources of a big shop like Garmin, it's not easy to take an experimental product through rounds of testing while maintaining a level of quality that satisfies runners accustomed to expensive running gear ($700 watches, $200 shoes, etc.). For a small team to pull this off is a real accomplishment, all while being accessible and creating a friendly community of passionate runners. It's nice to support the underdog every once in a while, even when that means having to live with minor bugs and being patient between updates.
Note: this blog post was not sponsored by Runscribe in any way. I paid for my own pods and have no affiliation with them, other than being a happy customer.

Friday, September 22 2017

On the prevalence of cross-site scripting (XSS) attacks in modern web applications

As I attended AppSec USA in Orlando, a lot of discussions revolved around the OWASP Top 10. Setting the drama aside for a moment, there is an interesting discussion to be had on the most common vulnerabilities found in current web applications. The Top 10 hasn't changed much between 2013 and 2017 (rc1): first and foremost are injection issues, then broken auth and session management, followed by cross-site scripting attacks.


At first glance, this categorization appears sensible, and security teams and vendors have depended on it for over a decade. But here’s the issue I have with it: the data we collect from Mozilla’s web bug bounty program strongly disagrees with it. From what we see, injection and authentication/session attacks are nowhere near as common as cross-site scripting attacks. In fact, throughout 2016 and 2017, we have received over five times more XSS reports than any other vulnerability!


This is certainly a huge difference between our data and OWASP's, but Mozilla's dataset is also too small to draw generalities from. Thankfully, both Bugcrowd and Hackerone have published reports that show a similar trend.
In their 2017 report, Bugcrowd said "cross-site scripting (XSS) and Cross Site Request Forgery (CSRF) remain the most reported submissions across industries accounting for 25% and 7% of submissions respectively. This distribution very closely reflects last year’s findings (XSS 25% and CSRF 8%)."
Hackerone went further in their report, and broke the vulnerability stats down by industry, saying that “in all industries except for financial services and banking, cross-site scripting (XSS, CWE-79) was the most common vulnerability type discovered by hackers using the HackerOne platform. For financial services and banking, the most common vulnerability was improper authentication (CWE-287). Healthcare programs have a notably high percentage of SQL injection vulnerabilities (6%) compared to other industries during this time period”.


This data confirms what we’re seeing at Mozilla, but to be fair, not everyone agrees. Whitehat also published their own report where XSS only ranks third, after insufficient transport layer protection and information leakage. Still, even in this report, XSS ranks higher than authentication/authorization and injection issues.


All three sources that show XSS as the most prevalent issue come from bug bounty programs, and there is a strong chance that bug bounty reporters are simply more focused on XSS than other attacks. That said, when looking at modern web applications (and Mozilla’s web applications are fairly modern), it is rare to find issues in the way authentication and authorization are implemented. Most modern web frameworks have stable and mature support for authentication, authorization, access controls, session management, etc. There’s also a big trend of relying on external auth providers with SAML or OpenID Connect, which removes implementation bugs we saw even 4 or 5 years ago. What about non-XSS injections? We don’t get that many either. In the handful of services that purposely accept user data, we’ve been paranoid about preventing vulnerabilities, and it seems to have worked well so far. The data we get from security audits, outside of bug bounties, seems to confirm that trend.


In comparison, despite continued efforts from the security community to provide safe escaping frameworks like Cure53’s DOMPurify or Mozilla’s Bleach, web applications are still bad at escaping user-provided content. It’s hard to blame developers here, because the complexity of both the modern web and large applications is such that escaping everything all the time is an impossible goal. As such, the rate of XSS in web applications has steadily increased over the last few years.
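For readers who haven't used one of these libraries, here is a minimal sketch of server-side sanitization with Bleach in Python; the tag and attribute whitelist is purely illustrative, not a recommendation:

    # Minimal sketch: sanitizing user-provided HTML with Mozilla's Bleach.
    # The whitelist is illustrative; real applications tune it per field.
    import bleach

    user_comment = '<b>hello</b><script>alert("xss")</script>'
    safe = bleach.clean(
        user_comment,
        tags={"b", "i", "a"},
        attributes={"a": ["href"]},
    )
    print(safe)  # the <script> element comes back escaped, not executable

Anything not on the whitelist is escaped or stripped, which is exactly the property that is so hard to enforce by hand across every template of a large application.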


What about Content Security Policy? It helps, for sure. Before we enabled CSP on addons.mozilla.org, we had perhaps one or two XSS reports every month. After enabling it, we hardly get one or two per year. For sure, CSP bypasses are possible, but they are not straightforward to achieve, and the policy is often sufficient to fend off an attacker (see the screenshot from a security audit report below). The continued stream of XSS reports we receive comes from older applications that do not use CSP, and the data is a strong signal that we should continue pushing for its adoption.

[Screenshot: excerpt from a security audit report]
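Deploying a policy can be as simple as adding one response header to every page. Here is a minimal sketch in a Flask-style Python app; the policy itself is a deliberately simple illustration, not the one used on addons.mozilla.org:

    # Minimal sketch: attaching a Content-Security-Policy header to every
    # response. The policy is illustrative; real policies are tuned per app.
    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def add_csp(response):
        response.headers["Content-Security-Policy"] = (
            "default-src 'none'; script-src 'self'; "
            "style-src 'self'; img-src 'self'"
        )
        return response

    @app.route("/")
    def index():
        return "hello"

With a policy like this, an inline script injected through an escaping bug simply refuses to run in a modern browser, which is what turns so many would-be XSS reports into non-issues.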
So, how do we explain the discrepancy between what we’re seeing at Mozilla, Bugcrowd and Hackerone, and what other organizations are reporting as top vulnerabilities? My guess is that a lot of vendors are reviewing very old applications that are still vulnerable to issues we’ve solved in modern frameworks, while Mozilla, Bugcrowd and Hackerone mostly see modern apps. Another possibility is that those same vendors have no solutions to XSS, but plenty of commercial solutions to other issues, and thus give them more weight as a way to promote their products. Or we could simply all have bad data and be drawing the wrong conclusions.

Regardless of what is causing this discrepancy, there’s evidently a gap between what we’re seeing as the most prevalent issues, and what the rest of the security community, and particularly the OWASP Top 10, is reporting. Surely, this is going to require more digging, so if you have data, please do share it, so we can focus security efforts on the right issues!



Thank you to Greg Guthe and Jonathan Claudius for reviewing drafts of this blog post

Monday, September 11 2017

Lessons learned from mentoring

Over the last few weeks, a number of enthusiastic students have asked me when registration for the next edition of Mozilla Winter of Security would open. I've been saddened to inform them that there won't be an edition of MWoS this year. I understand this is disappointing to many who were looking forward to working on cool security projects alongside experienced engineers, but the truth is we simply don't have the time, resources and energy to mentor students right now.


Firefox engineers are cramming through bugs for the Firefox 57 release, planned for November 14th. We could easily say "sorry, too busy making Firefox awesome, kthnksbye", but there is more to the story of not running MWoS this year than the release of 57. In this blog post, I'd like to explore some of these reasons, and maybe share tips with folks who would like to become mentors.


After running MWoS for 3 years, engaging with hundreds of students and personally mentoring about a dozen, I learned two fundamental lessons:

  1. The return on investment is extremely low, when it's not a direct loss to the mentor.
  2. Student engagement is very hard to maintain, and many students are just in it for the glory.
Those are hard-learned lessons that somewhat shattered my belief in mentoring. Let's dive into each.

Return on investment

Many mentors will tell you that having an altruistic approach to mentoring is the best way to engage with students. That's true for short engagements, when you spare a few minutes to answer questions and give guidance, but it's utter bullshit for long engagements.
It is simply not realistic to ask engineers to invest two hours a week over four months without getting something out of it. Your time is precious, have some respect for it. When we initially structured MWoS, we made sure that each party (mentors, students and professors) would get something out of it, specifically:
  • Mentors get help on a project they would not be able to complete alone.
  • Students get a great experience and a grade as part of their school curriculum.
  • Professors get interesting projects and offload the mentoring to Mozilla.
Making sure that students received a grade from their professors helped maintain their engagement (but only to some extent, more on that later), and ensured professors approved of the cost a side project would impose on their very busy students.
The part that mattered a lot for us, mentors, besides helping train the next generation of engineers, was getting help on projects we couldn't complete ourselves. After running MWoS for three years and over a few dozen projects, the truth is we would have been better off writing the code ourselves in the majority of cases. The time invested in teaching students would have been better used implementing the features we're looking for, because even when students completed their projects, the code quality was often too low for the features to be merged without significant rewrites.

There have been exceptions, of course, and some teams have produced code of good quality. But those have been the exceptions, not the rule. The low return on investment (and often negative return when mentors invested time into projects that did not complete) meant that it became increasingly hard for busy engineers to convince their managers to dedicate 5 to 10% of their time to supporting teams that would likely produce low quality code, if any code at all.
It could be said that we sized our projects improperly, and made them too complex for students to complete. It's a plausible explanation, but at the same time, we have not observed a correlation between project complexity and completion. This leads into the next point.

Student engagement is hard to maintain

You would imagine that a student who is given the opportunity to work with Mozilla engineers for several months would be incredibly engaged, and would drop everything for the opportunity to work on interesting, highly visible, very challenging projects. We've certainly seen students like that, and they have been fantastic to work with. I remain friends with a number of them, and it's been rewarding to see them grow into accomplished professionals who know way more about the topics I mentored them on than I do today. Those are the good ones. The exceptions. The ones that keep on going when your other mentoring projects keep on failing.

And then, you have the long tail of students who have very mixed interest in their projects. Some are certainly overwhelmed by their coursework and have little time to dedicate to their projects. I have no issue with overwhelmed students, and have repeatedly told many of my mentees to prioritize their coursework and exams over MWoS projects.

The ones that rub me the wrong way are students who are more interested in getting into MWoS than in actually completing their projects. This category of resume-padding students cares more about the notoriety of the program than the work they accomplish. They are very hard to notice at first, but after a couple of years of mentoring, you start to see the patterns: eagerness to name-drop, a GitHub account filled with forks of projects and no authored code, vague technical answers to interview questions, constant mention of their references and people they know, etc.
When you mentor students that are just in it for the glory, the interest in the project will quickly drop. Here's how it usually goes:
  • By week 2, you'll notice students have no plan to implement the project, and you find yourself holding their hands through the roadmap, sometimes explaining concepts so basic you wonder how they could not be familiar with them yet.
  • By week 4, students are still "going through the codebase to understand how it is structured", and have no plan to implement the project yet. You spend meetings explaining how things work, and grow frustrated by their lack of research. Did they even look at this since our last meeting?
  • By week 6, you're pretty much convinced they only work on the project for 30min chunks when you send them a reminder email. The meetings are becoming a drag, a waste of a good half hour in your already busy week. Your tone changes and you become more and more prescriptive, less and less enthusiastic. Students nod, but you have little hope they'll make progress.
  • By week 8, it's the mid-term, and no progress is made for another month.
You end up cancelling the weekly meeting around week 10, and ask students to contact you when they have made progress. You'll hear back from them 3 months later because their professor is about to grade them. You wonder how that's going to work, since the professor never showed up to the weekly meeting, and never contacted you directly for an assessment. Oh well, they'll probably get an A just because they have Mozilla written next to their project...

This is a somewhat overly dramatic account of a failed engagement, but it's not at all unrealistic. In fact, in the dozen projects I mentored, this probably happened on half of them.
The problem with lowly-engaged students is that they drain your motivation away. There is a particular light in the eye of the true nerd-geek-hacker-engaged-student that makes you want to work with them and guide them through their mistakes. That's the reward of a mentor, and it is always missing from students who are not engaged. You learn to notice it after a while, but often long after the damage done by the opportunists has taken away your interest in mentoring.

Will MWoS rise from the ashes?

The combination of low return on investment and poorly engaged students, in addition to a significant increase in workload, made us cancel this year's round. Maybe next year, if we find the time and energy, we will run MWoS again. It's also possible that other folks at Mozilla, and in other organizations, will run similar programs in the future. Should we run it again, we would be a lot stricter about filtering students, and make sure they are ready to invest a lot of time and energy into their projects. This is fairly easy to do: throw them a challenge during the application period, and check the results. "Implement a crude Diffie-Hellman chat on UDP sockets, you've got 48 hours", or anything along those lines, along with a good one-hour conversation, ought to do it. We were shy about asking those questions at first, but it became obvious over the years that stronger filtering was desperately needed.
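For a sense of scale, the cryptographic core of such a challenge fits in a dozen lines. Here is a hedged sketch with toy parameters; the UDP plumbing, and the large safe prime a real solution would need, are deliberately left out:

    # Toy Diffie-Hellman key agreement, for illustration only.
    # p=23 and g=5 are textbook values; a real solution would use a large
    # safe prime (e.g. an RFC 3526 group) and authenticate the exchange.
    import secrets

    p, g = 23, 5

    def keypair():
        priv = secrets.randbelow(p - 2) + 1  # secret exponent in [1, p-2]
        pub = pow(g, priv, p)                # value exchanged over the socket
        return priv, pub

    alice_priv, alice_pub = keypair()
    bob_priv, bob_pub = keypair()

    # Both sides derive the same shared secret from the other's public value.
    assert pow(bob_pub, alice_priv, p) == pow(alice_pub, bob_priv, p)

A candidate who can wire that exchange into two UDP peers and derive a session key from the shared secret within 48 hours has probably got the kind of engagement the program needs.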

For folks looking to mentor, my recommendation is to open your organization to internships before you do anything else. There's a major difference in productivity between interns and students, mostly because you control 100% of an intern's daily schedule, and can make sure they are working on the tasks you assign them. Interns often complete their projects and provide direct value to the organization. The same cannot be said of MWoS mentees.

Wednesday, January 18 2017

Video-conferencing the right way

I work from home. I have been doing so for the last four years, ever since I joined Mozilla. Some people dislike it, but it suits me well: I get the calm, focused, relaxing environment needed to work on complex problems all in the comfort of my home.

Even given the opportunity, I probably wouldn't go back to working in an office. For the kind of work that I do, quiet time is more important than high bandwidth human interaction.

Yet, being able to talk to my colleagues and exchange ideas or solve problems is critical to being productive. That's where the video-conferencing bit comes in. At Mozilla, we use Vidyo primarily, sometimes Hangouts, and more rarely Skype. We spend hours every week talking to each other via webcams and microphones, so it's important to do it well.

Having a good video setup is probably the most important and yet least regarded aspect of working remotely. When you start at Mozilla, you're given a laptop and a Vidyo account. No one teaches you how to use it. Should I get an external webcam or use the one on my laptop? Do I need headphones, earbuds, a headset with a microphone? What kind of bandwidth does it use? Those things are important to good telepresence, yet most of us only learn them after months of remote work.

When your video setup is the main interface between you and the rest of your team, spending a bit of time doing it right is far from wasted. The difference between a good microphone and a shitty little one, or a quiet room and taking calls from the local coffee shop, influences how much your colleagues will enjoy working with you. I'm a lot more eager to jump on a call with someone I know has good audio and video than with someone who will drag me into 45 minutes of ambient noise and coughing into their microphone.

This is a list of tips and things you should care about, for yourself and for your coworkers. They will help you build a decent setup with little to no investment.

The place

It may seem obvious, but you shouldn't take calls from a noisy place. Airports, coffee shops, public libraries, etc. are all horribly noisy environments. You may enjoy working from those places, but your interlocutors will suffer from all the noise. Nowadays, I refuse to take calls and cut meetings short when people try to force me into listening to their surroundings. Be respectful of others and take meetings from a quiet space.

Bandwidth

Despite what ISPs are telling you, no one needs 300Mbps of upstream bandwidth. Take a look at the graph below. It measures the egress point of my gateway. The two yellow spikes are video meetings. They don't even reach 1Mbps! In the middle of the second one, there's a short spike at 2Mbps when I set Vidyo to send my stream at 1080p, but I quickly reverted because that software is broken and the faces of my coworkers disappeared. Still, you get the point: 2Mbps is the very maximum you'll need for others to see you, and about the same amount is needed to download their streams.

You do want to be careful about ping: latency can increase up to 200ms without issue, but even 5% packet loss is enough to make your whole experience miserable. Ask Tarek what bad connectivity does to your productivity: he works from a remote part of France where bandwidth is scarce and latency is high. I dubbed him the inventor of the Tarek protocol, where you have to repeat each word twice for others to understand what you're saying. I'm joking, but the truth is that it's exhausting for everyone. Bad connectivity is tough on remote workers.

(Tarek thought it'd be worth mentioning that he tried to improve his connectivity by subscribing to a satellite connection, but ran into issues in the routing of his traffic: 700ms latency was actually worse than his broken DSL.)

Microphone

Perhaps the single most important aspect of video-conferencing is the quality of your microphone and how you use it. When everyone is wearing headphones, voice quality matters a lot. It is the difference between a pleasant 1h conversation, or a frustrating one that leaves you with a headache.

Rule #1: MUTE!

Let me say that again: FREAKING MUTE ALREADY!

Video-conferencing software is terrible at routing the audio of several people at the same time. This isn't the same as a meeting room, where your brain will gladly separate the voice of someone you're speaking to from the keyboard of the dude next to you. On video, everything is at the same volume, so when you start answering that email while your colleagues are speaking, you're pretty much taking over their entire conversation with keyboard noises. It's terrible, and there's nothing more annoying than having to remind people to mute every five goddamn minutes. So, be a good fellow, and mute!

Rule #2: no coughing, eating, breathing, etc. It's easy enough to mute or move your microphone away from your mouth; your colleagues shouldn't have to hear you breathing like a marathoner who just finished the Olympics. We're going back to rule #1 here.

Now, let's talk about equipment. A lot of people neglect the value of a good microphone, but it really helps in conversations. Don't use your laptop microphone, it's crap. And so is the mic on your earbuds (yes, even the Apple ones). Instead, use a headset with a microphone.

If you have a good webcam, it's somewhat ok to use the microphone that comes with it. The Logitech C920 is a popular choice. The downside of those mics is they will pick up a lot of ambient noise and make you sound distant. I don't like them, but it's an acceptable trade-off.

If you want to go all out, try one of those fancy podcast microphones, like the Blue Yeti.

You most definitely don't need that for good mic quality, but they sound really nice. Here's a recording comparing the Plantronics headset, the Logitech C920 and the Blue Yeti.

Webcam

This part is easy because most laptops already come with a 720p webcam that provides decent video quality. I do find that the Logitech renders colors and depth better than the webcam embedded in my Lenovo Carbon X1, but the difference isn't huge.

The most important part of your webcam setup should be its location. It's a bit strange to have someone talk to you without looking straight at you, but this is often what happens when people place their webcam to the side of their screen.

I've experimented a bit with this, and my favorite setup is to put the webcam right in the middle of my screen. That way, I'm always staring right at it.

It does consume a little space in the middle of my display, but with a large enough screen - I use an old 720p 35" TV - it doesn't really bother me.

Lighting and background are important parameters too. Don't bring light from behind, or your face will look dark, and don't use a messy background so people can focus on what you're saying. These factors contribute to helping others read your facial expressions, which are an important part of good communication. If you don't believe me, ask Cal Lightman ;).

Spread the word!

In many ways, we're the first generation of remote workers, and people are still learning how to do it right. I believe video-conferencing is an important part of that process, and I think everyone should take a bit of time to improve their setup. Ultimately, we're all a lot more productive when communication flows easily, so spread the word, and do tell your coworkers when their setup is getting in the way of good conferencing.

Thursday, August 4 2016

TLS stats from 1.6 billion connections to mozilla.org

One of the most challenging tasks in editing Server Side TLS is figuring out which ciphersuites are appropriate at which levels. Over the years, we've made judgment calls based on our experience and understanding of the TLS ecosystem, but finding hard data about clients in the wild is always difficult.


For the next revision of the guidelines, I wanted to collect real-world data to decide if we could prune unused ciphersuites from the Intermediate configuration. People often complain that the list of ciphersuites is too long. So far, we've taken a conservative approach, preferring a long list of ciphersuites that accepts all clients over a restrictive one that could break the Internet for a minority of users on unusual devices.


Collecting TLS statistics is actually harder than one might think. Modern infrastructures terminate TLS ahead of application servers and don't always provide ciphersuite information in their logs. We run most of our services in AWS behind ELBs, so there's no way to collect statistics at scale.


Last year, we moved mozilla.org behind Cloudflare to support certificate switching and continue serving our users on very old systems. A few months ago, Cloudflare added TLS protocol and ciphersuite information to their access logs, so we finally had a solid data source to evaluate client support.


Mozilla.org is an excellent target for evaluating client diversity because it receives traffic from all sorts of devices from all over the world. It's not an opinionated site that only a certain type of person would visit. It's not region or language specific (the site supports dozens of languages). And it's the main entry point for downloading Firefox.


I collected logs from Cloudflare intermittently over the course of a few weeks, to get an evenly distributed sample of client connections. That data is represented in the table below.


Percentage Hits Protocol Ciphersuite
80.300% 1300142157 TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256
9.900% 160597128 TLSv1 ECDHE-RSA-AES128-SHA
2.800% 45350538 TLSv1 DES-CBC3-SHA
2.500% 42058051 TLSv1.2 ECDHE-RSA-CHACHA20-POLY1305
2.000% 33972517 TLSv1.2 ECDHE-RSA-AES128-SHA256
0.800% 13869096 none NONE
0.400% 6709309 TLSv1.2 ECDHE-RSA-CHACHA20-POLY1305-D
0.200% 4311348 TLSv1 AES128-SHA
0.200% 3629674 SSLv3 DES-CBC3-SHA
0.100% 3155150 TLSv1.1 ECDHE-RSA-AES128-SHA
0.100% 1968795 TLSv1.2 AES128-GCM-SHA256
0.000% 1110501 SSLv3 AES128-SHA
0.000% 860476 TLSv1.2 ECDHE-RSA-AES128-SHA
0.000% 540913 TLSv1.2 AES128-SHA256
0.000% 139800 SSLv3 ECDHE-RSA-AES128-SHA
0.000% 83537 TLSv1.2 AES128-SHA
0.000% 77433 TLSv1.1 AES128-SHA
0.000% 16728 TLSv1.2 DES-CBC3-SHA
0.000% 5550 TLSv1.2 ECDHE-RSA-DES-CBC3-SHA
0.000% 2836 TLSv1.2 AES256-SHA256
0.000% 2050 TLSv1.2 ECDHE-RSA-AES256-SHA
0.000% 1421 TLSv1 ECDHE-RSA-DES-CBC3-SHA
0.000% 570 TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384
0.000% 386 TLSv1 ECDHE-RSA-AES256-SHA
0.000% 141 TLSv1.2 AES256-SHA
0.000% 128 TLSv1 AES256-SHA
0.000% 66 TLSv1.3 ECDHE-RSA-AES128-GCM-SHA256
0.000% 32 TLSv1.2 ECDHE-RSA-AES256-SHA384
0.000% 24 TLSv1.1 DES-CBC3-SHA
0.000% 8 SSLv3 AES256-SHA
0.000% 8 SSLv3 ECDHE-RSA-AES256-SHA
0.000% 8 SSLv3 ECDHE-RSA-DES-CBC3-SHA
0.000% 8 TLSv1.1 AES256-SHA
0.000% 8 TLSv1.1 ECDHE-RSA-AES256-SHA
0.000% 8 TLSv1.1 ECDHE-RSA-DES-CBC3-SHA
0.000% 8 TLSv1.2 AES256-GCM-SHA384
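For reference, a table like this can be produced with a straightforward aggregation over the access logs. Here is a minimal sketch of that kind of counting in Python; the JSON field names are hypothetical, not Cloudflare's actual log schema:

    # Minimal sketch: count (protocol, ciphersuite) pairs in JSON access logs
    # and print them as percentages. Field names are hypothetical.
    import json
    from collections import Counter

    counts = Counter()
    with open("mozorg-access.log") as logfile:
        for line in logfile:
            entry = json.loads(line)
            key = (entry.get("tlsProtocol", "none"),
                   entry.get("tlsCipher", "NONE"))
            counts[key] += 1

    total = sum(counts.values())
    for (proto, cipher), hits in counts.most_common():
        print(f"{100 * hits / total:.3f}% {hits} {proto} {cipher}")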

Unsurprisingly, ECDHE-RSA-AES128-GCM-SHA256 accounts for over 80% of the traffic, as this ciphersuite is preferred by both Firefox and Chrome. It's good news that most of our users benefit from that level of security, but it doesn't help us understand backward compatibility challenges.


More interesting are the following two entries, which both negotiate TLSv1, with ECDHE-RSA-AES128-SHA (9.9%) and DES-CBC3-SHA (2.8%). Almost 13% of the traffic to mozilla.org is stuck in TLSv1 land. The presence of DES-CBC3-SHA in third position is a strong sign that we're nowhere near done supporting old clients that don't even know what AES is.


The stat I was most curious about is in 9th position: SSLv3 with DES-CBC3-SHA, which accounts for 0.2% of the traffic, is a signature of Windows XP pre-SP3 clients, from when SChannel didn't support TLSv1 or AES. 0.2% may seem insignificant, unless you're one of those users and the only way you will browse the Internet is by first downloading Firefox from mozilla.org. We certainly don't recommend that anyone enable SSLv3 on their site, unless they're in the very special category that needs backward compatibility with very old clients. Mozilla.org is one of those sites.


The rest of the stats are mostly what one would expect: a long list of randomly ordered ciphersuites from devices with a variety of preferences. ECDHE-RSA-CHACHA20-POLY1305 is in that list, but only at 2.5%. Cloudflare doesn't support any of the legacy ciphers like CAMELLIA or SEED, so we can't see if any of those are in use (I would expect them not to be, but who knows...). We can also assume that the handful of SSLv3 connections at the bottom are scanners, since I doubt we have many clients stuck in SSLv3 but with ECDHE-RSA-AES256-SHA support.


What are we missing?

The information we're missing the most is DHE support. Cloudflare doesn't enable it anymore, and it would be interesting to know how many clients out there will only negotiate DHE. I suspect most will fall back to some non-PFS AES-based ciphersuite, but proof would be nice. Ultimately, removing DHE from the Intermediate recommendations is a goal, given how difficult it's been for operators to generate DHE parameters securely.


Statistics on ECDSA usage would also be nice. We currently use RSA certificates for mozilla.org, but we're more and more recommending ECDSA certs instead (P-256 is preferred in the Modern configuration level). An interesting experiment would be to perform cert switching between RSA SHA1, RSA SHA256 and ECDSA SHA256.


In conclusion

This data is good enough to start drafting the next edition of Server Side TLS. We'll look at removing DHE support from the Intermediate configuration, and maybe limit the non-PFS ciphersuites to one or two AES fallbacks. It would appear, however, that DES-CBC3-SHA is here to stay for a little while longer...

