Tuesday, February 5 2019

Interviewing tips for junior engineers

I was recently asked for tips about working in IT in the US by the brother of a friend who is about to graduate. His situation is not entirely dissimilar to mine: a foreigner with a permit to work in America. Below is my reply to him, which I hope will be helpful to other young engineers in similar situations.

For background, I've been working in the US since early 2011. I had a few years of experience as a security engineer in Paris when we moved. I first took a job as a systems engineer while waiting for my green card, then joined a small tech company in the email marketing space to work on systems and security, then joined Mozilla in 2013 as a security engineer. I've been at Mozilla for almost six years, and now run the Firefox operations security team, a group of six people scattered across the US, Canada and the UK. I've been hiring engineers for a few years in various countries.


Are there any skills or experiences beyond programming required to be an interesting candidate for foreign employers?

You're just getting started in your career, so employers will mostly look for technical proficiency and being a pleasant person to work with. Expectations are fairly low at this stage. If you can solve technical puzzles and people enjoy talking to you, you can pass the bar at most companies.

This changes after 5-ish years of experience, when employers want to see project management, technical leadership, and maybe a hint of people management too. But for now, I wouldn't worry about it.

I would say the most important thing is to have a plan: where do you want to be 10/15/20 years from now? My aim was toward Chief Security Officer roles, an executive position that requires technical skills, strategic thinking, communication, risk and project management, etc. It takes 15 to 20 years to get to that level, so I picked jobs that progressively gave me the experience needed (and that were a lot of fun, because I'm a geek).


Were there any challenges that you faced, other than immigration, that I may need to be aware of as a foreign candidate?

Immigration is the only problem to solve. When I first applied for jobs in the US, I was on a J-1 visa doing my Master's internship at the University of Maryland. I probably applied to 150 jobs and didn't get a single reply, most likely because no one wants to hire candidates who need a visa (a long and expensive process). I ended up going back to France for a couple of years, and came back after obtaining my green card, which took the immigration question out of the way. So my advice here is to settle the immigration question in the country where you want to work before you apply for jobs, or, if you need employer support to get a visa, to be up front about it when talking to them. (I have several former students who now work for US companies whose hiring processes incorporate visa applications, so it's not unheard of, but the bar is high.)

I have a Master of Science from a small French university that is completely unknown in the US, yet that never came up as an issue. A Master is a Master. Should degree equivalence come up during an interview, offer to provide a grade comparison between the US GPA scale and your country's grades (some paid services provide that). That should put employers at ease.

Language has also never been an issue for me, even with my strong French accent. If you're good at the technical stuff, people won't pay attention to your accent (at least in the US). And I imagine you're fully fluent anyway, so having a deep-dive architecture conversation in English won't be a problem.


And lastly, do you have any advice on how to stand out from the rest of the candidates aside from just a good resume (maybe some specific volunteer experience or unique skills that I could gain while I am still finishing my thesis?)

Resumes are mostly useless. I spend between 30 and 60 seconds on a candidate's resume. The problem is that most people pad their resumes with buzzwords and fancy-sounding projects I cannot verify, and thus cannot trust. It would seem the length of a resume is inversely proportional to the actual skills of a candidate. Recruiters use resumes to check for minimal requirements: the right level of education, knowledge of programming language X, the right amount of experience. Engineering managers will instead focus on actual technical questions to assess your skills.

At your level, keep your resume short. My rule of thumb is one page per five years of experience (you should have a single page; I have three). This might contradict advice you're reading elsewhere that recommends putting your entire life story in your resume, so if you're concerned about not having enough details, make two versions: a one-page overview (LinkedIn-style), and a longer version hosted on your personal site. Offer a link to the longer version in the short one, so people can check it out if they want to, though they most likely won't. Recruiters have to go through several hundred candidates for a single position, so they don't have the time for, or interest in, your life story. (Someone made a short version of Marissa Mayer's resume that I think speaks volumes.)

Make sure to highlight any interesting projects you worked on, technical or otherwise. Recruiters love discussing actual accomplishments. Back when I started, I had a few open source projects and articles in technical magazines that I put on my resume. Nowadays, that would be a GitHub profile with personal (or professional, if you're lucky) projects. You don't need to rewrite the Linux kernel, but if you can publish a handful of tools you developed over the years, it'll help validate your credentials. Just don't fork fancy projects to pad your GitHub profile; it won't fool anyone (I know, it sounds silly, but I see it all too often).

Another thing recruiters love is Hackerrank, a coding challenge website used by companies to verify the programming skills of prospective candidates. It's very likely US companies will send you some sort of coding challenge as part of the interview process (we even do it before talking to candidates nowadays). My advice is to spend a few weekends building a profile on Hackerrank and getting used to the type of puzzles they ask. This is similar to what the GAFA ask in technical interviews ("quicksort on a whiteboard" type of questions).
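These puzzles tend to reward knowing the classic data structures. As an illustration (my own toy example, not an actual Hackerrank problem), a typical warm-up asks you to count the pairs in a list that sum to a target, and to do better than the O(n²) brute force:

```python
def count_pairs(numbers, target):
    """Count pairs (i, j), i < j, whose values sum to target.

    A single pass with a frequency table runs in O(n), which is the
    kind of improvement over brute force these sites look for.
    """
    seen = {}   # value -> occurrences so far
    pairs = 0
    for n in numbers:
        pairs += seen.get(target - n, 0)   # every earlier complement forms a pair
        seen[n] = seen.get(n, 0) + 1
    return pairs

print(count_pairs([1, 4, 2, 3, 5, 0], 5))  # 3 pairs: (1,4), (2,3), (5,0)
```

Being able to explain the complexity trade-off out loud matters as much as producing the answer.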

At the end of the day, I expect a junior engineer to be smart and excited about technology, if not somewhat easily distracted. Those are good qualities to show during an interview and on your resume.

My last piece of advice would be to pick a path you want to follow for the next few years and be very clear about it when interviewing. You should have a goal. You have a lot of degrees, and recruiters will ask you what you're looking for. So if you want to go into the tech world, be up front about it and tell them you want to focus on engineering for the foreseeable future. In my experience, regardless of your level of education, you need to start at the bottom of the ladder and climb your way up. A solid education will help you climb a lot faster than other folks, and you could reach technical leadership in just a couple of years at the right company, or move to a large corporation and use that fancy degree to climb the management path. In either case, I would recommend getting those first couple of years of programming and engineering work under your belt first. Heck, even the CTO of Microsoft started as a mere programmer!



I hope this helps folks who are getting started. And I'm always happy to answer questions from junior engineers, so don't hesitate to reach out!

Thursday, January 17 2019

Maybe don't throw away your VPN just yet...

Over the past few years I've followed the rise of the BeyondCorp project, Google's effort to move away from perimeter-based network security to identity-based access controls. The core principle of BeyondCorp is to require strong authentication to access resources rather than relying on the source IP a connection originates from. "Don't trust the network" and "authenticate every access" are requirements in a world where your workforce is highly distributed and connects to privileged resources from untrusted networks every day. They are also a defense against office and datacenter networks that are rarely secure enough for the data they give access to. BeyondCorp, and zero trust networks in general, are good for security.

This isn't new. Most modern organizations have completely moved away from trusting source IPs and rely on authentication to grant access to data. But BeyondCorp goes further by recommending that your entire infrastructure should have a foot on the Internet and protect access using strong authentication. The benefits of this approach are enormous: employees can be fully mobile and continue to access privileged resources, and compromising an internal network is no longer sufficient to compromise the entire organization.

As a concept, this is good. And if you're hosting on GCP or are willing to proxy your traffic through GCP, you can leverage their Identity and Access Proxy to implement these concepts securely. But what about everyone else? Should you throw away your network security and put all your security in the authentication layer of your applications? Maybe not...

At Mozilla, we've long adopted single sign on, first using SAML, nowadays using OpenID Connect (OIDC). Most of our applications, both public facing and internal, require SSO to protect access to privileged resources. We never trust the network and always require strong authentication. And yet, we continue to maintain VPNs to protect our most sensitive admin panels.

"How uncool", I hear you object, "and here we thought you were all about DevOps and shit". And you would be correct, but I'm also pragmatic, and I can't count the number of times we've had authentication bugs that let our red team or security auditors bypass authentication. The truth is, even highly experienced programmers and operators make mistakes and will let a bug disable or fail to protect part of that one super sensitive page you never want to leave open to the internet. And I never blame them because SSO/OAuth/OIDC are massively complex protocols that require huge libraries that fail in weird and unexpected ways. I've never reached the point where I fully trust our SSO, because we find one of those auth bypass every other month. Here's the catch: they never lead to major security incidents because we put all our admin panels behind a good old VPN.

Those VPNs that no one likes to use or maintain (me included) also provide a stable and reliable security layer that simply never fails. They are far from perfect, and we don't use them to authenticate users or grant access to resources, but we do use them to cover our butts when the real authentication layer fails. So far, real-world experience continues to support this model.

So, there, you have it: adopt BeyondCorp and zero trust networks, but also consider keeping your most sensitive resources behind a good old VPN (or an SSH jumphost, whatever works for you). VPNs are good at reducing your attack surface and adding an extra layer of protection to your infrastructure. You'll be thankful to have one the next time you find a bypass in your favorite auth library.
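To make the layering concrete, here is a minimal sketch of that model, assuming a hypothetical VPN subnet and a stand-in `oidc_session_valid` helper in place of a real SSO library: access to an admin route requires both a valid SSO session and a source IP inside the VPN range, so a bug in either layer alone doesn't expose the panel.

```python
import ipaddress

VPN_NET = ipaddress.ip_network("10.8.0.0/16")  # hypothetical VPN address range

def oidc_session_valid(session):
    # Stand-in for a real OIDC/SSO library check.
    return session.get("authenticated") is True

def allow_admin(session, remote_addr):
    """Grant access only when BOTH layers agree.

    If the SSO layer has a bypass bug, the VPN check still blocks
    internet-originated requests, and vice versa.
    """
    on_vpn = ipaddress.ip_address(remote_addr) in VPN_NET
    return oidc_session_valid(session) and on_vpn

# A valid session arriving from the open internet is still rejected.
print(allow_admin({"authenticated": True}, "203.0.113.7"))  # False
print(allow_admin({"authenticated": True}, "10.8.12.34"))   # True
```

The point of the sketch is the `and`: neither layer is trusted on its own, which is exactly the cover-your-butt property described above.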

Monday, August 6 2018

Running with Runscribe Plus


Six months ago, I broke the bank and bought a pair of Runscribe Plus running pods. I was recovering from a nasty sprain of the deltoid ligaments acquired a year earlier while running a forest trail, and was looking for ways to reduce the risk of further damage during my runs.

Also, I love gadgets, so that made for a nice excuse! :)

After reading various articles about the value of increasing step rate to decrease the risk of injury, I looked into various footpod options, the leader of which is the Stryd, but I wanted to monitor both feet, which only the Runscribe Plus can do. So I did something I almost never do: order a gadget that hasn't been heavily reviewed by many others (with the exception of the5krunner).

Runscribe Plus, which I'll abbreviate RS, is a sensor that monitors the movement and impact of your feet while you run. It measures:

  • pronation: rolling of the foot, particularly useful to prevent sprains
  • footstrike: where your foot hits the ground (heel, middle or front)
  • shock: how much force your feet hit the ground with
  • step rate
  • stride length
  • contact time
  • and a bunch of calculations based on these raw metrics
My RS arrived less than a week after ordering, but I couldn't use them right away. After several hours of investigation and back-and-forth by email with the founder, Tim Clark, we figured out that my pods had shipped with a bogus firmware. He remotely pushed a new version, which I applied using the Android app, and the pods started working as intended.
Usability is great. RS starts recording automatically when the step rate goes above 140 (I usually run around 165spm), and also stops automatically at the end of a run. The Android app then downloads running data from each pod and uploads it to the online dashboard. Both the app and the webui can be used to go through the data, and while the app is fine for visualizing data, I find the webui a lot more powerful and convenient to use.
[screenshot: RunScribe dashboard]
The cool thing about RS is being able to compare left and right foot, since each foot is measured separately. This is useful to detect and correct balance issues. In my case, I noticed after a few runs that my injured foot, the left one, was a lot less powerful than the right one. It was still painful, and I simply couldn't push on it as much, so the right foot was compensating and taking a lot more shock. I made a conscious effort to reduce this imbalance over the following month, and it seems to have paid off in the end.
[screenshot: RunScribe dashboard]

The RunScribe Dashboard displays shock data for each foot recorded during a 5k. The dark red line represents the right foot and is taking a lot more shock than the light red one representing the left foot.

It's possible to use the RS to measure distance, but a lot of users on the forum have complained about distance accuracy issues. I've run into some of those, even after calibrating the pods to my stride length over a dozen runs. I would go for a 5-mile run with my GPS watch and RS would measure anything between 4 and 6 miles. RS doesn't have a GPS, so it bases the calculation on your stride length and step count. Those inaccuracies didn't really bother me, because you can always update the distance in the app or webui after the fact, which also helps train the pod, and I am more interested in other metrics anyway.
That being said, the distance inaccuracy is now completely gone. According to Garmin, this morning's run was 8.6 miles, which RS recorded as 8.5 miles. That's a 1% margin of error, and I honestly can't tell which of RS and Garmin is right.
So what changed? I was previously placing the pods on the heels of my shoes but recently moved to the laces, which may have helped. I also joined the beta program to get early firmware updates, and I think Tim has been tweaking distance calculation quite a bit lately. At any rate, this is now working great.
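The arithmetic behind that estimate is simple, which is also why stride calibration matters so much; a toy sketch (illustrative numbers, not RunScribe's actual algorithm):

```python
def estimated_distance_miles(step_count, stride_length_m):
    """Without GPS, distance is just steps times stride length."""
    METERS_PER_MILE = 1609.344
    return step_count * stride_length_m / METERS_PER_MILE

# 8000 steps at a calibrated 1.0 m stride:
print(round(estimated_distance_miles(8000, 1.0), 2))  # 4.97
# The same run with the stride misjudged by 10%:
print(round(estimated_distance_miles(8000, 1.1), 2))  # 5.47
```

A 10% stride error turns directly into a half-mile error on a 5-mile run, which matches the kind of spread I was seeing before calibration improved.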
[screenshot: RunScribe dashboard]
RS can also broadcast live metrics to your running watch, which can then be displayed on their own screen. I don't find those to be very useful, so I don't make use of it, but it does provide real-time shock and step rate and what not.

What about Power?

I'll be honest, I have no idea. Running power doesn't seem extremely useful to me, or maybe I need to spend more time studying its value. RS does expose a Power value, so if this is your thing, you may find it interesting.

Take Away

RS is definitely not for everyone. It has its rough edges and exposes data you need to spend time researching to understand and make good use of. That said, whether you're a professional athlete or, like me, just a geek who likes gadgets and data, it's a fantastic little tool to measure your progress and tweak your effort in areas you wouldn't be able to identify on your own. I like it a lot, and I think more people will adopt this type of tool in the future.
Did it help with my ankle recovery? I think so. Tracking pronation and shock metrics was useful to make sure I wasn't putting myself at risk again. The imbalance data is probably the most useful information I got out of the RS that I couldn't get before, and definitely justifies going with a two-pod system instead of one. And, if nothing else, it helped me regain confidence in my ability to do long runs without hurting myself.
[screenshot: RunScribe dashboard]

Footstrike metrics for the left and right foot recorded during a half marathon show I mostly run on the middle and back of my feet. How to use that data is left as an exercise to the runner.

Last, but most certainly not least, Tim Clark and the Runscribe team are awesome. Even with the resources of a big shop like Garmin, it's not easy to take an experimental product through rounds of testing while maintaining a level of quality that satisfies runners accustomed to expensive gear ($700 watches, $200 shoes, etc.). For a small team to pull this off is a real accomplishment, all while staying accessible and building a friendly community of passionate runners. It's nice to support the underdog every once in a while, even when that means having to live with minor bugs and being patient between updates.
Note: this blog post was not sponsored by Runscribe in any way. I paid for my own pods and have no affiliation with them, other than being a happy customer.

Friday, September 22 2017

On the prevalence of cross-site scripting (XSS) attacks in modern web applications

As I attended AppSec USA in Orlando, a lot of discussions revolved around the OWASP Top 10. Setting the drama aside for a moment, there is an interesting discussion to be had about the most common vulnerabilities found in current web applications. The Top 10 lists from 2013 and 2017 (rc1) haven't changed much: first and foremost are injection issues, then broken authentication and session management, followed by cross-site scripting attacks.


At first glance, this categorization appears sensible, and security teams and vendors have depended on it for over a decade. But here's the issue I have with it: the data we collect from Mozilla's web bug bounty program strongly disagrees. From what we see, injection and authentication/session attacks are nowhere near as common as cross-site scripting attacks. In fact, throughout 2016 and 2017, we received over five times more XSS reports than any other vulnerability!


This is certainly a huge difference between our data and OWASP's, but Mozilla's dataset is also too small to draw generalities from. Thankfully, both Bugcrowd and Hackerone have published reports that show a similar trend.
In their 2017 report, Bugcrowd said "cross-site scripting (XSS) and Cross Site Request Forgery (CSRF) remain the most reported submissions across industries accounting for 25% and 7% of submissions respectively. This distribution very closely reflects last year’s findings (XSS 25% and CSRF 8%).".
Hackerone went further in their report, and broke the vulnerability stats down by industry, saying that “in all industries except for financial services and banking, cross-site scripting (XSS, CWE-79) was the most common vulnerability type discovered by hackers using the HackerOne platform. For financial services and banking, the most common vulnerability was improper authentication (CWE-287). Healthcare programs have a notably high percentage of SQL injection vulnerabilities (6%) compared to other industries during this time period”.


This data confirms what we're seeing at Mozilla, but to be fair, not everyone agrees. WhiteHat published their own report in which XSS only ranks third, after insufficient transport layer protection and information leakage. Still, even in that report, XSS ranks higher than authentication/authorization and injection issues.


All three sources that show XSS as the most prevalent issue come from bug bounty programs, and there is a strong chance that bug bounty reporters simply focus more on XSS than other attacks. That said, when looking at modern web applications (and Mozilla's web applications are fairly modern), it is rare to find issues in the way authentication and authorization are implemented. Most modern web frameworks have stable and mature support for authentication, authorization, access controls, session management, etc. There's also a big trend of relying on external auth providers with SAML or OpenID Connect, which removes implementation bugs we saw even 4 or 5 years ago. What about non-XSS injections? We don't get many of those either. In the handful of services that purposely accept user data, we've been paranoid about preventing vulnerabilities, and it seems to have worked well so far. The data we get from security audits, outside of bug bounties, seems to confirm that trend.


In comparison, despite continued efforts from the security community to provide safe escaping frameworks like Cure53’s DOMPurify or Mozilla’s Bleach, web applications are still bad at escaping user-provided content. It’s hard to blame developers here, because the complexity of both the modern web and large applications is such that escaping everything all the time is an impossible goal. As such, the rate of XSS in web applications has steadily increased over the last few years.
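The core discipline is output encoding: escape user-provided strings before they land in a page. A minimal sketch using only Python's standard library (allowlist sanitizers like Bleach or DOMPurify go further by permitting a safe subset of HTML):

```python
import html

def render_comment(author, comment):
    """Escape user input before interpolating it into an HTML template."""
    return "<p><b>{}</b>: {}</p>".format(html.escape(author), html.escape(comment))

# A classic XSS payload is rendered inert: the browser displays the
# text of the script tag instead of executing it.
payload = "<script>alert(document.cookie)</script>"
print(render_comment("mallory", payload))
# <p><b>mallory</b>: &lt;script&gt;alert(document.cookie)&lt;/script&gt;</p>
```

The hard part, as noted above, is not the escaping call itself but remembering to apply it in every one of the hundreds of places user data flows into markup, which is where templating frameworks with auto-escaping earn their keep.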


What about Content Security Policy? It helps, for sure. Before we enabled CSP on addons.mozilla.org, we had perhaps one or two XSS reports every month. After enabling it, we hardly get one or two per year. CSP bypasses are possible, of course, but they are not straightforward to achieve, and CSP is often sufficient to fend off an attacker (see screenshots from security audit reports below). The continued stream of XSS reports we receive comes from older applications that do not use CSP, and the data is a strong signal that we should continue pushing for its adoption.

[screenshot: security audit report]
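For illustration, a strict policy along these lines (a hypothetical example, not the actual addons.mozilla.org policy) blocks inline scripts and third-party script origins, which neuters most injected payloads:

```python
# Build a Content-Security-Policy header value from a directive map.
def build_csp(directives):
    return "; ".join(
        "{} {}".format(name, " ".join(sources))
        for name, sources in directives.items()
    )

policy = build_csp({
    "default-src": ["'none'"],
    "script-src": ["'self'"],   # no inline <script>, no third-party JS
    "style-src": ["'self'"],
    "img-src": ["'self'", "https://static.example.net"],  # hypothetical CDN
})
print(policy)
# default-src 'none'; script-src 'self'; style-src 'self'; img-src 'self' https://static.example.net
```

With `script-src 'self'` and no `'unsafe-inline'`, an injected `<script>` block simply doesn't run, even when the escaping layer misses it.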
So, how do we explain the discrepancy between what Mozilla, Bugcrowd, and Hackerone are seeing, and what other organizations report as top vulnerabilities? My guess is that a lot of vendors are reviewing very old applications that are still vulnerable to issues we've solved in modern frameworks, while Mozilla, Bugcrowd, and Hackerone mostly see modern apps. Another possibility is that those same vendors have no solution to XSS, but plenty of commercial solutions to other issues, and thus give those more weight as a way to promote their products. Or we could simply all have bad data and be drawing the wrong conclusions.

Regardless of what is causing this discrepancy, there’s evidently a gap between what we’re seeing as the most prevalent issues, and what the rest of the security community, and particularly the OWASP Top 10, is reporting. Surely, this is going to require more digging, so if you have data, please do share it, so we can focus security efforts on the right issues!



Thank you to Greg Guthe and Jonathan Claudius for reviewing drafts of this blog post

Monday, September 11 2017

Lessons learned from mentoring

Over the last few weeks, a number of enthusiastic students have asked me when registration for the next edition of Mozilla Winter of Security would open. I've been saddened to inform them that there won't be an edition of MWoS this year. I understand this is disappointing to many who were looking forward to working on cool security projects alongside experienced engineers, but the truth is we simply don't have the time, resources, and energy to mentor students right now.


Firefox engineers are cramming through bugs for the Firefox 57 release, planned for November 14th. We could easily say "sorry, too busy making Firefox awesome, kthnksbye", but there is more to the story of not running MWoS this year than the release of 57. In this blog post, I'd like to explore some of these reasons, and maybe share tips with folks who would like to become mentors.


After running MWoS for 3 years, engaging with hundreds of students and personally mentoring about a dozen, I learned two fundamental lessons:

  1. The return on investment is extremely low, when it isn't a direct loss to the mentor.
  2. Student engagement is very hard to maintain, and many students are just in it for the glory.
Those are hard-learned lessons that somewhat shattered my belief in mentoring. Let's dive into each.

Return on investment

Many mentors will tell you that having an altruistic approach to mentoring is the best way to engage with students. That's true for short engagements, when you spare a few minutes to answer questions and give guidance, but it's utter bullshit for long engagements.
It is simply not realistic to ask engineers to invest two hours a week over four months without getting something out of it. Your time is precious, have some respect for it. When we initially structured MWoS, we made sure that each party (mentors, students and professors) would get something out of it, specifically:
  • Mentors get help on a project they would not be able to complete alone.
  • Students get a great experience and a grade as part of their school curriculum.
  • Professors get interesting projects and offload the mentoring to Mozilla.
Making sure that students received a grade from their professors helped maintain their engagement (but only to some extent, more on that later), and ensured professors approved of the cost a side project would impose on their very busy students.
The part that mattered a lot to us mentors, besides helping train the next generation of engineers, was getting help on projects we couldn't complete ourselves. After running MWoS for three years and over a few dozen projects, the truth is we would have been better off writing the code ourselves in the majority of cases. The time invested in teaching students would have been better spent implementing the features we were looking for, because even when students completed their projects, the code quality was often too low for the features to be merged without significant rewrites.

There have been exceptions, of course, and some teams have produced code of good quality. But those have been the exceptions, not the rule. The low return on investment (often a negative return, when mentors invested time into projects that did not complete) meant it became increasingly hard for busy engineers to convince their managers to dedicate 5 to 10% of their time to supporting teams that would likely produce low-quality code, if any code at all.
It could be said that we sized our projects improperly, and made them too complex for students to complete. It's a plausible explanation, but at the same time, we have not observed a correlation between project complexity and completion. This leads into the next point.

Student engagement is hard to maintain

You would imagine that a student who is given the opportunity to work with Mozilla engineers for several months would be incredibly engaged, and would drop everything for the chance to work on interesting, highly visible, very challenging projects. We've certainly seen students like that, and they have been fantastic to work with. I remain friends with a number of them, and it's been rewarding to see them grow into accomplished professionals who know way more about the topics I mentored them on than I do today. Those are the good ones. The exceptions. The ones that keep you going when your other mentoring projects keep failing.

And then you have the long tail of students with very mixed interest in their projects. Some are certainly overwhelmed by their coursework and have little time to dedicate to their projects. I have no issue with overwhelmed students, and have repeatedly told many of my mentees to prioritize their coursework and exams over MWoS projects.

The ones that rub me the wrong way are students more interested in getting into MWoS than in actually completing their projects. This category of resume-padding students cares more about the notoriety of the program than the work they accomplish. They are very hard to notice at first, but after a couple of years of mentoring, you start to see the patterns: eagerness to name-drop, a GitHub account filled with forks of projects and no authored code, vague technical answers during interview questions, constant mention of their references and people they know, etc.
When you mentor students that are just in it for the glory, the interest in the project will quickly drop. Here's how it usually goes:
  • By week 2, you'll notice students have no plan to implement the project, and you find yourself holding their hands through the roadmap, sometimes explaining concepts so basic you wonder how they could not be familiar with them yet.
  • By week 4, students are still "going through the codebase to understand how it is structured", and have no plans to implement the project yet. You spend meetings explaining how things work and grow frustrated by their lack of research. Did they even look at this since the last meeting?
  • By week 6, you're pretty much convinced they only work on the project for 30min chunks when you send them a reminder email. The meetings are becoming a drag, a waste of a good half hour in your already busy week. Your tone changes and you become more and more prescriptive, less and less enthusiastic. Students nod, but you have little hope they'll make progress.
  • By week 8, it's the mid-term, and no progress is made for another month.
You end up cancelling the weekly meeting around week 10 and ask the students to contact you when they have made progress. You hear back from them three months later because their professor is about to grade them. You wonder how that's going to work, since the professor never showed up to the weekly meetings and never contacted you directly for an assessment. Oh well, they'll probably get an A just because they have Mozilla written next to their project...

This is a somewhat overly dramatic account of a failed engagement, but it's not at all unrealistic. In fact, in the dozen projects I mentored, this probably happened on half of them.
The problem with poorly engaged students is that they drain your motivation away. There is a particular light in the eyes of the true nerd-geek-hacker-engaged-student that makes you want to work with them and guide them through their mistakes. That's the reward of a mentor, and it is always missing from students who are not engaged. You learn to notice it after a while, but often long after the damage done by the opportunists has taken away your interest in mentoring.

Will MWoS rise from the ashes?

The combination of low return on investment and poorly engaged students, in addition to a significant increase in workload, made us cancel this year's round. Maybe next year, if we find the time and energy, we will run MWoS again. It's also possible that other folks at Mozilla, and in other organizations, will run similar programs in the future. Should we run it again, we would be a lot stricter in filtering students, and make sure they are ready to invest a lot of time and energy into their projects. This is fairly easy to do: throw them a challenge during the application period, and check the results. "Implement a crude Diffie-Hellman chat on UDP sockets, you've got 48 hours", or anything along those lines, along with a good one-hour conversation, ought to do it. We were shy about asking those questions at first, but it became obvious over the years that stronger filtering was desperately needed.
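For what it's worth, the core of that Diffie-Hellman exercise fits in a few lines. This toy sketch uses a small prime for readability; a real solution would use a standardized 2048-bit group and wire the exchange over UDP sockets:

```python
import secrets

# Toy public parameters; real DH uses a large standardized prime group.
P = 0xFFFFFFFB  # small prime, for illustration only
G = 5

def keypair():
    """Generate a private exponent and the matching public value."""
    private = secrets.randbelow(P - 2) + 1
    public = pow(G, private, P)
    return private, public

# Each chat peer generates a keypair and exchanges only the public part.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()

# Both sides derive the same shared secret from the other's public value:
# (g^b)^a = (g^a)^b mod p.
a_shared = pow(b_pub, a_priv, P)
b_shared = pow(a_pub, b_priv, P)
assert a_shared == b_shared
```

A candidate who can produce something like this, explain why the shared secrets match, and then say what makes the toy insecure (tiny prime, no authentication of the peers) has demonstrated exactly the engagement the filter is after.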

For folks looking to mentor, my recommendation is to open your organization to internships before you do anything else. There's a major difference in productivity between interns and students, mostly because you control 100% of an intern's daily schedule and can make sure they are working on the tasks you assign them. Interns often complete their projects and provide direct value to the organization. The same cannot be said of MWoS mentees.
