Bismuth: Building a Cloud
29 Jun 2024

This year, a good friend and I have been working on a new startup: Bismuth.
This is a (long) overdue post to accompany the talk Pete Markowsky and I gave at REcon 2023 about our work on Warpspeed: a time-travel debugger for macOS.
Around the same time I wrote my previous post about snapshot fuzzing, I was thinking about other ways to restore program state for fuzzing, ideally in userland for ease of use.
tl;dr: diffusion.gallery is a website I put together which feeds random prompts from OpenAI into Stable Diffusion. It’s pretty neat.
Some time ago, I was working on a server to generate images from weather RADAR data (a separate post on this will come at some point). As part of this, I spent a few hours profiling my code and found a tiny “bug” in the open source library I was using to parse one type of RADAR data.
If you follow AWS closely, you may have heard about a niche product launch a few years back called Ground Station, which lets you rent, well, a ground station (basically a big antenna plus supporting equipment to communicate with satellites). A friend recently linked me an AWS blog post with a sample use case describing how to use it to receive real-time imagery from orbiting weather satellites. Funny enough, receiving data from polar-orbiting weather satellites has been a side project of mine for over a decade now, but living in NYC has put a bit of a hold on it. I used to have a home-built QFH antenna which I used to receive images with a surprisingly high success rate, given its janky construction.
A few weeks ago, I found and reported CVE-2022-25636 - a heap out-of-bounds write in the Linux kernel. The bug is exploitable to achieve kernel code execution (via ROP), giving full local privilege escalation, container escape, whatever you want.
Early on in the pandemic, there was a good amount of discussion on Twitter about indoor CO2 levels, as more people were spending time exclusively at home, often in a single, small room for hours on end. Since I was one of those people spending nearly the entire day in a single room, I decided to look around for a CO2 monitoring system. While a simple "alert after levels rise above x ppm" would have been sufficient, I really wanted one that could log data to a remote system, so that I could monitor it throughout the day and look back on historical data from any computer. After being thoroughly disappointed with what was on Amazon (nothing at a reasonable price point seemed able to send data to a remote server), I decided it would be a nice little project to build my own.
Right as the pandemic was starting in March/April 2020, I spent a couple of weekends writing a Loadable Kernel Module (LKM) for Linux, designed to add a syscall that a fuzzer could use to quickly restore program state instead of relying on a conventional fork/exec loop. This was originally suggested on the AFL++ Ideas page, and it nicely intersected a bunch of stuff I'm familiar with, so I wanted to take a crack at it.
Given the recent series of issues with Google Cloud, I decided it was time to jump ship and look at other providers for this blog (and, most likely, eventually the rest of my sites).
While writing up my recent post about debugging a problem with CSAW CTF's website this year, I remembered this post, which I started writing about a year ago, documenting all of the "fun" stories I have from running CTF over the past four years.
If you competed in this year's CSAW CTF, you may have noticed that the site was pretty sluggish until around 1am EST on Sunday. This post is a walkthrough of how I went from noticing this sluggishness, to debugging the issue, to putting in a fix that decreased page load times by over 10x.
When exploiting a program, there are four primary regions of memory that matter to us:
A recent post to the OSS Security mailing list brought up a potential DoS fixed in Linux about a year ago. This got a decent amount of attention on Twitter, and so I decided to see if I could create a proof-of-concept for this relatively simple bug.
In the fall of 2017, hyper and I co-created and co-taught a new class at NYU Tandon: Introduction to Offensive Security. We wanted to create a course that taught the basics of what's needed in, well, offensive security (playing CTFs, doing pentests, etc.). It was very well received that semester, and is now being re-taught for the third time by Prof. Brendan Dolan-Gavitt, who supervised Josh and me when we taught the course for the first time.
Recently, I was looking for a nice, unified way to traverse large open-source projects. The OSIRIS Lab previously had a DXR instance, but it ended up not being able to support some projects I wanted to index due to the way it works (a clang pass). I looked around a bit and decided to give OpenGrok a try, and I've been very happy with the results. Seems to be the one good product Oracle makes :P
I’m happy to say I’m finally opening up “Weather Explorer”, a project I’ve been working on in my spare time for the past two years.
Since I interned at MIT Lincoln Laboratory in the summer of 2016, I've been working on an extension of the work I did there. While it's still not finished, it's a pretty big chunk of work that deserves to be on this website somewhere :)
Today while setting up a new Proxmox node in my cluster, I ran into a “fun” issue.
This semester, hyper and I have been working on developing the basis for our own Cyber Reasoning System (CRS). The slides from our presentation at the OSIRIS Lab’s end of year meetup are here.
(Cross-posted from my entry in the OSIRIS Lab’s blog: https://blog.isis.poly.edu/2017/09/25/csaw-ctf-2017-infra/)
For the past five years or so, I've been looking for a way to get streaming weather data pushed to me. Originally I wanted Level 2 RADAR products so I could create my own composites/renders, but I couldn't find a good source that would push them to me, and even if I could, I didn't have the capacity to process all of that data in real time. The IEM makes Level 2 data available over HTTP, and grabbing individual files as I needed them to experiment was good enough at the time.