Recently I worked with a customer to apply proxy settings to the Azure DevOps Server’s App Tier components, allowing them to route outbound HTTP traffic through their company’s corporate web proxy.
But what happens to Service Hooks that post HTTP requests to endpoints that do NOT need to be routed through the corporate proxy? Well, as stated in my previous article, you can configure a list of HTTP endpoints that do not need to pass through the proxy.
But how do you know what URLs are being posted to out in the wild?
Below I’ve written a quick script to pull all Service Hooks by project and pop them into a CSV. With this you can identify any URLs that need to bypass the corporate proxy.
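As a rough sketch of the idea, here is a Python version that walks the `_apis/hooks/subscriptions` REST endpoint and flattens the results into a CSV; the helper names and field choices are my own, not the original script’s:

```python
import base64
import csv
import json
import urllib.request


def subscriptions_to_rows(subscriptions):
    """Flatten Service Hook subscription JSON into rows for a CSV report."""
    rows = []
    for sub in subscriptions:
        rows.append({
            "projectId": sub.get("publisherInputs", {}).get("projectId", ""),
            "eventType": sub.get("eventType", ""),
            "targetUrl": sub.get("consumerInputs", {}).get("url", ""),
        })
    return rows


def fetch_subscriptions(collection_url, pat):
    """Call the hooks/subscriptions endpoint using PAT basic auth."""
    token = base64.b64encode(f":{pat}".encode()).decode()
    req = urllib.request.Request(
        f"{collection_url}/_apis/hooks/subscriptions?api-version=6.0",
        headers={"Authorization": f"Basic {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["value"]


def write_report(rows, path):
    """Write the flattened subscriptions out as a CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["projectId", "eventType", "targetUrl"])
        writer.writeheader()
        writer.writerows(rows)
```

The `targetUrl` column is the one you would scan for hosts that should be added to the proxy bypass list.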
Chat-based collaboration and workstreams are all the rage in today’s largely remote workforce. Azure DevOps is no stranger to this need, and as such Microsoft announced back in 2017/2018 newly added support for Azure DevOps integration with Microsoft Teams via Service Hooks. With this integration, developers can configure service hooks that notify channels in Microsoft Teams of various events, from code pushes and pull requests to build output and releases. This helps keep the development team informed throughout the development process.
Excited to try it out, my colleague Vito Lodese and I recently worked to bring this tool to a large enterprise customer of Azure DevOps. If you look at the Microsoft documentation, setting it up is pretty straightforward. We were quickly able to set it up in Azure DevOps Services with little to no friction. But what about customers running Azure DevOps Server behind a corporate proxy?
Looking across the web you can find lots of documentation on configuring Pipelines agents for a corporate proxy. But requests sent to service hooks do not come from pipelines agents; they are issued by the Azure DevOps App Tier servers. So how do we configure the App Tier to respect a corporate proxy?
The App Tier servers of Azure DevOps have several main components, two of which are the “Background Job Agent” and the “Web Services”. Each of these is a .NET-based application and as such can be configured much the same as any .NET application. Details on this configuration can be found here.
In your Azure DevOps Server installation path you will find two configuration files that will need to be updated. First, let’s start with the Web Services component, with its configuration located at C:\Program Files\Azure DevOps Server 2020\Application Tier\Web Services\web.config. We will need to add a new <system.net> section directly under the root <configuration> element, so it should look much like this:
```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  ...
  <system.net>
    <defaultProxy useDefaultCredentials="true">
      <proxy usesystemdefault="True" proxyaddress="CORPORATE PROXY URI" bypassonlocal="True"/>
    </defaultProxy>
  </system.net>
</configuration>
```
Make sure to avoid having two <system.net> sections in case one is already present. Restart the two application pools for Azure DevOps Server from the IIS Management Console.
Next you will need to make the same change to C:\Program Files\Azure DevOps Server 2020\Application Tier\TfsJobAgent\TfsJobAgent.exe.config. Restart the Background Job Agent via the Services Management Console. Given a few moments, you can go ahead and configure the Service Hook for Microsoft Teams.
One last thing to note: some customers will find that users have configured Web Hooks that post notifications to URLs that do not need to be routed through the corporate proxy. In such cases you may use the <bypasslist> node under <defaultProxy>. More details on how to configure this setting can be found here.
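As a sketch, assuming hypothetical internal hostnames, a bypass list might look like this (the address values are regular expressions, per the .NET defaultProxy configuration schema):

```xml
<system.net>
  <defaultProxy useDefaultCredentials="true">
    <proxy usesystemdefault="True" proxyaddress="CORPORATE PROXY URI" bypassonlocal="True"/>
    <bypasslist>
      <!-- Hosts matching these patterns skip the corporate proxy -->
      <add address="internal\.example\.com" />
      <add address="10\.\d+\.\d+\.\d+" />
    </bypasslist>
  </defaultProxy>
</system.net>
```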
Has your Azure DevOps Server collection database grown so large it’s become unmanageable? Tired of your database backups running long? Do you suspect your users have been storing all their favorite cat videos in their Team Project’s TFVC repository?
It’s all too common for developers to mistakenly store large binary files in source control. Although there are scenarios where this might make sense, generally speaking it isn’t a great idea: it bloats your collection databases, slows down backups, makes database replication seeding slower, and eats up drive space when there are better long-term storage options for such data, like S3 storage.
A fine example of the large binary objects I’ve seen sprinkled throughout Azure DevOps databases are Java SDK installers, MS VC++ Redistributable installers, and the like. Inadvertently, developers may upload these to Azure DevOps for safe keeping. Little do they know that a dozen other developers in the organization have all uploaded the exact same installer to the database, duplicating hundreds of gigabytes of data for no reason. If left unchecked your database can grow wildly out of control.
The script below can easily be run against your collection database to locate the largest files, the project they reside in, the changeset they were committed in and when, and the brilliant mind behind uploading such a file into your precious Azure DevOps data. :-)
The only thing left is to actually destroy the files. Head over to our latest docs to read about the tf destroy command. Don’t forget that your database will not remove the files immediately after you destroy them. There is an automatic cleanup job that runs within a week to clean up the file metadata from your database. To force this cleanup, use the /startcleanup flag when you destroy the file in question.
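As a sketch, with a placeholder server path and collection URL, a destroy with immediate cleanup might look like this:

```
tf destroy "$/ProjectA/Installers/jdk-installer.exe" /collection:https://your-server/DefaultCollection /startcleanup
```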
After you’ve done a solid round of cleanup, the database size on disk won’t necessarily shrink, since the database will still retain its overall size after the data is destroyed. You will need to run DBCC SHRINKDATABASE to ensure the database reduces its footprint on disk.
Do you manage a team of developers that love to share code with each other and the community? Chances are your team is using GitHub’s Gist feature to share quick snippets of code with each other and even the community. This feature of GitHub is extremely useful as a lightweight code sharing and reference tool! But along with its ease of use comes the all-too-common mistake of pasting and saving tokens and secrets that are not safe to be shared publicly.
The PowerShell script below can be used to identify Azure DevOps Personal Access Tokens and GitHub Personal Access Tokens in your team’s public GitHub accounts.
Let’s get started!
First, let’s set up a CSV file that contains 3 columns; only one column is really needed, the GitHubUsername column. The other columns are used to make the output report a little more denormalized.
Your input CSV should look something like this:
"GitHubUsername","FullName","Email" "akanieski", "Andrew Kanieski", "email@example.com"
You can populate this list with the known GitHub usernames of all your team members.
Next, the script cross-references each PAT and validates whether it is currently in use by attempting to use it to access the GitHub REST API. If the attempt is successful, it marks the token as “confirmed”.
Below you will see the breakdown of suspicious secrets in your team’s GitHub accounts, as well as whether or not each PAT is active. You will also get a CSV report with the details.
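To illustrate the detection step, here is a minimal sketch in Python (the actual script is PowerShell, and the token patterns below are my own simplified assumptions: GitHub classic PATs start with “ghp_”, and Azure DevOps PATs have historically been 52-character lowercase strings):

```python
import re

# Simplified, assumed token shapes -- real scanners use more precise rules.
TOKEN_PATTERNS = {
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "azdo_pat": re.compile(r"\b[a-z2-7]{52}\b"),
}


def find_suspect_tokens(text):
    """Return (kind, token) pairs for anything in a Gist that looks like a PAT."""
    hits = []
    for kind, pattern in TOKEN_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((kind, match))
    return hits
```

A confirmation pass would then take each `github_pat` hit and attempt an authenticated call against the GitHub REST API; a 200 response marks the token as “confirmed”.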
Just like above, the script can scan your team members’ public Gists, but instead of verifying that the token is valid and active with GitHub, it can spot Azure DevOps Personal Access Tokens and confirm whether they are active with Azure DevOps, both “Services” and “Server”.
Currently the script uses a simple regular expression to identify the personal access tokens. More advanced secrets scanners, like Microsoft’s CredScan and the scanning features of GitHub Advanced Security, use other algorithms to identify possible secrets in your code. A common way of identifying a secret is to measure a string’s entropy. But alas, the goal of this script is to provide a quick way of scanning, not to build a robust scanning tool.
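For the curious, the entropy idea can be sketched in a few lines of Python: random secrets use many distinct characters roughly evenly, so their Shannon entropy per character is high, while ordinary identifiers score low.

```python
import math
from collections import Counter


def shannon_entropy(s):
    """Bits of entropy per character, based on character frequencies in s."""
    if not s:
        return 0.0
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())
```

A scanner built on this would flag strings whose per-character entropy crosses some threshold; picking that threshold well is exactly the hard part this quick script sidesteps.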
As you use this script you will quickly find that GitHub throttles its API access quite aggressively, at 60 requests per hour for unauthenticated callers. If you pass a GithubToken to the script, it will use it to authenticate with GitHub, increasing your rate limit to 5,000 requests per hour.
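The mechanics of that are simply an Authorization header on each API call; a minimal sketch in Python (the function name is mine):

```python
import urllib.request


def github_request(url, token=None):
    """Build a GitHub API request; adding an Authorization header raises the
    unauthenticated 60/hour rate limit to 5,000/hour."""
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"token {token}"
    return urllib.request.Request(url, headers=headers)
```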
It’s a busy work week, your backlog seems never-ending, and you’re rushing to get things pushed out to production. You think, “I’ve got a new configuration for my Frontdoor that I want to deploy; I’ll just tear down the old one and push that ARM template to deploy its replacement.” You fire off the delete command. Once it’s done you push the latest scripts for deployment and go get coffee. You come back to find that although the delete was successful, the deployment failed. You check the error logs: “Name already in use”.
You think, meh, no problem, I’ll just run the deployment again; maybe the delete hadn’t fully committed before the replacement was deployed with the same name. You run it again: “Name already in use”. You triple check. Same. You go to your resource explorer looking for a Frontdoor with the same name. It’s not there. What’s going on?
You go to visit your application to see if it’s running. You swing over to app.sample.com, which should, by way of a CNAME entry on your domain, route you directly to your Frontdoor. Instead, you find that it takes you to some other website entirely, another website being hosted under your subdomain. Have I been hacked?
The scenario I describe above is what’s known as a “Dangling DNS Subdomain Takeover”, and is a common way for bad actors to gain unintended access to hosting a site in your subdomain. Let’s break down how it works!
First off, what is a CNAME entry? A CNAME is a DNS record that allows its owner to point the DNS resolution for a given domain or subdomain to another hostname. It is a commonly used tool to direct traffic to a multi-tenanted or shared hosting environment, like Azure Frontdoor, Azure App Services, Google’s App Engine or Amazon’s Elastic Beanstalk. So in practice the CNAME entry for our example above might look like this:
CNAME app.sample.com -> app-sample-demo.azurefd.net
In this example we have a subdomain, app.sample.com, pointing to app-sample-demo.azurefd.net. This allows us to host our site behind Azure Frontdoor using the user-defined URL from Azure Frontdoor. Now let’s say you’re using a custom domain and custom SSL certificate on app.sample.com, so you’re not using Azure DNS to manage your DNS, for reasons I’ll get into later in this article.
So put yourself into the grimy shoes of a malicious actor, trying to steal cookies from the app hosted in your subdomain. After some reflection you might realize there is a weakness in this chain. What happens if the app owner deletes the shared resource, even if just for a moment? Many hosting providers will release the auto-generated hostname assigned to the resource. In the case of my example, app-sample-demo.azurefd.net gets released. You think to yourself: what if I write a script to swoop in as soon as it’s deleted and provision my own Azure Frontdoor instance with the same name as the target of the attack, now newly released and ready to be reused?
You can see where this is going. Now a brand new Azure Frontdoor is online, with the same name as your old Frontdoor, except it’s not owned and operated by you! They spin up their malicious app, and maybe they collect your users’ cookies, or worse, they create an identical login screen and start harvesting users’ credentials.
This is essentially a typical “Dangling DNS Subdomain Takeover”. The “Dangling DNS” refers to the CNAME entry that is left pointing to a non-provisioned resource in a multi-tenanted hosting service.
In the scenario described above there are a handful of mechanisms customers can employ to mitigate this risk. Microsoft has a fantastic article that explains this type of attack and lays out a few key ways of mitigating it, along with some ways of stopping it dead in its tracks before it even happens. Read more here. Use of Azure DNS with “Aliases” addresses this issue. Also note that the article covers other proactive measures that can be taken even if you’re not ready to move to Azure DNS.
This particular issue is not unique to Azure, and neither are the solutions. Securing your domains and DNS entries from this sort of attack is crucial to maintaining security!
The key takeaway: never delete a resource that backs a CNAME entry in your DNS without first redirecting or removing the CNAME record!
In my experience working with enterprise implementations of Azure DevOps Server and Services, I’ve found it fairly common for organizations to need to migrate work items and other entities. Some common scenarios include:
Sometimes these scenarios may be simple: just move item 1 from project A to project B. But other scenarios can be fairly complex. For example, moving a work item from a CMMI XML Process Model to an Agile-flavored Inheritance Process model would require field mappings from one field to another, including work item type changes. As these sorts of scenarios come up, it can be daunting to create a set of scripts to automate the process using the out-of-the-box REST API for Azure DevOps.
That being said, I would like to introduce a project I’ve been working on, the …
This tool was written to create a simple user experience for migrating work items and other entities between projects in Azure DevOps. Working directly with the Azure DevOps REST API takes work, and administrators may not have the time needed to script out migrations. This tool aims to make that experience easier.
This tool is maintained by its author, Andrew Kanieski, with the support of the community and is not an official or supported product of Microsoft. Its design and intent do not reflect the views of Microsoft.
A special thanks to the teams of people that I have worked with over the years that have lent ideas and requested features that have found their way into this tool. Together as a community we can make our work easier and more enjoyable!
If you are looking for a feature that’s missing or need to report a bug, please follow up in the project’s GitHub Issues section.
Disclaimer: This product is not a Microsoft product and support for it is handled by the community. Please keep this in mind when using it. If you encounter an issue, feel free to submit a bug using the link above.