Let's first share some facts: the term 'DevOps' alone ignited the author's initial, huge interest in the cloud.
He liked it.
He wanted to be just that.
And,
He got what he wanted.
Welcome to the culmination of a year dedicated to mastering security in the cloud.
During this phase, we strive to convert all the manual endeavors invested in operations into code, guaranteeing precision and efficiency within our codebase and pipelines.
This is a fundamental aspect of the DevOps philosophy and practices.
DevOps is the organizational culture and set of practices that aims to bridge the gap between software development (Dev) and IT operations (Ops).
It's not just a methodology; it's a transformative approach to software delivery and infrastructure management.
Imagine DevOps as the connective tissue within an organization, aligning teams, processes, and tools to achieve a common goal: delivering high-quality software rapidly and reliably.
It emphasizes collaboration, communication, and integration throughout the entire software development lifecycle.
Deciphering the Sec's Link #
DevSecOps extends the principles of DevOps to incorporate security seamlessly into the software development process, rather than treating it as a separate and later-stage concern.
Ultimately, DevSecOps aims to build a culture of shared responsibility for security among developers and operations teams, rather than treating security as a separate entity.
This promotes a proactive and continuous approach to security, ensuring that it becomes an integral part of the development process, or pipeline, from the outset.
Sprint Objectives #
This sprint aims to empower you with the DevOps way of working with Microsoft Sentinel: provisioning the instance, deploying the provisioning code to an Azure DevOps repository, configuring the required connectors, and designing the pipeline, all via code.
- Instance Creation:
  - Start by creating the Sentinel instance, forming the foundational element for security orchestration.
- Connector Integration:
  - Connect the instance to relevant connectors to unlock its intrinsic value.
  - Establish connectivity to diverse data sources, broadening the scope of the Sentinel instance.
- Analytics Rules:
  - Formulate analytics rules to strengthen the system against potential threats.
  - These rules act as vigilant sentinels, promptly alerting the system to any unusual or suspicious activities.
- Workbook Creation:
  - Develop workbooks to introduce visual insights.
  - These dynamic dashboards offer a panoramic view of organizational trends, patterns, and key metrics.
- DevOps Shift:
  - The journey commenced with a focus on operations, aligning with operational needs and requirements.
  - The shift to code ensures efficiency, consistency, and adaptability in managing and securing the digital environment.
We have executed every aspect of this through explicit, manual operations.
This approach has allowed us to establish a robust foundation, ensuring alignment with operational needs and requirements.
Now, as the natural progression, we stand ready to shift from these manual processes towards automated workflows, using code alone with the leading IaC tool.
Disclaimer: This isn't a strictly linear, step-by-step progression; instead, it serves as a systematic guide designed to inspire action with minimal explicit instructions.
Closing The IaC Gap #
We'll craft our Terraform files following the best methodology possible, but now comes the exciting phase.
I genuinely believe that bridging the gaps around Terraform deserves more attentive consideration and improvement. Here you have it.
There are three key commands in the Terraform process.
terraform init
terraform plan
terraform apply
Firstly, during file preparation, we execute the terraform init command. This step handles tasks such as installing the required providers and initializing the working directory.
Next is the planning phase, initiated by the terraform plan command. This step informs you about the prospective changes that will occur if you decide to provision your infrastructure.
Lastly, the terraform apply command is executed. This command takes your code and transforms it into the actual infrastructure.
This brief overview encapsulates the basics of Terraform, serving as our Terraform 101.
Pre-Coding Preparations #
Now that you have a basic understanding of how Terraform works, let me outline what you need to know before you start coding. Depending on your cloud infrastructure, specific configurations tailored to that platform must be set up before you can begin developing your code.
Providers In Terraform #
Providers enable Terraform to manage resources on different platforms, such as cloud providers, databases, and more.
They serve as connectors between Terraform and the APIs of the target infrastructure. Each provider has its own set of resources and data sources that can be utilized in Terraform configurations.
The Azure Cloud Provider Configuration #
We are currently engaged with Microsoft Sentinel, a component of the extensive Azure ecosystem. Therefore, it's essential to configure the Azure provider for our work.
To include a provider in your Terraform configuration, you need to declare it in a configuration file, e.g., main.tf or provider.tf, depending on team preferences.
terraform {
  required_version = ">= 1.0.0"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 2.90.0"
    }
  }
}
Above is an example snippet of Terraform code that includes the provider for Azure. These provider blocks are available and open source on the Terraform Registry at registry.terraform.io.
Provisioning Microsoft Sentinel #
Once you have configured the provider, you are ready to proceed with coding the specific resource you intend to provision—in our case, our Azure SIEM product.
Develop Terraform Code for Log Analytics Workspace: #
Develop the Terraform code to provision Microsoft Sentinel, starting with a resource definition for the Log Analytics workspace.
provider "azurerm" {
features = {}
}
resource "azurerm_log_analytics_workspace" "example" {
name = "example-law"
location = "East US"
resource_group_name = "example-rg"
sku = "PerGB2018"
retention_in_days = 30
}
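Note that the workspace above references a resource group named example-rg, which must already exist. If you prefer to keep the resource group under Terraform management as well, here is a minimal sketch using the same illustrative names:

resource "azurerm_resource_group" "example" {
  # Resource group assumed by the workspace definition above
  name     = "example-rg"
  location = "East US"
}

With this in place, the workspace could reference azurerm_resource_group.example.name instead of the hardcoded string.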
Develop Terraform Code for Sentinel Instance #
Configure the Microsoft Sentinel solution itself within that workspace.
resource "azurerm_sentinel" "example" {
name = "example-sentinel"
resource_group_name = "example-rg"
workspace_id = azurerm_log_analytics_workspace.example.id
plan {
publisher = "Microsoft"
product = "OMSGallery/AzureSentinel"
}
}
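Worth noting: recent releases of the azurerm provider (3.x and later) also expose a dedicated onboarding resource for Sentinel. A minimal sketch, assuming a sufficiently recent provider version:

resource "azurerm_sentinel_log_analytics_workspace_onboarding" "example" {
  # Enables Microsoft Sentinel on the workspace defined earlier
  workspace_id = azurerm_log_analytics_workspace.example.id
}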
Deploy Provisioning Code to Azure DevOps Repo #
Consider pushing the code to a secure location, such as an Azure repository.
This allows us to have controlled versioning for our code going forward.
This step ensures version control, collaborative development, and the ability to track changes over time, especially now that we have successfully completed the basics of creating the instance.
In these scripts together, we defined an Azure Log Analytics workspace and configured the Microsoft Sentinel solution within that workspace.
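A typical push to an Azure Repos remote looks something like the following; the organization, project, and repository names are placeholders you would replace with your own:

git init
git add main.tf
git commit -m "Provision Log Analytics workspace and Sentinel solution"
# Replace <org>, <project>, and <repo> with your Azure DevOps details
git remote add origin https://dev.azure.com/<org>/<project>/_git/<repo>
git push -u origin main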
Configuring Connectors #
Following the successful deployment of the SIEM instance, the next step in this sprint is to configure connectors to integrate with various data sources.
This involves extending the Terraform configuration to include connector settings.
Develop Terraform Code for Connector Configuration #
After provisioning Microsoft Sentinel, develop Terraform code to configure connectors.
As a basic structure, here is a Microsoft Defender for Cloud (formerly Azure Security Center) connector:
resource "azurerm_sentinel_data_connector_azure_activity_log" "example" {
name = "example-activity-log-connector"
subscription_id = "your-subscription-id"
log_analytics_workspace_id = azurerm_log_analytics_workspace.example.id
resource_group_name = "example-rg"
}
Below is an example of how a Microsoft 365 Defender connector can be configured:
resource "azurerm_sentinel_data_connector" "m365_defender" {
display_name = "M365DefenderConnector"
solution_id = azurerm_sentinel_solutions.example[0].id
connector_id = "m365defender"
connector_kind = "DataConnector"
data_types = ["Alert"]
data_types_mappings = { "Alert" = ["Microsoft.ATP"] }
data_types_validation = "Disabled"
}
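Other connectors follow the same pattern. For instance, an Office 365 connector with optional per-service toggles, sketched here with illustrative names:

resource "azurerm_sentinel_data_connector_office_365" "example" {
  name                       = "example-office365-connector"
  log_analytics_workspace_id = azurerm_log_analytics_workspace.example.id
  # Optional toggles for the individual Office 365 data types
  exchange_enabled   = true
  sharepoint_enabled = true
}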
Deploy Connector Code to Azure Repository #
Similar to the provisioning code, deploy the Terraform connector code to an Azure repository. Proper versioning and collaboration are essential for managing changes to connector configurations.
Following a deliberate methodology, I've structured the process so that you start by coding both your workspace and Sentinel instance and pushing them to the repository. Afterward, proceed to code the connector and push it to the repository as well.
We'll now address this Azure repo itself, allowing you to move forward with these steps and highlighting the bigger product.
All-In-One DevUnityOps Platform #
Consider Azure Repos as a platform akin to GitHub, albeit more tailored to organizational needs. It is an integral part of the broader array of services encompassed within Azure DevOps.
- Azure Repos is a version control system.
  - Version control for code; supports Git.
  - Enables collaboration among development teams.
- Azure Pipelines is a CI/CD service.
  - Automates CI/CD processes; supports many languages and platforms.
  - Supports building, testing, and deploying applications.
- Azure Boards is a work tracking system.
  - Work tracking and project management with support for agile methodologies.
  - Provides tools for backlog management and sprint planning.
- Azure Test Plans is a testing tool.
  - Supports manual and automated testing.
  - Integrates with Azure Pipelines for continuous testing.
- Azure Artifacts is a package management system.
  - Package and dependency management for projects.
  - Hosts and shares packages like NuGet, npm, and Maven.
Here, we leverage two robust services.
Azure Repos serves as the repository for all our infrastructure-as-code configurations, while Azure Pipelines, in the subsequent step, will be employed to execute the processes.
Design Your Own Pipeline #
The pipeline combines those instructions, and potentially more, offering the convenience of completing the entire task seamlessly upon successful execution.
This concept is typically captured in a YAML file; you can name it azure-pipelines.yml, the conventional name in Azure DevOps.
trigger:
  - main

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Deploy
    jobs:
      - job: TerraformPipeline
        steps:
          # Other jobs and tasks can be added here
          - script: terraform init
            displayName: Terraform Init
          - script: terraform plan -out=tfplan
            displayName: Terraform Plan
          - script: terraform apply tfplan
            displayName: Terraform Apply
You also have the option to delegate the commands to a script and simply call the script.
Here is the init command scripted:
#!/bin/bash
# Display a message indicating the initialization process
echo "Initializing Terraform..."
# Run the terraform init command
terraform init
# Display a completion message
echo "Terraform initialization completed."
There is an intentional design here; at first glance, you might not perceive a significant difference.
However, consider the impact if additional commands were incorporated.
The goal is to empower you and provide a thoughtful structure you can leverage.
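Calling the script from the pipeline then becomes a single step. A minimal sketch, assuming the script is saved as scripts/init.sh and marked executable:

steps:
  - script: ./scripts/init.sh
    displayName: Terraform Init (scripted)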
Trigger Pipeline for Terraform Workflow #
Once you have meticulously designed and configured your pipelines within the Azure DevOps environment to seamlessly orchestrate the Terraform workflow, you can anticipate a well-organized and efficient execution process.
- Design The Pipelines: Design an Azure DevOps pipeline to orchestrate the Terraform workflow. This pipeline should ensure the seamless execution of the provisioning and connector configuration steps.
Once you have completed all necessary preparations, you can proceed by pushing your code to the Azure repository and initiating the Azure Pipelines process.
After the Azure DevOps pipeline has run successfully, verify the provisioned resources on the Azure portal.
Confirm the existence and configuration of the Log Analytics workspace, Microsoft Sentinel solution, and any configured connectors.
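You can also spot-check from the command line. A quick sketch using the Azure CLI, reusing the example names from the earlier configuration:

# Confirm the Log Analytics workspace exists and is fully provisioned
az monitor log-analytics workspace show \
  --resource-group example-rg \
  --workspace-name example-law \
  --query provisioningState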
Sentinel Version Controllable #
Within the SIEM system itself, Microsoft Sentinel offers the capability to seamlessly integrate with your version control service.
Given that we have already provisioned our product using Azure Repos, it makes sense to extend this integration to include GitHub.
GitHub Repository Connection Steps #
- Create a New Deployment Connection: Start by creating a new deployment connection to establish a link between your Azure Pipelines and GitHub.
- Specify Connection Details: Provide essential details such as the connection name, description, and the source control system. In this case, we are focusing on GitHub.
- GitHub Authentication: Authenticate your Azure Pipelines service with GitHub by using the GitHub identity provider (IdP). This step ensures secure access to your GitHub repositories.
- Access GitHub Repositories: Once authenticated, you gain access to all your GitHub repositories. Choose the specific repository that you intend to use for this deployment.
- Create or Select a Repository: Create a new repository if needed or select an existing one for the current deployment scenario.
- Associate Branch: Specify the branch you will be using for deployment; for instance, you may have the main branch. If you require a dev branch other than the main one, ensure that you've set it up on your GitHub repository before establishing the actual connection.
- Define Version Control Content: Determine the content you want to include in your version control. For example, if you are working with workbooks and playbooks, specify accordingly.
- Service Principal Secret: During the configuration, you will obtain a Service Principal secret. Leave it as is for now, as you can rotate it later for security purposes.
- Complete the Configuration: Once all the necessary details are specified, click "Create" to establish the connection and complete the configuration process.
Once the integration is established, you gain the ability to code features directly. This process will generate both a YAML file and a PowerShell script that execute the necessary steps to push your code and convert it to Sentinel format.
Upon completion, you'll conveniently find your coded rules or workbooks within the product interface, streamlining the development and implementation process.
Streamlining collaboration with Sentinel through DevOps introduces a more user-friendly approach to your workflow.
This methodology allows you to seamlessly address various product components using code. For instance, envision the simplicity of managing analytics rules by creating a dedicated directory brimming with YAML files.
Within these files, you can precisely specify rule configurations, inclusive of their associated KQL queries. For the demonstrated workbooks, the method involves pushing them in JSON format.
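If you prefer to keep even the analytics rules in Terraform rather than YAML, the azurerm provider exposes scheduled rules as a resource. A minimal sketch, with an illustrative rule name and KQL query:

resource "azurerm_sentinel_alert_rule_scheduled" "example" {
  name                       = "example-failed-signin-rule"
  log_analytics_workspace_id = azurerm_log_analytics_workspace.example.id
  display_name               = "Burst of failed sign-ins"
  severity                   = "Medium"
  enabled                    = true

  # Illustrative KQL: flag accounts with more than 20 failed sign-ins in 5 minutes
  query = <<QUERY
SigninLogs
| where ResultType != 0
| summarize FailedAttempts = count() by UserPrincipalName, bin(TimeGenerated, 5m)
| where FailedAttempts > 20
QUERY

  query_frequency   = "PT5M"
  query_period      = "PT5M"
  trigger_operator  = "GreaterThan"
  trigger_threshold = 0
}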
Monitoring and Maintenance #
After the successful implementation of all the technologies, it is crucial to remain vigilant and actively monitor activities within the organization.
With a comprehensive set of rules to detect potential threats and an array of insightful workbooks to analyze data, you will significantly enhance the security posture in the digital realm.
- Vulnerability Assessment: Conduct regular vulnerability assessments of the Microsoft Sentinel environment. Utilize security scanning tools to identify and address potential vulnerabilities in the product, incident dashboard, and related components.
- Incident Response Drills: Periodically conduct incident response drills to test the effectiveness of the security measures in place. Simulate different types of security incidents to evaluate the team's response capabilities and identify areas for improvement.
- Periodic Security Audits: Conduct periodic security audits to assess the overall security posture of Microsoft Sentinel. Engage in comprehensive reviews of configurations, access controls, and policies to identify and address any security gaps.
- Engagement with Security Community: Stay engaged with the broader security community to keep informed about industry trends, emerging threats, and best practices. Participate in forums, conferences, and online communities to benefit from shared insights and experiences.
You can now redirect your attention to what truly matters most. Continuous monitoring serves as an invaluable tool, allowing you to assess your vendors' real-time security postures.
This proactive stance is instrumental in promptly addressing vulnerabilities and potential compromises, ensuring a robust and resilient security framework for sustained success.
Wrapping Up DevSecOps Insights #
The second release was a game-changer, offering a clear roadmap for translating our security operations into a coded framework.
This move not only seamlessly integrated security with DevOps but also marked a major leap in leveraging the full potential of our year-long cybersecurity efforts.
We delved into the intricacies of initiating the design and provisioning of cloud infrastructure through Infrastructure as Code.
Among several options, we chose the widely acclaimed Terraform by HashiCorp as our preferred tool.
We guided you through the tool's initiation process, elucidating its merits and highlighting three pivotal commands integral to your workflow.
Navigating through the journey, we demonstrated the connection to our Azure cloud using the provider block, systematically coding the requisite instances within a workspace and establishing connectors.
Furthermore, we shed light on the optimal storage location for all these components in Azure DevOps.
As we approached the conclusion, we emphasized a noteworthy capability within the product itself: integration with version control platforms like GitHub.
This integration facilitates the seamless transition of operations as code, empowering you to push updates efficiently.
Turning security into code has unlocked a wealth of possibilities, empowering us to redefine our approach to cybersecurity.
This synergy goes beyond improving workflows; it's a strategic move that positions us to be more agile and proactive against emerging threats.
In realizing the full potential of this transformative year in cybersecurity, we've not only strengthened our defenses but also instilled a culture of continuous improvement and innovation.
The fusion of security and DevOps principles sets the stage for a more dynamic and responsive cybersecurity stance, preparing us confidently for the upcoming challenges.