Monday, August 22, 2016

Unattended installation of Octopus Server - PowerShell DSC



I’ve been using Octopus Deploy at work, and created a custom DSC resource to do the installation and configuration. The resource is now available in the PowerShell Gallery. You can use the Install-Module cmdlet to download the DSC resource:

Install-Module Octopus

This will install the Octopus module. The module has a DSC resource, OctopusServer, that can be used to install and configure the Octopus server on a node. You can create a simple configuration as shown below.

Configuration OctopusServerConfiguration {
    param(
        [PSCredential] $credentials
    )
    Node localhost {
        OctopusServer OctoServer {
            ServerName  = $env:COMPUTERNAME
            Port        = 8085
            Credentials = $credentials
            Ensure      = "Present"
        }
    }
}
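
A minimal sketch of compiling and applying this configuration on the local node. Since the configuration takes a PSCredential, the sketch allows plain-text passwords via configuration data for demo purposes; use certificate-based credential encryption for anything beyond a demo.

# Allow plain-text credentials in the MOF (demo only)
$configData = @{
    AllNodes = @(
        @{ NodeName = "localhost"; PSDscAllowPlainTextPassword = $true }
    )
}

# Compile the configuration to a MOF file and apply it
$cred = Get-Credential -Message "Account for the Octopus server"
OctopusServerConfiguration -credentials $cred -ConfigurationData $configData -OutputPath .\OctopusServerConfiguration
Start-DscConfiguration -Path .\OctopusServerConfiguration -Wait -Verbose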


Monday, August 15, 2016

Combining DSC with ELK for effective infrastructure monitoring

DSC and event logs

DSC is the management platform in Windows PowerShell that enables deploying and managing configuration data for software services, and managing the environment in which these services run.
DSC provides a set of Windows PowerShell language extensions, new Windows PowerShell cmdlets, and resources that you can use to declaratively specify how you want your software environment to be configured. It also provides a means to maintain and manage existing configurations. When a configured resource runs on a target node, DSC first checks whether the node matches the desired state described by the resource; if it doesn't, DSC makes it so.
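
For example, you can quickly check from PowerShell whether a node still matches its configuration (Get-DscConfigurationStatus requires WMF 5.0):

Test-DscConfiguration -Verbose      # returns True if the node matches the desired state
Get-DscConfigurationStatus          # summary of the most recent DSC run (WMF 5.0+)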

But what if there is an error in the last DSC run? How can we have live monitoring and alerting with DSC? Well, that's what we'll be solving in this post. We'll use the combination of event logs and the ELK stack to create an infrastructure monitoring system for our environments. DSC logs every detail of its execution in the Windows event logs. These logs can be found using Event Viewer, by navigating to the channel Applications and Services Logs -> Microsoft -> Windows -> Desired State Configuration
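
The same events can also be pulled from PowerShell, for example (channel name as it appears on recent Windows versions):

Get-WinEvent -LogName "Microsoft-Windows-Dsc/Operational" -MaxEvents 20 |
    Select-Object TimeCreated, LevelDisplayName, Message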



We’ll use Winlogbeat in combination with Logstash to retrieve these logs and push them to an Elasticsearch instance. Later, using Kibana, we can create queries and visualizations to build a DSC dashboard for effective monitoring.

Installing and configuring ELK on Windows

Before starting the installation, we need to download the following software:

  • Elasticsearch
  • Logstash
  • Kibana
  • Winlogbeat

Download and extract all of these to their respective folders under a folder “ELK”. I have some additional beats installed on my machine, but for this demo we only need Winlogbeat. Also make sure Java is installed on the machine, since Elasticsearch requires it.

To install ELK, we need to set up and start the services for Elasticsearch, Logstash and Kibana. Each of these can be started by running its respective .bat file.

Elasticsearch

To install and configure Elasticsearch, navigate to the bin directory of elasticsearch and run the service.bat file with the argument “install”.
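
For example, from a console in the elasticsearch\bin directory (service.bat ships with the 2.x releases used here):

.\service.bat install
.\service.bat start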



Once the installation is completed, you can test whether everything is working by invoking the URL http://127.0.0.1:9200/. If everything is set up correctly, you should see a JSON result like:

{
  "name" : "Rebel",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.3.1",
    "build_hash" : "bd980929010aef404e7cb0843e61d0665269fc39",
    "build_timestamp" : "2016-04-04T12:25:05Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}
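
If you prefer the console to a browser, the same check can be done from PowerShell:

(Invoke-WebRequest -Uri "http://127.0.0.1:9200/" -UseBasicParsing).Content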

Logstash

We will make use of Winlogbeat to send events to Logstash. On receiving these events, Logstash will forward them to Elasticsearch using the elasticsearch output plugin.

To install the beats input plugin, run the logstash-plugin.bat file with the argument “install logstash-input-beats”.
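
For example, from the logstash\bin directory:

.\logstash-plugin.bat install logstash-input-beats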


The next step is to configure Logstash to listen on port 5044 for incoming beats connections and index them into Elasticsearch. You can do this using a Logstash configuration file. Create a logstash.conf file in the logstash/bin directory with the contents below.

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => "127.0.0.1:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

Now you can start Logstash with this configuration by passing the file name to the logstash.bat file.

.\logstash.bat -f logstash.conf

Winlogbeat

To send the DSC event logs to Logstash, we need to configure Winlogbeat to pull log information from the DSC operational channel. To do this, open the winlogbeat.yml file in the winlogbeat directory and add the DSC channel to the event_logs section, as given below.
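
A minimal sketch of the relevant winlogbeat.yml section (exact keys and indentation may vary slightly with the Winlogbeat version):

winlogbeat:
  event_logs:
    - name: Microsoft-Windows-Dsc/Operational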


Then change the output to Logstash instead of the default Elasticsearch option.
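
In the output section, comment out the elasticsearch block and point the logstash output at the port configured earlier, roughly like this (again, exact syntax depends on the Winlogbeat version):

output:
  logstash:
    hosts: ["127.0.0.1:5044"]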

Install the service by running the install-service-winlogbeat.ps1 script in the same directory.
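
For example, from an elevated PowerShell prompt in the winlogbeat directory (you may need to relax the execution policy first):

.\install-service-winlogbeat.ps1
Start-Service winlogbeat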

Kibana

Run the kibana.bat file in the bin directory under the kibana folder to start the service. Once the service is started, the Kibana site can be accessed at the URL http://127.0.0.1:5601.

You can configure the winlogbeat index pattern in Kibana by creating a new index pattern, winlogbeat-*, as given below.

Once the index is created, you can now go ahead and create queries and visualizations from those queries for your DSC dashboard.

For example:

You can create a visualization for the logs per computer by following the steps below.
  • Click on the Discover tab and create a query with the computer name and log level fields. You can do this by adding the computer_name and level fields as given below.

  • On the right side, you should be able to see the filtered results with the column names now.
  • Next, save the search by choosing the "Save Search" option in the top bar and give it an appropriate name.

  • Next, click on the Visualize tab and add a new line chart. Choose "From a saved search" as the source and select the search you saved earlier.

  • Click on the X-Axis bucket and choose "Date Histogram" as the aggregation.

  • Click on "Add sub-buckets", select "Split Lines" and choose "Terms" as the sub-aggregation with the field "computer_name".

  • Click the play button to see the graph. Save the visualization with a proper name.
  • Now go to the Dashboard tab and add this visualization to the dashboard to see the widget in action.





Thursday, August 4, 2016

Application deployments, best practices


With more and more teams adopting practices like continuous deployment and DevOps, there are a lot of questions about best practices for application deployment. I've been working on projects setting up continuous delivery and automated deployments as part of my work. Below are some of the practices that I follow for application deployments.
Automate
There is a lot of focus on automation nowadays, and plenty of tools and information are available to automate the manual actions in the deployment process. Almost every deployment automation tool allows complete customization and extensibility through the scripting languages used to carry out OS-specific tasks. This makes it easy to fit these tools into your process, even when you have a specific task to perform that is not available out of the box in the tool you are using. For .NET applications, for example, you can use PowerShell to do almost anything on Windows environments.
Build once, deploy it many times
To avoid situations like "it works in the Test environment but not in Production", you need to use the same binaries that were installed and tested on lower environments when deploying to higher environments. For automated deployments to be risk-free, we need the confidence that we are deploying the same package that was tested and approved on a lower environment. If you recompile the code base every time a deployment happens, a lot of hidden changes and trouble come along with that process, which makes deployments unstable.
Maintain a repository for the build artifacts
Things can go wrong, and sometimes you need to roll back to a known-working version. Having an artifact repository for your packages not only helps you use the same version in every environment, but also lets you locate old versions of an application without having to build them again. And it's a stable version that was working before!
Change configuration variables at deployment time not at build time
One of the common challenges with applications that migrate through the usual lifecycle of environments such as development, test and production is getting the configuration context right. Some of the most common examples are connection strings, trace modes, etc. There are multiple ways to handle this problem, one of them being configuration file transformations as part of the build process; this, however, introduces the same risk as rebuilding binaries for each environment. It's a good practice to use the deployment tool to apply environment-specific configuration changes to the application at deployment time. This keeps you flexible with respect to the latest changes in the environments.
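
As a simple illustration (the file path, variable names and connection string name below are hypothetical), a deployment step could patch the connection string in the deployed config file instead of baking it in at build time:

param(
    [string] $ConfigPath = ".\Web.config",
    [string] $ConnectionString = $env:DB_CONNECTION_STRING   # supplied per environment by the deployment tool
)

# Load the deployed configuration file and replace the environment-specific value
[xml] $config = Get-Content -Path $ConfigPath
$node = $config.SelectSingleNode("//connectionStrings/add[@name='Default']")
$node.connectionString = $ConnectionString
$config.Save((Resolve-Path $ConfigPath).Path)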
Don’t customize the deployment process or steps based on environments
It’s always good to treat your deployment process the same across environments. This helps you create more reliable and predictable deployments. A lot can go wrong when you try to make adjustments to the deployment process based on the environment. Let’s keep it simple :)