Wednesday, December 31, 2014

PowerShell script to wake up all sites in the SharePoint farm

$ErrorActionPreference = "Stop"

function Wakeup-Site{
     param(
          [Parameter(Mandatory = $true)]
          [String] $Url
     )
     $request = [System.Net.HttpWebRequest] [System.Net.WebRequest]::Create($Url)
     $request.UseDefaultCredentials = $true
     $request.Method = "GET"
     $request.Accept = "text/html"
     $request.Timeout = 120000 #2 minutes

     Write-Host "Waking up site at $Url"
     try
     {
          # Get the response of $request
          $response = [System.Net.HttpWebResponse] $request.GetResponse()
     }
     catch [Net.WebException]
     {
          Write-Warning $_.Exception.Message
     }
     finally
     {
          if ($response)
          {
               $response.Close()
               Remove-Variable -Name response
          }
     }
}

if((Get-PSSnapin "Microsoft.SharePoint.PowerShell" -ErrorAction SilentlyContinue) -eq $null){
     Add-PSSnapin "Microsoft.SharePoint.PowerShell"
}
Get-SPWebApplication | ForEach-Object {
     $_.Sites | ForEach-Object { Wakeup-Site $_.Url }
}


Tuesday, December 30, 2014

Code online with Monaco in VS online

One of the most interesting new features of Visual Studio Online is “Monaco”, the online code editor. With Monaco, you get a free, lightweight code editor that is accessible from any device or platform and lets you perform almost any task that can be done from the Visual Studio IDE.
To access Monaco, follow the steps given below.
  • Open the Azure management portal
  • Click on websites and choose the website that you want to edit using Monaco

  • Scroll down until you find the extensions tile in the website configuration section.
  • Click on the extensions tile and choose to add the Visual Studio Online extension

  • After adding the extension, browse to see the code on the portal


  • On the left navigation panel, choose the Git icon to clone the VS Online repository

  • Start editing the code from the browser.
  • After editing, you can see the Git notifications for the changed files.

  • Click on this icon and commit the changes
  • Publish the changes and access the website to see your changes live!



Sunday, December 28, 2014

DevOps world with visual studio online - Part 1

With Visual Studio Online, you can easily host your code in the cloud, access it from anywhere, and support a variety of development platforms. You can set up the cloud infrastructure using VS Online in a couple of minutes without having to set up any server infrastructure. It also allows you to choose the version control system that hosts the files (centralized – TFS or distributed – Git) and to manage the project in the cloud by planning and tracking work items with fully integrated tools for agile planning and portfolio management.
To create a project with a master branch in Git, you can create a new repository at a file location either by using Git Bash or Git Extensions, as shown below.

Once the repository is created, you can now go ahead and create a project in visual studio at the location. You can also choose to host the site on cloud if needed and test the application.





After creating the project, the next step is to add the files to the repository. You can use the git add command to stage the files before committing them. It's also good to add a .gitignore file to the repository to list the files that you don't want to include in the repository.
If you check the status of your repository at this point, it should look something like:

Once the .gitignore file is created, you can add and commit the files to the master branch using git add . followed by git commit -m "my commit message".
After the commit, your repository structure should look like this:

Once you have your master repository created on your local machine, you can set up a remote Git repository by using the following command:
git remote add origin "your vs online project url here"
To create a VS Online account, you can either use the Azure management portal or the Visual Studio Online website. I've used the Azure portal to manage my VS Online account.
After the account is created, you can create a new project in VS Online and choose the version control system you prefer, as shown below.

To push the local repository contents to this remote repository, add the remote as shown above and then push the contents with git push, as shown below.
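A minimal sketch of the push (origin is the remote added above, and master is the local branch created earlier):

# -u sets origin/master as the upstream for the local master branch
git push -u origin master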

Once published, the source code can be viewed from your VS online profile.

Next, we'll see how to create branches, set up builds and tests, and deploy to Azure, all using VS Online.



Friday, December 26, 2014

DSC - Extending to Azure virtual machines

Being a fan of PowerShell DSC, and having benefited from its ease of use, I wanted to test the DSC extension on my Azure infrastructure. I have my VM configured with the Azure PowerShell SDK and can use it for the DSC setup. The next step is to set up authentication for the subscription by using Azure AD.

To set up authentication and manage your subscription using Azure AD, follow the steps given below.
  • Use the Add-AzureAccount cmdlet and log in to the portal to select a subscription. If you have multiple subscriptions, you can use the Get-AzureSubscription cmdlet to view all your subscriptions and choose one. If you have a single subscription, the Add-AzureAccount cmdlet chooses the default subscription and uses it.
Add-AzureAccount –Credential (Get-Credential)
  • Later if you want to choose another subscription, you can use the Select-AzureSubscription cmdlet.
  • After authentication, Azure saves your credentials and closes the dialog window.
  • Once you have the account and authentication setup completed, you can create and work on the configurations.
  • To check whether the DSC extension cmdlets are available on the VM (the machine with the Azure PowerShell SDK installed), you can use the Get-Command cmdlet, as given below.
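A minimal sketch of that check (the module name Azure and the exact cmdlets returned depend on the version of the Azure PowerShell SDK installed):

Get-Command -Module Azure -Name *Dsc*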

Create and push configurations to the Azure VMs

You can create a configuration denoting the desired state for the VM by adding the resource configurations and saving it to a configuration file. In the sample below, I've used the Registry resource to add a key to the registry that stores the settings for my application.

Configuration MyAppConfig{
    Node localhost{
        Registry MyAppSetting{
            Key = "HKEY_LOCAL_MACHINE\SOFTWARE\MySite"
            ValueName = "AllowedFunctions"
            ValueData = "READ:CONTRIBUTE"
        }
    }
}

When it comes to saving a configuration on Azure, it's a bit different from on-premises machines. For Windows Azure, you need to publish the configuration to an Azure storage container. The default storage container for DSC is the windows-powershell-dsc container. You can provide another storage container by using the -ContainerName parameter of the Publish-AzureVMDscConfiguration cmdlet.
You can use the Get-Help cmdlet to check the options of the Publish-AzureVMDscConfiguration cmdlet for further customization.

Before publishing the configuration, you need to create a storage context to publish the configuration to. The storage context can be created by using the New-AzureStorageContext cmdlet. Once the context is created, you can publish the configuration to a container in that storage account using Publish-AzureVMDscConfiguration, as given below.
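A minimal sketch of the publish step, assuming the configuration above is saved as MyAppConfig.ps1 and a storage account named mydscstorage exists (both names are placeholders for your own values):

# Build a storage context from the account name and key
$storageKey = (Get-AzureStorageKey -StorageAccountName "mydscstorage").Primary
$context = New-AzureStorageContext -StorageAccountName "mydscstorage" -StorageAccountKey $storageKey

# Archive the configuration and upload it to the DSC container
Publish-AzureVMDscConfiguration -ConfigurationPath .\MyAppConfig.ps1 -StorageContext $context -ContainerName "windows-powershell-dsc" -Force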


As you can see from the screenshot, the configuration is archived and stored in the storage container referenced by the storage context you created.

The next step is to enact the configuration using the Set-AzureVMDscExtension cmdlet. You can use the cmdlet as given below.
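A minimal sketch of that step, reusing the $context created above; the cloud service name, VM name, and archive name are placeholders (the archive name is whatever Publish-AzureVMDscConfiguration produced, typically the script name with a .zip suffix):

Get-AzureVM -ServiceName "MyCloudService" -Name "MyVM" |
    Set-AzureVMDscExtension -ConfigurationArchive "MyAppConfig.ps1.zip" -ConfigurationName "MyAppConfig" -StorageContext $context |
    Update-AzureVM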

Once the configuration is applied, you can see the VM updated with the changes.

Sunday, November 30, 2014

DSC - Configuration delivery modes

In the configuration management life cycle in DSC, configuration delivery plays a major role. Once a configuration is authored, a delivery mode helps to enact the configuration on the target systems. These modes dictate how the configuration is enforced and corrected, as required. DSC supports two types of delivery modes: Push and Pull.

PUSH mode

In the PUSH configuration delivery mode, the configuration to be enacted gets pushed to the node, whether it is local or remote. This is unidirectional and immediate: the configuration is pushed from a management system to a target node, and the user initiates configuration processing via the Start-DscConfiguration cmdlet. This command immediately applies the configuration to the target, which can be specified with the -ComputerName parameter of Start-DscConfiguration. By default, the cmdlet uses the MOF files in the folder given by -Path to determine the target nodes, and the configuration pushed to each node is enacted immediately.
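A minimal sketch of a push (the configuration name MyConfig and node name MyServer1 are placeholders):

# Generate the MOF file(s) for the configuration
MyConfig -OutputPath .\MyConfig

# Push and enact the configuration on the target node
Start-DscConfiguration -Path .\MyConfig -ComputerName MyServer1 -Wait -Verbose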
The PUSH mode can be viewed as:

Note: Whether the target node is local or remote, it’s important to have the WinRM service up and running with the appropriate WinRM listeners.

Limitations of PUSH mode

The PUSH mode is easy to configure and use, as it does not require any infrastructure services, such as a central shared location to host the configurations. However, the Push mode is not scalable. Pushing configurations to hundreds or thousands of systems can take quite a long time and is limited by the resources available on the system from which the Start-DscConfiguration cmdlet is run.
To enact a configuration, we need all resource modules to be present on the target system. This represents another limitation of the Push mode. For example, if you have custom DSC resources used in a configuration script, the modules for these resources must be copied to the target systems prior to pushing the configuration.

PULL mode

Unlike the Push mode for configuration delivery, the Pull mode is bidirectional, which means that the pull server and the client both communicate with each other to manage the client configuration. In PULL mode, the DSC Local Configuration Manager (LCM) on the target node periodically polls the pull server for new configurations, keeping the server in compliance with the configuration and avoiding drift. The LCM periodically performs a compliance check on the node's configuration using a checksum of the MOF file. If the checksum is unchanged, nothing happens. Otherwise, the LCM requests the new configuration and, once it is transmitted to the client, the LCM enacts it and also ensures that any missing resources that are part of the configuration are downloaded as well.

With the pull server configured to use SMB (Server Message Block), the PULL mode can be viewed as:


To configure an SMB based pull server, you can use the New-SmbShare cmdlet to create a file share that can store the configuration and custom modules.

New-SmbShare -Name MySMBPullServer -Path E:\DSCShare -ReadAccess Everyone -Description "PULL server SMB"

Configuring the PULL client based on SMB

The download manager for SMB-based pull clients is DscFileDownloadManager. You can configure the LCM for SMB-based pull clients by creating a configuration like the one below and executing it on the target machine.

Configuration LCMSMBConfig {
    Node MyServer1 {
        LocalConfigurationManager {
            ConfigurationID = '9C3DE0FC-FCB2-491F-B5EF-22DE7A368DE6'
            RefreshMode = "Pull"
            DownloadManagerName = "DscFileDownloadManager"
            DownloadManagerCustomData = @{SourcePath = "\\MyServer1\DSCResources"}
            ConfigurationModeFrequencyMins = 60
            RefreshFrequencyMins = 30
        }
    }
}
LCMSMBConfig
Set-DscLocalConfigurationManager -Path .\LCMSMBConfig


Later, when creating PULL configurations for the clients to enact, you use the GUID specified in the ConfigurationID property of the target machine as the Node value, so that the generated MOF file name matches the client's configuration ID, as sketched below.
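A minimal sketch of that step, assuming the SMB source path configured above is used to host the MOF files; the File resource, path, and contents are purely illustrative:

Configuration PullClientConfig {
    # The node name is the client's ConfigurationID GUID, not its computer name
    Node '9C3DE0FC-FCB2-491F-B5EF-22DE7A368DE6' {
        File ExampleFile {
            DestinationPath = 'C:\Temp\Pulled.txt'
            Contents = 'Delivered via the SMB pull server'
            Ensure = 'Present'
        }
    }
}
PullClientConfig -OutputPath \\MyServer1\DSCResources

# The pull client also expects a checksum file next to the MOF
New-DscChecksum -ConfigurationPath \\MyServer1\DSCResources -OutPath \\MyServer1\DSCResources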

Monday, November 17, 2014

Desired State Configuration - Elements of the configuration

DSC is a feature built into the Windows operating system. It's based on standards such as CIM and WS-Management remote management offered by the operating system. With DSC you can move to a way of configuration management where you create a script that defines what the state of the server should be, instead of defining how to bring the server to the desired state. That means DSC uses a declarative syntax rather than an imperative one. This makes DSC scripts and configurations easy to understand and maintain by operations teams.
A sample DSC configuration for configuring the web applications in a SharePoint farm looks like:
Configuration SPSiteCollectionConfig {
    param($nodes)

    Import-DscResource -ModuleName xDSC_SPSiteCollection
    Import-DscResource -ModuleName xDSC_SPWebApplication
    Node ($nodes) {

        xSPWebApplication WebApplication{
            WebApplication = "WebApplication MYWebApp"
            Ensure = "On"
        }
       

        SPSiteCollection SiteCollection {           
            SiteUrl = 'http://MYWEBAPP.com/newSite'
            SiteName = 'New Site'
            SiteTemplate = 'BLANKINTERNET#2'
            ContentDatabaseName = 'CONTENT_NEWSite'
            Language = '1033'
            Ensure = 'Present'
            DependsOn = "[xSPWebApplication]WebApplication" 

        }
    }
}
SPSiteCollectionConfig

The configuration script starts with a Configuration block, which is used to give the configuration a meaningful name. The Configuration keyword is the core component of DSC that defines the desired configuration of a target system. Following the Configuration keyword, the Node keyword is used to specify the target system or systems on which the configuration should be applied by the LCM. You can also parameterize the values that are passed to the Node keyword if you want to decide the target systems at a later point in time. You can then pass the target system values to the configuration when you want to create MOF files from it.

Within the node block is the desired state of one or more DSC resources. The DSC resource module is what takes care of the "how" part of configuration management. DSC resource modules are written in an imperative style, which means the resource module implements all the details necessary to manage the configuration settings of an entity. However, the DSC configuration abstracts those imperative details from the end user through a declarative configuration script.

We now have to translate that into a format that the DSC Local Configuration Manager or the DSC Engine can understand. This is the Managed Object Format (MOF) representation of the configuration script. We can generate the MOF representation of the configuration script by simply loading the configuration into memory and then calling the configuration. 



In our example, we added the name of the configuration at the end of the script. This ensures that, when the script is executed, it generates a MOF file for each node and stores them in a folder that has the same name as the configuration.

Once the MOF files are generated, you can enact them using the PUSH mode by calling the Start-DscConfiguration cmdlet, as sketched below.
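A minimal sketch of those two steps, assuming the script above is saved as SPSiteCollectionConfig.ps1 and the farm server is named SPServer1 (both names are placeholders):

# Load the configuration into memory and call it to generate the MOF files
. .\SPSiteCollectionConfig.ps1
SPSiteCollectionConfig -nodes 'SPServer1'

# Push the generated MOF to the target node
Start-DscConfiguration -Path .\SPSiteCollectionConfig -Wait -Verbose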




Monday, November 3, 2014

Desired State Configuration - Introduction to CIM

PowerShell 4.0 introduces Desired State Configuration (DSC), a powerful new feature that makes it easier than ever to manage your Windows infrastructure, whether on premises or in the cloud. DSC is built on the Common Information Model (CIM) standard developed by the Distributed Management Task Force (DMTF) and uses Windows Remote Management (WinRM) as its communication mechanism. In this post we will look at what Windows Remote Management is and how it is used in the Windows OS world to enable standards-based management. In upcoming posts I'll explain how the CIM cmdlets in PowerShell use WinRM to work with management data from remote systems, and the role of DSC in managing remote machines.
Windows Remote Management is the Microsoft implementation of the WS-Management protocol. It uses SOAP (Simple Object Access Protocol) to exchange control information and data between capable devices over HTTP and HTTPS. The main goal of the WS-Management-based implementation is to provide a common way for these systems and components to access and exchange management information.
The WinRM service enables remote management of Windows systems. You can use the cmdlet given below to check the status of the service on a computer.
Get-Service -ComputerName 'MyComputerXXX' -Name WinRM
If the service is not enabled on the computer, you can create a listener based on the HTTP or HTTPS protocol on the desired port. To set up a WinRM listener, you can use the Set-WSManQuickConfig cmdlet. Once the WinRM listener is set up and working, you can use this infrastructure to access management information from remote servers.
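A minimal sketch of that setup (Set-WSManQuickConfig starts the WinRM service and creates an HTTP listener; Test-WSMan verifies that the listener responds; the computer name is a placeholder):

Set-WSManQuickConfig -Force
Test-WSMan -ComputerName 'MyComputerXXX'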

The CIM cmdlets introduced as part of PowerShell 3.0 make it easier to work with Windows Management Instrumentation. CIM (Common Information Model) is the DMTF standard for describing the structure and behavior of managed resources such as storage, network, or software components. The CIM standard defines two parts:

  • Schema: Provides the actual model descriptions
  • Infrastructure: Defines the standards for integrating multiple management models using object-oriented constructs and design.
The CIM cmdlets support multiple ways of exploring WMI and work well when you are working interactively. For example, tab expansion expands the namespace when you use the CIM cmdlets, which lets you explore namespaces that might not otherwise be very discoverable. You can even use this technique to drill down into namespaces. For example, to find all the classes in the root/Microsoft namespace, use the cmdlet below; try tab completion after typing the -Namespace parameter.
Get-CimClass -Namespace root/Microsoft
To retrieve instances of a class, you can use the Get-CimInstance cmdlet; to inspect a class definition, use Get-CimClass. For example, the command below lists all the properties of the MSFT_WmiError class:
Get-CimClass -ClassName MSFT_WmiError | ForEach-Object {$_.CimClassProperties}
You can also use the -ComputerName or -CimSession parameters to manage remote machines. It's recommended to create a CIM session and reuse it if you are performing a large set of operations on the remote server.
For example, you can use the commands given below to get the details of the processes running on a remote server:
$session = New-CimSession -ComputerName XXX
Get-CimInstance -ClassName Win32_Process -CimSession $session

Once the CIM sessions are set up and command execution is complete, we can remove the existing CIM sessions by using the Remove-CimSession cmdlet.
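Continuing the example above:

Remove-CimSession -CimSession $session
# Or, to remove every open CIM session in the current PowerShell session:
Get-CimSession | Remove-CimSession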

Monday, September 29, 2014

Continuous Deployment - Remote execution of PowerShell scripts from your build process


Including Windows PowerShell scripts as part of your build and deployment process gives you the flexibility to easily and effectively customize your packaging and deployment. With the right combination of environment configuration files (XML) and PowerShell scripts, you can automate almost anything. This post shows you how to run Windows PowerShell scripts remotely from a TFS build process.
Using CredSSP for second-hop remoting

One common issue with PowerShell remoting is the “double hop” problem. When a script is executed remotely on Server A and then tries to connect from Server A to Server B, the second connection cannot pass the user’s credentials to that server. As a result, the second server fails to authenticate the request and rejects the connection. To get around this issue, you need to use the CredSSP authentication mechanism in PowerShell.
Credential Security Service Provider (CredSSP) is a new security service provider that is available through the Security Support Provider Interface (SSPI) in Windows. CredSSP enables an application to delegate the user’s credentials from the client (by using the client-side SSP) to the target server (through the server-side SSP).
To set up the machines for CredSSP, follow the steps given below.

  • On the local computer that needs to connect to a remote server, execute the command Enable-WSManCredSSP -Role Client -DelegateComputer *. This makes the local computer act as the CredSSP client so that it can delegate credentials to the remote computer, which acts as the server.
  • On the remote server, run the command Enable-WSManCredSSP -Role Server.
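With both sides configured, a second-hop call can then be made by explicitly requesting CredSSP authentication. A minimal sketch (ServerA, ServerB, and the inner command are placeholders):

$cred = Get-Credential
Invoke-Command -ComputerName ServerA -Authentication Credssp -Credential $cred -ScriptBlock {
    # The credentials were delegated to ServerA, so this second hop succeeds
    Invoke-Command -ComputerName ServerB -ScriptBlock { hostname }
}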

Remote execution of the PowerShell script from TFS build

To execute PowerShell scripts from C# code, you need to make use of the APIs in the System.Management.Automation.dll assembly. You can use a PowerShell instance directly, but a better way is to create a Runspace instance. Every PowerShell instance works in a Runspace, and you can have multiple Runspaces to connect to different remote hosts at the same time.
To get remote execution working properly, you first need to construct an instance of WSManConnectionInfo and pass it to the RunspaceFactory.CreateRunspace(..) method. You can use the following code snippet to construct the connection object.

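// Note: these snippets assume using directives for System.Management.Automation,
// System.Management.Automation.Runspaces, System.Management.Automation.Host,
// System.Security, System.Globalization and System.Diagnostics, plus a reference
// to the System.Management.Automation assembly. ScriptOutput and
// ConstructScriptExecutionCommand are the author's own helper types/methods.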
private PSCredential BuildCredentials(string username, string domain, string password)
{
    PSCredential credential = null;
    if (String.IsNullOrWhiteSpace(username))
    {
        return credential;
    }
    if (!String.IsNullOrWhiteSpace(domain))
    {
        username = domain + "\\" + username;
    }
    var securePassword = new SecureString();
    if (!String.IsNullOrEmpty(password))
    {
        foreach (char c in password)
        {
            securePassword.AppendChar(c);
        }
    }
    securePassword.MakeReadOnly();
    credential = new PSCredential(username, securePassword);
    return credential;
}

private WSManConnectionInfo ConstructConnectionInfo()
{
    // The connection info values were determined by inspecting the output of
    // the following command on the target machine:
    //
    //     dir WSMan:\localhost\Listener\Listener_*\*
    //
    // Output:
    //     Name          Value       Type
    //     ----          -----       ----
    //     Address       *           System.String
    //     Transport     HTTP        System.String
    //     Port          5985        System.String
    //     URLPrefix     wsman       System.String

    const string SHELL_URI = "http://schemas.microsoft.com/powershell/Microsoft.PowerShell";
    var credentials = BuildCredentials("username", "domain", "password");
    var connectionInfo = new WSManConnectionInfo(false, "remoteserver", 5985, "/wsman", SHELL_URI, credentials);
    connectionInfo.AuthenticationMechanism = AuthenticationMechanism.Credssp;
    return connectionInfo;
}
Next, pass this connection object to the RunspaceFactory.CreateRunspace method and invoke the PowerShell script as given below.
var scriptOutput = new ScriptOutput();
var tfsBuildHost = new TFSBuildHost(scriptOutput);
var connection = ConstructConnectionInfo();
 
using (var runspace = RunspaceFactory.CreateRunspace(tfsBuildHost, connection))
{
    runspace.Open();
    InvokePSScript(runspace, scriptOutput);
    runspace.Close();
}
 
 
private void InvokePSScript(Runspace runspace, ScriptOutput scriptOutput)
{
    using (var ps = PowerShell.Create())
    {
        ps.Runspace = runspace;
        var commandToExecute = ConstructScriptExecutionCommand("path to script file", "parameter1", "parameter2");
        ps.AddScript(commandToExecute);
        try
        {
            var results = ps.Invoke();
            foreach (var result in results)
            {
                if (result.BaseObject != null)
                {
                    scriptOutput.WriteResults(result.BaseObject.ToString());
                }
            }
        }
        catch(Exception)
        {
            if (ps.InvocationStateInfo.State != PSInvocationState.Failed)
            {
                return;
            }
            scriptOutput.WriteError(ps.InvocationStateInfo.Reason.Message); 
        }
    }
}

You may also have noticed the TFSBuildHost object in the code. This class provides communication between the Windows PowerShell engine and the user. The implementation details are given below.
internal class TFSBuildHost : PSHost
{
    private ScriptOutput _output;
    private CultureInfo originalCultureInfo =
        System.Threading.Thread.CurrentThread.CurrentCulture;
    private CultureInfo originalUICultureInfo =
        System.Threading.Thread.CurrentThread.CurrentUICulture;
    private Guid _hostId = Guid.NewGuid();
    private TFSBuildUserInterface _tfsBuildInterface;
 
    public TFSBuildHost(ScriptOutput output)
    {
        this._output = output;
        _tfsBuildInterface = new TFSBuildUserInterface(_output);
    }
    public override System.Globalization.CultureInfo CurrentCulture
    {
        get { return this.originalCultureInfo; }
    }
    public override System.Globalization.CultureInfo CurrentUICulture
    {
        get { return this.originalUICultureInfo; }
    }
    public override Guid InstanceId
    {
        get { return this._hostId; }
    }
    public override string Name
    {
        get { return "TFSBuildPowerShellImplementation"; }
    }
    public override Version Version
    {
        get { return new Version(1, 0, 0, 0); }
    }
    public override PSHostUserInterface UI
    {
        get { return this._tfsBuildInterface; }
    }
    public override void NotifyBeginApplication()
    {
        _output.Started = true;
    }
    public override void NotifyEndApplication()
    {
        _output.Started = false;
        _output.Stopped = true;
    }
    public override void SetShouldExit(int exitCode)
    {
        this._output.ShouldExit = true;
        this._output.ExitCode = exitCode;
    }
    public override void EnterNestedPrompt()
    {
        throw new NotImplementedException(
                "The method or operation is not implemented.");
    }
    public override void ExitNestedPrompt()
    {
        throw new NotImplementedException(
                "The method or operation is not implemented.");
    }
} 

internal class TFSBuildUserInterface : PSHostUserInterface
{
    private ScriptOutput _output; 
    private TFSBuildRawUserInterface _tfsBuildRawUi = new TFSBuildRawUserInterface();
 
    public TFSBuildUserInterface(ScriptOutput output)
    {
        _output = output;
    }
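
    // Note: PSHostUserInterface also defines other abstract members (Prompt,
    // PromptForChoice, PromptForCredential, ReadLineAsSecureString, WriteProgress,
    // WriteVerboseLine and WriteWarningLine) that must be overridden for the class
    // to compile; for a non-interactive build host they can simply throw
    // NotImplementedException or forward to _output.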
    public override PSHostRawUserInterface RawUI
    {
        get { return this._tfsBuildRawUi; }
    }
    public override string ReadLine()
    {
        return Environment.NewLine;
    }
    public override void Write(string value)
    {
        _output.Write(value);
    }
    public override void Write(
                                ConsoleColor foregroundColor,
                                ConsoleColor backgroundColor,
                                string value)
    {
        //Ignore the colors for TFS build process.
        _output.Write(value);
    }
    public override void WriteDebugLine(string message)
    {
        Debug.WriteLine(String.Format("DEBUG: {0}", message));
    }
    public override void WriteErrorLine(string value)
    {
        _output.WriteError(value);
    }
    public override void WriteLine()
    {
        _output.WriteLine();
    }
    public override void WriteLine(string value)
    {
        _output.WriteLine(value);
    }
    public override void WriteLine(ConsoleColor foregroundColor, ConsoleColor backgroundColor, string value)
    {
        //Ignore the colors for TFS build process.
        _output.WriteLine(value);
    }
}

internal class TFSBuildRawUserInterface : PSHostRawUserInterface
{
    private ConsoleColor _backColor = ConsoleColor.Black;
    private ConsoleColor _foreColor = ConsoleColor.White;
    private Size _bufferSize = new Size(300, 900);
    private Size _windowSize = new Size(100, 400);
    private Coordinates _cursorPosition = new Coordinates { X = 0, Y = 0 };
    private Coordinates _windowPosition = new Coordinates { X = 50, Y = 10 };
    private int _cursorSize = 1;
    private string _title = "TFS build process";
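
    // Note: PSHostRawUserInterface also defines abstract methods (FlushInputBuffer,
    // GetBufferContents, ReadKey, ScrollBufferContents and SetBufferContents) that
    // must be overridden for the class to compile; throwing NotImplementedException
    // is sufficient when the build process never reads from the console.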
     
    public override ConsoleColor BackgroundColor
    {
        get { return _backColor; }
        set { _backColor = value; }
    }
    public override Size BufferSize
    {
        get { return _bufferSize; }
        set { _bufferSize = value; }
    }
    public override Coordinates CursorPosition
    {
        get { return _cursorPosition; }
        set { _cursorPosition = value; }
    }
    public override int CursorSize
    {
        get { return _cursorSize; }
        set { _cursorSize = value; }
    }
    public override ConsoleColor ForegroundColor
    {
        get { return _foreColor; }
        set { _foreColor = value; }
    }
    public override bool KeyAvailable
    {
        get { return false; }
    }
    public override Size MaxPhysicalWindowSize
    {
        get { return new Size(500, 2000); }
    }
    public override Size MaxWindowSize
    {
        get { return new Size(500, 2000); }
    }
    public override Coordinates WindowPosition
    {
        get { return _windowPosition; }
        set { _windowPosition = value; }
    }
    public override Size WindowSize
    {
        get { return _windowSize; }
        set { _windowSize = value; }
    }
    public override string WindowTitle
    {
        get { return _title; }
        set { _title = value; }
    }
}