
DevOps Training provided by me





Having brought the environment creation time for the Bangalore International Airport project down from three and a half days to 3 hours, all in one click, I realized that this was possible due to the magic of "DevOps".

I wanted to share my enthusiasm with a wider group and called for applications for a DevOps Enablement program. I also wanted it to be an experiment in how well it would work with non-developer or non-tech folks, and hence opened it up to people from sales as well. To my amazement, they too were able to pick up these skills easily.


Below is the feedback from the attendees of my DevOps Enablement course.



First of all, you are an amazing teacher. I thought I would attend one session, but now I am hooked. Thank you for introducing us to another side of Ops. In 2 classes you have already managed to answer a lot of questions (cloud, VMs, hypervisors) that have been lingering at the back of our minds. Keep up the good work! I look forward to all the upcoming sessions.
— Genevieve Miranda



"The DevOps workshop is really helpful for me. The one workshop I attended was interesting. But I need some clarity on wha you covered on first day workshop."
— Vikas Jeevan Suryavanshi


"Venu you Rock - hanks for your time and taking us through DevOps sessions. Its very useful and we learnt new things. Appreciate your effort."
— Nandish D.


It's good to know and learn new things, like the comparison between VMs and the cloud. It is a great help to us in maintaining infrastructure through automation using tools like Vagrant, Ansible, etc. Interesting to work with Venu.
— Srinivasa Reddy Avula


This training is really helpful for me. I'm from the PeopleSoft team. This training will help me learn how to automate our infrastructure.
— Prasenjit 


The DevOps training initiative is very good, and we need more trainings like it.
— Sajish




This workshop is helping me understand the basics of DevOps. For devs who need to do some automation in their projects, this will definitely help.
— Gopinath

Awesome and very helpful. Hands on session helped clear the doubts. 
— Debabrata Kumar 

Informative workshop for beginners. 
— Astha Jaiswal 


Great insight and informative sessions, keep up the enthusiasm and good work. 
— Raj Saxena


It's nice, as I've started learning about the different infrastructure solutions. It's a great initiative.
— Shubhamay Das



Great initiative.
— Dixith Kumar

It has been a great initiative having the DevOps bootcamp. Having short sessions of a couple of hours is what I liked. It's helping me understand many basic concepts that I couldn't get to, for lack of time, while working on our projects.
— Nishkarsh


Catering to a very diverse set of people is helping a lot.
Running the show with lots of interaction is also helping a lot.
— Ramalingam Sangarasubramanian

Learning OpenStack - The Easy Way



Dear Friends,


Our Dream has come true!


It takes 9 months for a mother to bring another life into the world. That is the amount of time it has taken me to demystify the otherwise terse and abstract concepts of cloud computing. And thanks to Packt Publishing for helping me impart this knowledge in the most engaging way.


The full video course is out; you can get more details about it here.



You can get more details about the course here without registering.

The Overview video is out for all.

It was a tough negotiation with the publisher to give out this valuable video for free. I hope it will get you started on your "Infrastructure as Code" journey.


Three points I would like to share, which might help you:
  1. People are watching us! Packt Publishing approached me after watching the screencasts I had posted on YouTube (which, until then, I was certain nobody was watching).
  2. The good news is that video course royalties are 3 times those of a book! The bad news is that it is 3 times the effort :-)
  3. The publishing field has also taken the Agile approach.

A course has several sections, and sections have chapters. The script for every section is reviewed just like a book's, and then the visuals go through another stringent review process. After several iterations, we get the permission to go ahead and record the video, which is then handed over to the publisher for final editing. The section gets published, after which we repeat these steps for the next section. (Meeting deadlines was tough; having signed the contract, most of the work was done in between my travels to 5 cities in Brazil, followed by the EU and Israel... needless to say, most of the work in those 9 months was done in airports and on planes.)


I wish I could pen down the names of all those amazing people who've made this possible but since they already know who they are, I would just say thank you for having helped me on this arduous journey.

Feel free to spread the word about this course; I have a discount code for us and followers on social media.

#LearningOpenStack


Discount Code: LOpStk25


Expiry Date: 15th October 2016
 

Creating an SSL certificate and adding it to Apache

Just a brain dump; will format it later.

https://letsencrypt.org/getting-started/

~/letsencrypt$ sudo ./letsencrypt-auto certonly -d venumurthy.com -d www.venumurthy.com

Congratulations! Your certificate and chain have been saved at

/etc/letsencrypt/live/obhiyo.com/fullchain.pem

The live directory contains: cert.pem (the server certificate), chain.pem (the intermediates), fullchain.pem (cert + chain), privkey.pem (the private key)


vim sites-enabled/000-default.conf


<VirtualHost 54.169.00.52:443>

        ServerName www.venumurthy.com

        ServerAdmin contact@venumurthy.com
        DocumentRoot /var/www/venumurthy
        SSLEngine on
        SSLCertificateFile /etc/letsencrypt/live/venumurthy.com/cert.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/venumurthy.com/privkey.pem
        SSLCertificateChainFile /etc/letsencrypt/live/venumurthy.com/chain.pem



        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined

</VirtualHost>
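
Optionally (my addition, a common companion to the setup above, not part of the original brain dump), plain HTTP traffic can be redirected to HTTPS with a port-80 VirtualHost:

<VirtualHost *:80>
        ServerName www.venumurthy.com
        # mod_alias redirect: send everything to the HTTPS site
        Redirect permanent / https://www.venumurthy.com/
</VirtualHost>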


vim /etc/apache2/sites-available/default-ssl.conf

                  SSLEngine on

                #   A self-signed (snakeoil) certificate can be created by installing
                #   the ssl-cert package. See
                #   /usr/share/doc/apache2/README.Debian.gz for more info.
                #   If both key and certificate are stored in the same file, only the
                #   SSLCertificateFile directive is needed.
                SSLCertificateFile     /etc/letsencrypt/live/venumurthy.com/cert.pem
                SSLCertificateKeyFile  /etc/letsencrypt/live/venumurthy.com/privkey.pem


                #   Server Certificate Chain:
                #   Point SSLCertificateChainFile at a file containing the
                #   concatenation of PEM encoded CA certificates which form the
                #   certificate chain for the server certificate. Alternatively
                #   the referenced file can be the same as SSLCertificateFile
                #   when the CA certificates are directly appended to the server
                #   certificate for convinience.
                SSLCertificateChainFile /etc/letsencrypt/live/venumurthy.com/chain.pem

                #   Certificate Authority (CA):
                #   Set the CA certificate verification path where to find CA
                #   certificates for client authentication or alternatively one
                #   huge file containing all of them (file must be PEM encoded)
                #   Note: Inside SSLCACertificatePath you need hash symlinks
                #                to point to the certificate files. Use the provided
                #                Makefile to update the hash symlinks after changes.
                #SSLCACertificatePath /etc/ssl/certs/
                SSLCACertificateFile /etc/letsencrypt/live/venumurthy.com/fullchain.pem



sudo a2ensite default-ssl.conf

sudo service apache2 reload
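
Two follow-ups worth noting (my additions, not part of the original brain dump): the default-ssl site needs mod_ssl enabled, and Let's Encrypt certificates expire after 90 days, so renewal should be automated:

# Enable mod_ssl if it is not already on (otherwise SSLEngine is an unknown directive)
sudo a2enmod ssl
sudo service apache2 restart

# Example cron entry: try renewal twice a day and reload Apache when it succeeds
# (adjust the path; the session above ran from ~/letsencrypt)
0 4,16 * * * /path/to/letsencrypt-auto renew --quiet && service apache2 reload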





Choosing Apache mod_wsgi over Eventlet in OpenStack Kilo and Liberty

While installing the OpenStack Liberty release, you disable the keystone service from starting up automatically, and you also see a note such as:

"In Kilo and Liberty releases, the keystone project deprecates eventlet in favor of a separate web server with WSGI extensions. This guide uses the Apache HTTP server with mod_wsgi to serve Identity service requests on port 5000 and 35357. By default, the keystone service still listens on ports 5000 and 35357. Therefore, this guide disables the keystone service. The keystone project plans to remove eventlet support in Mitaka."

 
The reasons behind this are:

Eventlet, by design, performs well in networked environments, but it handles everything in a single thread. Due to Apache's ability to do multi-threading, it is the better choice as the frontend.

Keystone depends on Apache/web-server modules to handle federated identity (validation of SAML assertions, etc.) and similar single sign-on type authentication.

Eventlet has proven problematic for some workloads within Keystone, notably that a number of actions cannot yield (either due to gaps in Eventlet, or because a dependent library uses C bindings that Eventlet cannot work with).

Besides, Apache has many modules available that can be put to use.
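
For reference, here is roughly what the Liberty install guide's /etc/apache2/sites-available/wsgi-keystone.conf looks like (a sketch; check the guide for your distribution's exact paths):

Listen 5000
Listen 35357

<VirtualHost *:5000>
    # Identity requests are served by keystone WSGI processes managed by Apache,
    # replacing the eventlet-based keystone daemon
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLog ${APACHE_LOG_DIR}/keystone.log
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLog ${APACHE_LOG_DIR}/keystone.log
</VirtualHost>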

Vagrant on steroids


This is our life when we are working on automating some really complicated machine-building and provisioning procedures, i.e. while developing playbooks in Ansible, cookbooks in Chef, or manifests in Puppet.

It is not easy to fail fast and fix early when the script has to download all the dependencies again and again. Adding to the odds, the dependencies may be coming down over a low-bandwidth connection.

Even though Vagrant makes bringing up VMs and managing them faster, provisioning (using Ansible, Chef, Puppet, etc.) can take inordinately long when it involves, as it usually does, downloading packages onto the VMs. And it gets painful when you have to download a full stack of libraries just to test your VM.

To help overcome this issue, we can cache the dependencies and thereby test the configuration scripts or recipes faster. The following should set you up (assuming you already have Vagrant installed):

1. Install vagrant-cachier plugin


vagrant plugin install vagrant-cachier

2. Install the vagrant-proxyconf plugin

A Vagrant plugin that configures the virtual machine to use specified proxies.

vagrant plugin install vagrant-proxyconf

3. Install a proxy server


I am on a Mac and hence using SquidMan. To install it, use the one-line command below (Homebrew should be run without sudo):

brew cask install squidman



Configuring to use them


1. Configuring vagrant-cachier


Edit the Vagrantfile to include these lines:

 if Vagrant.has_plugin?("vagrant-cachier")
    config.cache.scope              = :box
    config.cache.synced_folder_opts = {
        type:          :nfs,
        mount_options: ['rw', 'vers=3', 'tcp', 'nolock']
    }
  end 

2. Configuring vagrant-proxyconf

Change the IP appropriately; it should point at the host machine where the Squid proxy is listening.

  if Vagrant.has_plugin?("vagrant-proxyconf")
    # Start squid-man Mac OSX proxy on port 8081
    # config.proxy.enabled = false
    config.proxy.http     = "http://10.211.55.2:8081"
    config.proxy.https    = "http://10.211.55.2:8081"
    config.proxy.no_proxy = "localhost,127.0.0.1"
  end

3. Launch SquidMan and configure it

In the General tab, set the HTTP port to 8081 (the port used in the Vagrantfile above) and change the IP appropriately. In the Clients tab, add the allowed client range for your subnet (e.g. 10.211.55.0/24 for the host IP above). Then start SquidMan.


Confirming that it is all working

1. If your guest is Ubuntu, for example, you will start seeing a lot of cached packages in the following folder:
~/.vagrant.d/cache/parallels/ubuntu-14.04/apt/

2. Run vagrant up and check Squid's access logs (open SquidMan, press Command+T, and click on "Access log").

3. After these configurations are in place, provisioning gets faster from the second run onwards; on the first run, the dependencies get cached.

And for dependencies like the JDK (which don't get cached, since they are downloaded over HTTPS), you might want to create an image by snapshotting a VM that has the OS and these dependencies already installed. This lets you skip their installation every time you run the playbooks or cookbooks while developing your recipes.
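
One way to bake such an image with Vagrant (a sketch; the box name is made up, and with the Parallels provider this needs the vagrant-parallels plugin's packaging support):

# Package the current, fully provisioned VM as a new base box
vagrant package --output ubuntu-14.04-jdk.box

# Register it locally and point the Vagrantfile at it
vagrant box add ubuntu-14.04-jdk ubuntu-14.04-jdk.box
# in the Vagrantfile: config.vm.box = "ubuntu-14.04-jdk"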


Hope this helps. (I am finally going to hit the Publish button :) after so many days of slacking on this. Do let me know if you need clarity on anything.)


Ansible - Error - stderr: E: There are problems and -y was used without --force-yes

In case your task is to install some packages and it errors out as below:

  

- name: Install linux-headers
  apt: pkg={{item}} 
       state=installed 
       install_recommends=yes 
       update_cache=yes
  with_items: 
      - linux-headers-generic
      - dkms
  sudo: yes




failed: [parallelsUbuntu] => (item=linux-headers-generic,dkms) => {"failed": true, "item": "linux-headers-generic,dkms"}
stderr: E: There are problems and -y was used without --force-yes

stdout: Reading package lists...
Building dependency tree...
Reading state information...
The following extra packages will be installed:
  cpp fakeroot gcc libfakeroot linux-headers-3.13.0-63
  linux-headers-3.13.0-63-generic patch
Suggested packages:
  cpp-doc dpkg-dev debhelper gcc-multilib manpages-dev autoconf automake1.9
  libtool flex bison gdb gcc-doc diffutils-doc
The following NEW packages will be installed:
  cpp dkms fakeroot gcc libfakeroot linux-headers-3.13.0-63
  linux-headers-3.13.0-63-generic linux-headers-generic patch
0 upgraded, 9 newly installed, 0 to remove and 17 not upgraded.
Need to get 0 B/9846 kB of archives.
After this operation, 78.0 MB of additional disk space will be used.
WARNING: The following packages cannot be authenticated!
  cpp gcc patch dkms libfakeroot fakeroot linux-headers-3.13.0-63
  linux-headers-3.13.0-63-generic linux-headers-generic

msg: '/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"   install 'linux-headers-generic' 'dkms'' failed: E: There are problems and -y was used without --force-yes


FATAL: all hosts have already failed -- aborting

Solution

The error occurs because, as the WARNING in the output shows, the packages cannot be authenticated, and apt-get refuses to proceed with -y alone. The solution is to just add the option force=yes, so the complete task becomes:

  
- name: Install linux-headers
  apt: pkg={{item}}
       state=installed
       install_recommends=yes
       update_cache=yes
       force=yes
  with_items:
      - linux-headers-generic
      - dkms
  sudo: yes


which is equivalent to what we would have done manually on the terminal:

  
sudo apt-get install some-deb -y --force-yes
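
(A side note, my addition: force=yes bypasses APT's package authentication entirely, so the cleaner long-term fix is to install the repository's missing signing key. Newer Ansible releases also expose a narrower option for exactly this case:)

# hypothetical variant for newer Ansible versions, scoped to authentication only
apt: pkg={{item}} state=installed allow_unauthenticated=yes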


Every time I use Ansible, my admiration for it only increases; such advanced concepts have been implemented in such simple ways!

Software Defined Environment - Environments on Demand

Get the Development, QA, Staging or Production Environment you need at the click of a button.



The current situation


It wouldn't be a bold statement to say that all software's ultimate goal is to enhance the customer experience. How many times have we read comments like the following on app stores?

 “Great app, but I can only give it three stars until the developers add ...”

But the development team's side of the story is:

    “I am waiting for the environment to test the code with new features”

Continuous Delivery and Continuous Integration can help release software updates more frequently and with almost no manual intervention, but there are some bottlenecks in the way. Here are a few:

Delay in getting the Environments


  Lack of self-provisioning creates a dependency on the IT department.

Lack of easily customizable Environments


Environments are needed for development, testing and staging with new features or updated dependencies.

Manual Provisioning of Environments


Because it is repetitive and involves several steps, we cannot leverage the power of automated deployments and CI.

And the hilarious but unfortunately true risk of

“Oh! But it works on my laptop!”

Not being able to recreate environments easily and consistently means performance issues cannot be reproduced, and code or updates cannot be released to production confidently!

Inconsistent environments arise in scenarios like this: a new update is released to the production system, and the system admin puts in configuration or dependencies that only he or she is aware of to get the app working. Similarly, the developer puts in unique settings to get the code working on his or her workstation or laptop. As a result, every server becomes a "work of art", as unique as a snowflake. Needless to say, inconsistent environments make it very difficult to determine why an application breaks when it is promoted to the next environment, wasting the development and operations teams' time in determining whether an issue is due to the source code or the environment configuration.

What is an Environment?


It is not just an image or template of a virtual machine, but all the compute, storage, network and other resources (XaaS) that are required to host your application. Quite simply put: everything you can find inside the server room!

Environments on Demand at the click of a button


A solution that could give you the Development, QA, Staging or Production environment at the click of a button would remove all the bottlenecks and risks discussed earlier, and at the same time orchestrate Software-Defined Compute, Networking, Storage, Security and so on to provide a smart infrastructure that is aware of the resources needed by the application, and is adaptive and responsive to workloads as business demand fluctuates. All this while being easy to customize and simple to use!

Please watch this video demonstrating how it works.

How can Software Defined Environments help?


Self-provisioning empowers developers, QAs and others to bring up the environments they need, cutting delays due to dependencies and bureaucracy!

Push-button deployments make it easy to get environments and run automated tests, on any version of the app in any environment, which helps in getting faster feedback. Using policies, teams can be allocated quotas of resources, and authorization to use them can be fine-grained.

Everything can be parameterized, so getting the n.x update of a dependency into an environment is just a click away (see the sketch below).
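
As an illustration (a hypothetical OpenStack Heat fragment; the parameter and package names are invented), the dependency version becomes a single input of the environment definition:

heat_template_version: 2015-04-30

parameters:
  app_version:
    type: string
    default: "1.2"    # the "n.x" of the dependency, chosen at launch time

resources:
  app_server:
    type: OS::Nova::Server
    properties:
      image: ubuntu-14.04
      flavor: m1.small
      user_data:
        str_replace:
          # install the requested version of the app when the server boots
          template: |
            #!/bin/bash
            apt-get install -y myapp=$version
          params:
            $version: { get_param: app_version }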

Another advantage of a Software Defined Environment is that automated orchestration vastly reduces the possibility of human error and makes it possible to scale far beyond what people could do manually.

Reduce the cost of cloud ownership by sharing resources and reusing existing hardware, and cut time to market drastically.



Increase the quality of service by improving application performance through auto-scaling, achieved by infrastructure that is intelligent and adaptive to the needs of the app.

Phoenix Environments

           
Provide resilient, fault-tolerant environments which can bring up your infrastructure in one click.

Mitigate the risks of inconsistent environments by providing consistent environments throughout the software development life cycle, i.e. from a developer's laptop all the way to the production systems.

Embrace the following advantages of Infrastructure as Code, making your infrastructure IMMUTABLE!
           

Extend the advantages of version control from your app to your infrastructure.

Auto-deployment will cut out the repetitive and manual process of configuring all infrastructure resources.

Get a unified view simplifying the monitoring and management of all resources.

As you test the app at scale, and once it is deployed, hundreds of servers might need to be brought up on demand to scale the app. Using this approach, we cut out the repetitive process and copy the configurations to more servers, virtual machines, switches, routers and storage servers instantly.

What do we do with our existing cloud investments?


As we are going to use open source software like OpenStack, GoCD and such, it will be interoperable with existing private cloud technologies like VMware, Xen and others, allowing us to reuse existing hardware capital, get started with as little investment as possible, and scale out by seamlessly adding more capacity whenever utilization is expected to climb.

Please watch this video demonstrating how it works or let us know and we would be delighted to take you on this journey!

Welcome to the era of the Software Defined Economy!



You can see the accompanying slides here. 





The code has been open sourced here. Enjoy!

I will be presenting this at the India Cloud Expo conference.