Providing Solutions For Life

Amazon CloudDay Bangalore - 17 May

It was a jam-packed event, and thanks to Amazon for the invite.

The Agenda 



Almost all the tracks were overflowing and informative.

Thanks for the Amaze-On, Amazon!


DevOps Training provided by me





Having brought down the environment creation time for the Bangalore International Airport project from three and a half days to three hours, all with a single click, I realized that this was possible due to the magic of "DevOps".

I wanted to share my enthusiasm with a wider group and called for applications for a DevOps Enablement program. I also wanted it to be an experiment in how well it would work with non-developer or non-tech folks, and hence opened it up to people from sales as well. To my amazement, they too were able to pick up these skills easily.


Below is the feedback from the attendees of my DevOps Enablement course.



First of all, you are an amazing teacher. I thought I would attend one session, but now I am hooked. Thank you for introducing us to another side of Ops. In 2 classes you have already managed to answer a lot of questions (cloud, VMs, hypervisors) that have been lingering at the back of our minds. Keep up the good work! I look forward to all the upcoming sessions.
— Genevieve Miranda



"The DevOps workshop is really helpful for me. The one workshop I attended was interesting. But I need some clarity on wha you covered on first day workshop."
— Vikas Jeevan Suryavanshi


"Venu you Rock - hanks for your time and taking us through DevOps sessions. Its very useful and we learnt new things. Appreciate your effort."
— Nandish D.


It's good to know and learn new things, comparing VMs and the cloud. It is a great help to us in maintaining infrastructure through automation, using tools like Vagrant, Ansible, etc. Interesting to work with Venu.
—Srinivasa Reddy Avula


This training is really helpful for me. I'm from the PeopleSoft team. This training will help me learn how to automate our infrastructure.
— Prasenjit 


The DevOps training initiative is very good, and we need more trainings like this.
— Sajish




This workshop is helping me understand the basics of DevOps. For devs who need to do some automation in a project, this will definitely help.
— Gopinath

Awesome and very helpful. The hands-on sessions helped clear the doubts.
— Debabrata Kumar 

Informative workshop for beginners. 
— Astha Jaiswal 


Great insight and informative sessions, keep up the enthusiasm and good work. 
— Raj Saxena


It seems nice as I've started learning about the different infrastructure solutions. It's a great initiative.
— Shubhamay Das



Great Initiative. — Dixith Kumar

It has been a great initiative having the DevOps bootcamp. Having short sessions of a couple of hours is what I liked. It's helping me understand many basic concepts that I couldn't pick up due to lack of time while working on our projects.
 — Nishkarsh


Considering the very diverse set of people is helping a lot.
Running the show with more interaction is also helping a lot.
— Ramalingam Sangarasubramanian

Learning OpenStack - The Easy Way



Dear Friends,


Our Dream has come true!


It takes 9 months for a mother to bring a new life into the world. That is the amount of time it has taken me to demystify the otherwise terse and abstract cloud computing concepts. And thanks to Packt Publishing for helping me impart this knowledge in the most engaging way.


The full video course is out; you can get more details about it here.



You can also get more details about the course here, without registering.

The Overview video is out for all.

It was a tough negotiation with the publisher to give out this valuable video for free. I hope it gets you started on your "Infrastructure as Code" journey.


Three points I would like to share, which might help you:
  1. People are watching us! Packt Publishing approached me after watching the screencasts I had posted on YouTube (which, until then, I was certain nobody was watching).
  2. The good news is that video course royalties are three times those of a book! The bad news is that it is also three times the effort :-)
  3. The publishing field has also taken the Agile approach.

A course has several sections, and sections have chapters. The script for every section is reviewed just like a book's, and then the visuals go through another stringent review process. After several iterations, we get permission to go ahead and record the video, which is then handed over to the publisher for final editing. The section gets published, after which we repeat these steps for the next section. (Meeting deadlines was tough; having signed the contract, most of the work had to be done in between my travels to 5 cities in Brazil, followed by Europe and Israel... needless to say, most of the work in those 9 months was done in airports and on planes.)


I wish I could pen down the names of all those amazing people who've made this possible but since they already know who they are, I would just say thank you for having helped me on this arduous journey.

Feel free to spread the word about this course. I have a discount code for friends and followers on social media.

#LearningOpenStack


Discount Code: LOpStk25


Expiry Date: 15th October 2016
 

Creating an SSL certificate and adding it to Apache

Just a brain dump; will format it later.

https://letsencrypt.org/getting-started/

From a checkout of the Let's Encrypt client, request certificates for the domains:

~/letsencrypt$ sudo ./letsencrypt-auto certonly -d venumurthy.com -d www.venumurthy.com
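With no authenticator flags, letsencrypt-auto prompts interactively; if it uses the standalone authenticator it needs port 80 free, which means stopping Apache first. An alternative sketch using the webroot plugin (the -w path is an assumption; point it at your site's actual DocumentRoot):

# The webroot authenticator proves domain ownership by writing a challenge
# file under the running site's DocumentRoot, so Apache can stay up.
sudo ./letsencrypt-auto certonly --webroot -w /var/www/venumurthy -d venumurthy.com -d www.venumurthy.com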

On success, the client reports:

Congratulations! Your certificate and chain have been saved at
/etc/letsencrypt/live/obhiyo.com/fullchain.pem

The live directory for a domain contains:

cert.pem  chain.pem  fullchain.pem  privkey.pem


Edit the site's virtual host to add the SSL directives:

vim /etc/apache2/sites-enabled/000-default.conf


<VirtualHost 54.169.00.52:443>

        ServerName www.venumurtyy.com

        ServerAdmin contact@venumurty.com
        DocumentRoot /var/www/venumurthy
        SSLEngine on
        SSLCertificateFile /etc/letsencrypt/live/vm.com/cert.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/vm.com/privkey.pem
        SSLCertificateChainFile /etc/letsencrypt/live/vm.com/fullchain.pem



        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined

</VirtualHost>


Alternatively, the stock SSL site config can point at the same certificates:

vim /etc/apache2/sites-available/default-ssl.conf

                SSLEngine on

                #   A self-signed (snakeoil) certificate can be created by installing
                #   the ssl-cert package. See
                #   /usr/share/doc/apache2/README.Debian.gz for more info.
                #   If both key and certificate are stored in the same file, only the
                #   SSLCertificateFile directive is needed.
                SSLCertificateFile     /etc/letsencrypt/live/venumurthy.com/cert.pem
                SSLCertificateKeyFile  /etc/letsencrypt/live/venumurthy.com/privkey.pem

                #   Server Certificate Chain:
                #   Point SSLCertificateChainFile at a file containing the
                #   concatenation of PEM encoded CA certificates which form the
                #   certificate chain for the server certificate. Alternatively
                #   the referenced file can be the same as SSLCertificateFile
                #   when the CA certificates are directly appended to the server
                #   certificate for convenience.
                SSLCertificateChainFile /etc/letsencrypt/live/venumurthy.com/chain.pem

                #   Certificate Authority (CA):
                #   SSLCACertificatePath/SSLCACertificateFile are only needed for
                #   client-certificate authentication, so they can stay commented
                #   out here.
                #SSLCACertificatePath /etc/ssl/certs/
                #SSLCACertificateFile /etc/ssl/certs/ca-certificates.crt



sudo a2enmod ssl
sudo a2ensite default-ssl.conf

sudo service apache2 reload
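Let's Encrypt certificates are valid for only 90 days, so renewal is worth wiring up early. A sketch, assuming a client version that supports the renew subcommand:

# Renew any certificate close to expiry, then reload Apache so it
# picks up the new files under /etc/letsencrypt/live/
cd ~/letsencrypt
sudo ./letsencrypt-auto renew
sudo service apache2 reload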





Choosing Apache mod_wsgi over Eventlet in OpenStack Kilo and Liberty

While installing the OpenStack Liberty release, you disable the keystone service from starting up automatically, and the install guide carries a note such as:

"In Kilo and Liberty releases, the keystone project deprecates eventlet in favor of a separate web server with WSGI extensions. This guide uses the Apache HTTP server with mod_wsgi to serve Identity service requests on port 5000 and 35357. By default, the keystone service still listens on ports 5000 and 35357. Therefore, this guide disables the keystone service. The keystone project plans to remove eventlet support in Mitaka."

 
The reasons behind this are:

Eventlet, by design, performs well in networked environments and handles everything in a single thread. Apache, with its ability to do multi-processing and multi-threading, makes a better frontend.

Keystone depends on Apache/web-server modules to handle federated identity (validation of SAML assertions, etc.) and similar single sign-on style authentication.

Eventlet has proven problematic for workloads within Keystone; notably, a number of actions cannot yield (either due to limitations in eventlet, or because a dependent library uses C bindings that eventlet cannot cooperate with).

And Apache has many readily available modules that can be put to use; a sketch of the resulting configuration is below.
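For reference, the Liberty install guide has you place a configuration like the following in /etc/apache2/sites-available/wsgi-keystone.conf. This is abridged from memory, so treat the exact paths and process counts as assumptions and defer to the guide:

# Apache listens on the two Identity ports instead of the eventlet server
Listen 5000
Listen 35357

<VirtualHost *:5000>
    # mod_wsgi runs keystone in dedicated daemon processes
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLog /var/log/apache2/keystone.log
    CustomLog /var/log/apache2/keystone_access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    # Same idea for the admin endpoint, backed by keystone-wsgi-admin
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLog /var/log/apache2/keystone.log
    CustomLog /var/log/apache2/keystone_access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

Enable it with a2ensite wsgi-keystone.conf and restart Apache, and keystone is served by mod_wsgi instead of eventlet.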

Vagrant on steroids


This is our life when we are working on automating some really complicated machine building and provisioning procedures, i.e., while developing playbooks in Ansible, cookbooks in Chef, or manifests in Puppet.

It is not easy to fail fast and fix early, as the script might have to download all the dependencies again and again. Adding to the odds, those dependencies might be coming down over a low-bandwidth connection.

Even though Vagrant makes bringing up VMs and managing them faster, the provisioning (using Ansible, Chef, Puppet, etc.) can take inordinately long when it involves downloading packages onto the VMs, and it usually does. It gets painful when you have to download a full stack of several libraries just to test your VM.

To help overcome this, we can cache the dependencies so that the configuration scripts or recipes can be tested faster. The following should set you up (assuming you already have Vagrant installed):

1. Install the vagrant-cachier plugin


vagrant plugin install vagrant-cachier

2. Install the vagrant-proxyconf plugin

A Vagrant plugin that configures the virtual machine to use the specified proxies.

vagrant plugin install vagrant-proxyconf

3. Install a proxy server


I am on a Mac and hence using SquidMan. To install it, use the one-line command below (Homebrew should not be run with sudo):

brew cask install squidman



Configuring to use them


1. Configure vagrant-cachier


Edit the Vagrantfile to include these lines:

 if Vagrant.has_plugin?("vagrant-cachier")
    config.cache.scope              = :box
    config.cache.synced_folder_opts = {
        type:          :nfs,
        mount_options: ['rw', 'vers=3', 'tcp', 'nolock']
    }
  end 

2. Configure vagrant-proxyconf

Change the IP appropriately (it should be the host machine's address, where the proxy listens):

if Vagrant.has_plugin?("vagrant-proxyconf")
  # Start the SquidMan proxy on the Mac OS X host on port 8081
  # config.proxy.enabled = false
  config.proxy.http     = "http://10.211.55.2:8081"
  config.proxy.https    = "http://10.211.55.2:8081"
  config.proxy.no_proxy = "localhost,127.0.0.1"
end

3. Launch SquidMan and configure it

As in the screenshots below: in the General tab, set the port (8081 here) and change the IP appropriately.

In the Clients tab, add your subnet's range (something like 10.211.55.0/24, matching the network your VMs are on), and then start SquidMan.
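A quick smoke test from the host confirms the proxy answers before involving Vagrant at all (IP and port as configured above):

# -I fetches only the response headers; -x routes the request through the
# proxy. Any HTTP response coming back means Squid is up and forwarding.
curl -I -x http://10.211.55.2:8081 http://archive.ubuntu.com/ubuntu/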


Confirming that it is all working

1. If you are on Ubuntu, for example, you will start seeing a lot of packages cached in the following folder:
~/.vagrant.d/cache/parallels/ubuntu-14.04/apt/

2. Run vagrant up and check Squid's access logs (open SquidMan, press Command+T, and click on "Access log").

3. After these configurations are in place, provisioning gets faster from the second run onwards; on the first run, the dependencies get cached.

And for dependencies like the JDK (which don't get cached, since they come down over HTTPS), you might want to create an image by snapshotting a VM that already has the OS and these dependencies installed. While developing your recipes, this lets you skip their installation every time you run the playbooks or cookbooks; a sketch is below.
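One way to bake such a base box (a sketch, assuming the VirtualBox provider and an Ubuntu guest; the box and file names here are placeholders, and the Parallels provider needs the vagrant-parallels plugin's own packaging support):

# 1. Bring the VM up and install the slow, uncacheable dependency once
vagrant up
vagrant ssh -c 'sudo apt-get install -y openjdk-7-jdk'

# 2. Package the VM into a reusable base box and register it locally
vagrant package --output ubuntu-with-jdk.box
vagrant box add ubuntu-with-jdk ubuntu-with-jdk.box

Point config.vm.box at ubuntu-with-jdk in the Vagrantfile you use for recipe development, and the JDK installation drops out of every run.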


Hope this helps. (I am finally going to hit the Publish button :) after so many days of slacking on this. Do let me know if you need clarity on anything.)


Ansible - Error - stderr: E: There are problems and -y was used without --force-yes

In case your task is to install some packages and it errors out as below:

  

- name: Install linux-headers
  apt: pkg={{item}} 
       state=installed 
       install_recommends=yes 
       update_cache=yes
  with_items: 
      - linux-headers-generic
      - dkms
  sudo: yes




failed: [parallelsUbuntu] => (item=linux-headers-generic,dkms) => {"failed": true, "item": "linux-headers-generic,dkms"}
stderr: E: There are problems and -y was used without --force-yes

stdout: Reading package lists...
Building dependency tree...
Reading state information...
The following extra packages will be installed:
  cpp fakeroot gcc libfakeroot linux-headers-3.13.0-63
  linux-headers-3.13.0-63-generic patch
Suggested packages:
  cpp-doc dpkg-dev debhelper gcc-multilib manpages-dev autoconf automake1.9
  libtool flex bison gdb gcc-doc diffutils-doc
The following NEW packages will be installed:
  cpp dkms fakeroot gcc libfakeroot linux-headers-3.13.0-63
  linux-headers-3.13.0-63-generic linux-headers-generic patch
0 upgraded, 9 newly installed, 0 to remove and 17 not upgraded.
Need to get 0 B/9846 kB of archives.
After this operation, 78.0 MB of additional disk space will be used.
WARNING: The following packages cannot be authenticated!
  cpp gcc patch dkms libfakeroot fakeroot linux-headers-3.13.0-63
  linux-headers-3.13.0-63-generic linux-headers-generic

msg: '/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"   install 'linux-headers-generic' 'dkms'' failed: E: There are problems and -y was used without --force-yes


FATAL: all hosts have already failed -- aborting

Solution

The root cause is visible in the output above: "WARNING: The following packages cannot be authenticated!" The quick solution is to just add the option force=yes:

  
apt: pkg={{item}} 
       state=installed 
       install_recommends=yes 
       update_cache=yes
       force=yes


which is equivalent to what we would have done manually on the terminal:

  
sudo apt-get install some-deb -y --force-yes
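Note that force=yes (like --force-yes) tells apt to proceed despite the authentication warning. A safer fix, sketched here with a placeholder key ID (apt-get's output tells you which repository's key is missing), is to import the archive's signing key and drop the force option:

# Import the repository's signing key so packages authenticate normally;
# <KEY_ID> is a placeholder for the actual missing key ID.
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <KEY_ID>
sudo apt-get update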


Every time I use Ansible, my admiration for it only increases; such advanced concepts implemented in such simple ways!