For our web analytics tool Stetic we are always looking for new, flexible hosting solutions, and we recently found Google Compute Engine to be the perfect fit for our needs. On Google Compute Engine you can launch virtual machines on Google’s infrastructure with great performance, high flexibility, strong security and a good price/performance ratio. The biggest advantage is the ability to scale your systems whenever you need to. In our view, Google also offers the most user-friendly interface among the cloud computing competitors, along with a great set of tools to get you started easily.
With Puppet, our tool of choice for server management, you can easily define and create your cluster of Compute Engine instances and get the software installed and configured quickly. In this blog post we want to share what we learned while setting up a highly available LAMP stack with MongoDB on Google Compute Engine using Puppet Open Source.
Before we get started you need a Google account, a Compute Engine enabled project and some software installed. You need the Google Cloud SDK, authenticated with
gcloud auth login
and Puppet itself, which on Debian/Ubuntu can be installed with
apt-get install puppet
For other operating systems please refer to Installing Puppet: Pre-Install Tasks. With Puppet installed and the Cloud SDK authenticated we can now set up our local Puppet environment. First install the Google Compute Engine module from Puppet Labs by running
puppet module install puppetlabs-gce_compute
Then create a device.conf in your Puppet configuration path. To determine the location of this file, just run
puppet apply --configprint deviceconfig
This should give you a path like /etc/puppet/device.conf or ~/.puppet/device.conf.
# /etc/puppet/device.conf
[my_project]
  type gce
  url [/dev/null]:project_id
In the section header my_project you can choose any name associated with your project. This is the name of the certificate and will be used later to select the right project when connecting to GCE. In the url element, just change project_id to your chosen project id.
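For example, with a made-up project id of my-gce-project, the finished file would look like this:
# /etc/puppet/device.conf
[my_project]
  type gce
  url [/dev/null]:my-gce-project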
Now it’s time to create an instance for the Puppet master. The Puppet master will be the management server for all our instances running Puppet agents and should therefore be reachable from those instances over the network. Every Puppet agent polls its configuration (the catalog) from the master and applies it if compilation succeeds. We choose "f1-micro" as the machine type, with 1 shared CPU, 0.6 GB RAM and the default 10 GB boot disk. This should be enough for a Puppet master serving up to 20 agents. If you plan to run more agents, or any other service on this machine that needs some capacity, you should choose a larger machine type.
Create your first Puppet manifest file puppetmaster_up.pp
with the following contents:
# The Puppet master
gce_instance { 'puppet-master':
  ensure                => present,
  description           => 'Puppet Master Open Source',
  machine_type          => 'f1-micro',
  zone                  => 'europe-west1-b',
  network               => 'default',
  auto_delete_boot_disk => false,
  tags                  => ['puppet', 'master'],
  image                 => 'projects/debian-cloud/global/images/backports-debian-7-wheezy-v20140814',
  manifest              => "include gce_compute_master",
  startupscript         => 'puppet-community.sh',
  puppet_service        => present,
  puppet_master         => "puppet-master",
  modules               => ['puppetlabs-inifile', 'puppetlabs-stdlib', 'puppetlabs-apt', 'puppetlabs-concat', 'saz-locales'],
  module_repos          => {
    'gce_compute'        => 'git://github.com/stetic/puppetlabs-gce_compute',
    'gce_compute_master' => 'git://github.com/stetic/puppet-gce_compute_master',
    'mongodb'            => 'git://github.com/puppetlabs/puppetlabs-mongodb'
  }
}
Most of the code should be self-explanatory: we create a new Google Compute Engine instance called "puppet-master", ensure that it’s present, set the machine type, the zone where the instance should be started (more info: Regions & Zones), the default network, some tags to identify the machine later and the image with Debian 7 Backports, and add some Puppet-specific configuration and modules. For more information about all available options refer to the puppetlabs/gce_compute Usage section.
In the module_repos option we add a minimally modified version of the gce_compute module from GitHub, which adds support for disk types (so SSD disks can be used), multiple disks and an auto_delete_boot_disk option. The second repo, gce_compute_master, is needed to create a Puppet master from the open source version of Puppet, which the gce_compute module does not provide.
Now we can create and start our first GCE instance with the following command. Use the certificate name you defined earlier in place of my_project:
puppet apply --certname my_project puppetmaster_up.pp
If everything is running fine, you should see a message similar to this:
Notice: Compiled catalog for my_project in environment production in 0.17 seconds
Notice: /Stage[main]/Main/Gce_instance[puppet-master]/ensure: created
Notice: Finished catalog run in 21.18 seconds
You can now ssh into your newly created instance by typing
gcutil ssh puppet-master
Once connected to puppet-master, you should check that both a puppet agent and a puppet master process are running. All Puppet logging goes to /var/log/syslog.
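A quick check with standard tools might look like this:
ps aux | grep puppet     # both a master and an agent process should appear
tail -f /var/log/syslog  # follow the Puppet log output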
In order to have access to the GCE API, the Puppet master has to be authorized against Google Cloud. It’s the same procedure as on your local machine, except for the sudo:
sudo gcloud auth login
Now create a Puppet manifest on the master to define the instances. The following manifest will create two web nodes, three MongoDB nodes, a firewall rule to allow incoming connections on port 80 and load balancing for the two web servers in the europe-west1 region:
# /etc/puppet/manifests/cluster_up.pp
#
# LAMP cluster on GCE with Apache and MongoDB replica set
#

#
# Disks
#
gce_disk { 'disk-mongodb1':
  ensure              => present,
  description         => 'mongodb1:/var/lib/mongodb',
  size_gb             => '100',
  zone                => 'europe-west1-b',
  disk_type           => 'pd-ssd',
  wait_until_complete => true
}

gce_disk { 'disk-mongodb2':
  ensure              => present,
  description         => 'mongodb2:/var/lib/mongodb',
  size_gb             => '100',
  zone                => 'europe-west1-a',
  disk_type           => 'pd-ssd',
  wait_until_complete => true
}

gce_disk { 'disk-mongodb3':
  ensure              => present,
  description         => 'mongodb3:/var/lib/mongodb',
  size_gb             => '100',
  zone                => 'europe-west1-a',
  disk_type           => 'pd-ssd',
  wait_until_complete => true
}

#
# Instances
#
gce_instance { 'web1':
  ensure                => present,
  machine_type          => 'g1-small',
  zone                  => 'europe-west1-b',
  network               => 'default',
  boot_disk_type        => 'pd-ssd',
  auto_delete_boot_disk => false,
  tags                  => ['apache', 'web'],
  image                 => 'projects/debian-cloud/global/images/backports-debian-7-wheezy-v20140814',
  manifest              => '',
  startupscript         => 'puppet-community.sh',
  puppet_service        => present,
  puppet_master         => $fqdn
}

gce_instance { 'web2':
  ensure                => present,
  machine_type          => 'g1-small',
  zone                  => 'europe-west1-a',
  network               => 'default',
  boot_disk_type        => 'pd-ssd',
  auto_delete_boot_disk => false,
  tags                  => ['apache', 'web'],
  image                 => 'projects/debian-cloud/global/images/backports-debian-7-wheezy-v20140814',
  manifest              => '',
  startupscript         => 'puppet-community.sh',
  puppet_service        => present,
  puppet_master         => $fqdn
}

gce_instance { 'mongodb1':
  ensure                => present,
  machine_type          => 'n1-highmem-2',
  zone                  => 'europe-west1-b',
  network               => 'default',
  require               => Gce_disk['disk-mongodb1'],
  disk                  => 'disk-mongodb1,deviceName=mongodb',
  boot_disk_type        => 'pd-ssd',
  auto_delete_boot_disk => false,
  tags                  => ['mongodb', 'database', 'primary'],
  image                 => 'projects/debian-cloud/global/images/backports-debian-7-wheezy-v20140814',
  manifest              => 'exec { "mkdir-var-lib-mongodb": command => "/usr/bin/sudo /bin/mkdir -p /var/lib/mongodb" } exec { "safe_format_and_mount": command => "/usr/bin/sudo /usr/share/google/safe_format_and_mount -m \"mkfs.ext4 -F\" /dev/disk/by-id/google-mongodb /var/lib/mongodb", require => Exec["mkdir-var-lib-mongodb"] }',
  startupscript         => 'puppet-community.sh',
  puppet_service        => present,
  puppet_master         => $fqdn
}

gce_instance { 'mongodb2':
  ensure                => present,
  machine_type          => 'n1-highmem-2',
  zone                  => 'europe-west1-a',
  network               => 'default',
  require               => Gce_disk['disk-mongodb2'],
  disk                  => 'disk-mongodb2,deviceName=mongodb',
  boot_disk_type        => 'pd-ssd',
  auto_delete_boot_disk => false,
  tags                  => ['mongodb', 'database', 'member'],
  image                 => 'projects/debian-cloud/global/images/backports-debian-7-wheezy-v20140814',
  manifest              => 'exec { "mkdir-var-lib-mongodb": command => "/usr/bin/sudo /bin/mkdir -p /var/lib/mongodb" } exec { "safe_format_and_mount": command => "/usr/bin/sudo /usr/share/google/safe_format_and_mount -m \"mkfs.ext4 -F\" /dev/disk/by-id/google-mongodb /var/lib/mongodb", require => Exec["mkdir-var-lib-mongodb"] }',
  startupscript         => 'puppet-community.sh',
  puppet_service        => present,
  puppet_master         => $fqdn
}

gce_instance { 'mongodb3':
  ensure                => present,
  machine_type          => 'n1-highmem-2',
  zone                  => 'europe-west1-a',
  network               => 'default',
  require               => Gce_disk['disk-mongodb3'],
  disk                  => 'disk-mongodb3,deviceName=mongodb',
  boot_disk_type        => 'pd-ssd',
  auto_delete_boot_disk => false,
  tags                  => ['mongodb', 'database', 'member'],
  image                 => 'projects/debian-cloud/global/images/backports-debian-7-wheezy-v20140814',
  manifest              => 'exec { "mkdir-var-lib-mongodb": command => "/usr/bin/sudo /bin/mkdir -p /var/lib/mongodb" } exec { "safe_format_and_mount": command => "/usr/bin/sudo /usr/share/google/safe_format_and_mount -m \"mkfs.ext4 -F\" /dev/disk/by-id/google-mongodb /var/lib/mongodb", require => Exec["mkdir-var-lib-mongodb"] }',
  startupscript         => 'puppet-community.sh',
  puppet_service        => present,
  puppet_master         => $fqdn
}

#
# Firewall
#
gce_firewall { 'allow-http':
  ensure      => present,
  network     => 'default',
  description => 'allows incoming HTTP connections',
  allowed     => 'tcp:80',
}

#
# Load balancer
#
gce_httphealthcheck { 'basic-http':
  ensure      => present,
  require     => Gce_instance['web1', 'web2'],
  description => 'basic http health check',
}

gce_targetpool { 'web-pool':
  ensure        => present,
  require       => Gce_httphealthcheck['basic-http'],
  health_checks => 'basic-http',
  instances     => 'europe-west1-b/web1,europe-west1-a/web2',
  region        => 'europe-west1',
}

gce_forwardingrule { 'web-rule':
  ensure      => present,
  require     => Gce_targetpool['web-pool'],
  description => 'Forward HTTP to web instances',
  port_range  => '80',
  region      => 'europe-west1',
  target      => 'web-pool',
}
The instances have to be configured with the needed software and services. For this purpose create the following manifests:
We start with a simple Apache installation for the web nodes: PHP with the MongoDB extension enabled, serving a simple PHP script that connects to our new MongoDB replica set and performs some basic actions.
# /etc/puppet/manifests/web.pp
#
# Web node
#
class web {

  package { "apache2-mpm-prefork":
    ensure => latest
  }

  package { "libapache2-mod-php5":
    ensure => latest,
    notify => Service["apache2"]
  }

  service { "apache2":
    ensure     => "running",
    enable     => true,
    hasrestart => true,
    require    => Package["apache2-mpm-prefork", "libapache2-mod-php5"]
  }

  package { "php5-mongo":
    ensure => latest
  }

  # The MongoClient connection string assumes the replica set
  # name 'replica' and the hosts defined in database.pp.
  file { "/var/www/index.php":
    ensure  => file,
    path    => '/var/www/index.php',
    owner   => "www-data",
    group   => "www-data",
    mode    => "0644",
    require => Package["apache2-mpm-prefork"],
    content => "<?php
\$m = new MongoClient( 'mongodb://mongodb1:27017,mongodb2:27017,mongodb3:27017', array( 'replicaSet' => 'replica' ) );
\$m->setReadPreference( MongoClient::RP_PRIMARY_PREFERRED );
\$c = \$m->foo->bar;
\$c->insert( array( 'msg' => sprintf( 'Hello from %s at %s.', '${hostname}', date('Y-m-d H:i:s') ) ) );
echo 'Hi, this is ${hostname} on a load balanced apache and connected to a MongoDB replica set';
echo '<pre>';
\$cursor = \$c->find();
foreach (\$cursor as \$doc) { var_dump(\$doc); }
echo '</pre>';
"
  }

  file { "/var/www/index.html":
    ensure  => absent,
    owner   => "root",
    group   => "root",
    require => [ Package["apache2-mpm-prefork"], Service["apache2"] ]
  }
}
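After the first agent run on a web node you can do a quick spot check; these are standard Debian commands, nothing specific to this setup:
gcutil ssh web1
dpkg -l libapache2-mod-php5 php5-mongo   # the packages from web.pp should be installed
service apache2 status                   # Apache should be running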
The manifest for our database nodes adds some system tuning for the MongoDB service as suggested by the MongoDB team, ensures the data disk is correctly mounted and defines the installation and configuration of the MongoDB server with a replica set.
# /etc/puppet/manifests/database.pp
#
# MongoDB node
#
class database (
  $replicaset      = 'replica',
  $replicaset_type = 'member',
) {

  include stdlib

  #
  # System settings
  # Adapted from http://docs.mongodb.org/ecosystem/platforms/google-compute-engine/
  #

  # /etc/security/limits.conf
  file_line { 'limits.conf-soft-nofile':
    ensure => present,
    line   => 'mongod soft nofile 64000',
    path   => '/etc/security/limits.conf',
  }
  file_line { 'limits.conf-hard-nofile':
    ensure => present,
    line   => 'mongod hard nofile 64000',
    path   => '/etc/security/limits.conf',
  }
  file_line { 'limits.conf-soft-nproc':
    ensure => present,
    line   => 'mongod soft nproc 32000',
    path   => '/etc/security/limits.conf',
  }
  file_line { 'limits.conf-hard-nproc':
    ensure => present,
    line   => 'mongod hard nproc 32000',
    path   => '/etc/security/limits.conf',
  }

  # /etc/security/limits.d/90-nproc.conf
  file { '/etc/security/limits.d/90-nproc.conf':
    ensure => present
  } ->
  file_line { '90-nproc.conf-soft-noproc':
    line => 'mongod soft nproc 32000',
    path => '/etc/security/limits.d/90-nproc.conf',
  } ->
  file_line { '90-nproc.conf-hard-noproc':
    line => 'mongod hard nproc 32000',
    path => '/etc/security/limits.d/90-nproc.conf',
  }

  # /etc/sysctl.conf
  file_line { 'sysctl.conf-tcp_keepalive_time':
    line => 'net.ipv4.tcp_keepalive_time = 300',
    path => '/etc/sysctl.conf',
  }

  # /etc/udev/rules.d/85-mongod.rules
  exec { 'blockdev-setra':
    onlyif  => "/usr/bin/test ! -f /etc/udev/rules.d/85-mongod.rules",
    command => '/sbin/blockdev --setra 32 /dev/disk/by-id/google-mongodb',
    require => Mount["/var/lib/mongodb"]
  } ->
  file { '/etc/udev/rules.d/85-mongod.rules':
    ensure => present
  } ->
  file_line { '85-mongod.rules':
    line => 'ACTION=="add", KERNEL=="disk/by-id/google-mongodb", ATTR{bdi/read_ahead_kb}="32"',
    path => '/etc/udev/rules.d/85-mongod.rules',
  }

  #
  # User, group and directories
  #
  group { "mongodb":
    ensure => present,
  } ->
  user { "mongodb":
    ensure  => present,
    gid     => "mongodb",
    require => Group["mongodb"]
  } ->
  # The mongodb module has a File resource /var/lib/mongodb - so we have to do a mkdir
  exec { "mkdir-var-lib-mongodb":
    command => "/bin/mkdir -p /var/lib/mongodb >/dev/null 2>&1",
    user    => "root",
    unless  => "/usr/bin/test -d /var/lib/mongodb",
  } ->
  mount { '/var/lib/mongodb':
    ensure  => mounted,
    atboot  => true,
    device  => '/dev/disk/by-id/google-mongodb',
    fstype  => 'ext4',
    options => 'defaults,auto,noatime,noexec'
  } ->
  exec { "/bin/chown -R mongodb /var/lib/mongodb":
    unless => "/bin/bash -c '[ $(/usr/bin/stat -c %U /var/lib/mongodb) == \"mongodb\" ]'",
  }
  exec { "/bin/chgrp -R mongodb /var/lib/mongodb":
    unless => "/bin/bash -c '[ $(/usr/bin/stat -c %G /var/lib/mongodb) == \"mongodb\" ]'",
  } ->
  class { '::mongodb::globals':
    manage_package_repo => true,
    bind_ip             => '0.0.0.0'
  } ->
  class { '::mongodb::server':
    ensure         => present,
    bind_ip        => '0.0.0.0',
    directoryperdb => true,
    replset        => $replicaset,
    require        => Mount["/var/lib/mongodb"]
  } ->
  class { '::mongodb::client': }

  if $replicaset_type == 'primary' {
    mongodb_replset { $replicaset:
      ensure  => present,
      members => ['mongodb1:27017', 'mongodb2:27017', 'mongodb3:27017'],
      require => [ Mount["/var/lib/mongodb"], Class['mongodb::server'] ]
    }
  }
}
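Once all three database nodes have finished their first Puppet run, you can check the replica set from the primary with the standard MongoDB shell; the host name matches the instance defined above:
gcutil ssh mongodb1
mongo --eval 'printjson(rs.status())'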
The site.pp is the main manifest with our node definitions.
# /etc/puppet/manifests/site.pp
#
# Site.pp
#
import "web.pp"
import "database.pp"

node /^web(1|2)$/ {
  include web
}

node 'mongodb1' {
  class { 'database':
    replicaset_type => 'primary',
  }
}

node /^mongodb(2|3)$/ {
  include database
}
Next apply the cluster_up.pp manifest with
sudo puppet apply --certname my_project /etc/puppet/manifests/cluster_up.pp
The instances should be created and provisioned with the defined software and services within a few minutes. Just type the IP address of the created load balancer into your browser to check the status. You can retrieve the IP address from your Google Developers Console or by using gcutil:
sudo gcutil getforwardingrule web-rule
You should see a simple webpage, served alternately from the instances web1 and web2, that inserts a simple log message into your newly created MongoDB replica set.
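To watch the load balancer at work you can also request the page repeatedly from the command line; replace IP_ADDRESS with the forwarding rule’s address from above:
for i in $(seq 1 5); do curl -s http://IP_ADDRESS/ | grep 'Hi, this is'; done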
Now that the highly available LAMP cluster with MongoDB is up and running, you can start with your own customizations on the Puppet master. Just edit site.pp, web.pp or database.pp on the master to fit your needs. Each agent polls the master every 30 minutes to retrieve and apply the current catalog.
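If you don’t want to wait for the next polling interval, you can trigger a run manually on any instance with Puppet’s standard test mode:
sudo puppet agent --test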
The sources of all manifests can also be found on GitHub.
Further Information:
Using Puppet to Automate Google Compute Engine
Supporting documentation for Using Puppet with Google Compute Engine
Automate Google Compute Engine with Puppet
Compute Engine Management with Puppet, Chef, Salt, and Ansible