Systems Thinking


Systems thinking is a relatively new and still evolving way of looking at a problem and attempting to gain understanding and knowledge about a system. A system is a “set of elements standing in interrelations”, as defined by Ludwig von Bertalanffy in his book General System Theory. My understanding of this is that a system is a mathematical set of physical parts together with a set of relationships between those parts. It should be noted that the parts themselves are often, but not necessarily, systems too. It is important to understand what a system is before trying to understand how a system operates and why.

As systems thinking is this week's topic in my subject “Systems Engineering for Software Engineers”, there are associated readings (and viewings) to push us to think about it. The first is a short lecture given by Dr Russell Ackoff, in which he briefly outlines his view of systems thinking and what it actually is.

He defines systems thinking as synthetic. By nature we think analytically: pulling a system apart to analyze it as a collection of components, then combining our understanding of each individual component in an attempt to understand the system as a whole. He posits that this gives us no understanding, only knowledge (which is still very useful).

Synthetic thinking is in direct opposition to analytical thinking. It requires that you view the system as a component in a larger containing system, communicating with other components. You first determine the behavior and functionality of the containing system as a whole, and then work back down, identifying the function your system performs as a component within it. Dr Ackoff stresses that the function of the system matters more than internal knowledge of how the system works.

This seems counterintuitive to me. A system is the product of its parts and of its internal interactions. Considering this, analytical thinking is often flawed because it ignores the interactions. Chemistry education actually relies on this flaw to help students understand the quantized energy of electrons in an atom. By ignoring the interactions of the electrons with each other (by considering only the base case, hydrogen) we can grasp how energy influences the properties of an atom, and we can do so without much difficulty. To overcome the challenge of accounting for electron-electron interactions (which often requires complex computational mathematics), chemistry appears to have developed an approach much like systems thinking with regard to the wider context of a chemical within a system.

Chemistry is defined as “the science of substances: their structure, their properties, and the reactions that change them into other substances” in Linus Pauling's textbook General Chemistry. This definition embodies the systems thinking process. Chemistry takes a molecule and looks internally at its structure (analytically), at its properties as it interacts within a neutral context, and at the reactions it undergoes within the system it came from (systemically). A chemist then attempts to match the properties and reactions observed in its natural context to the molecule's internal components and their interactions (atoms, electrons, protons, etc.).

Chemistry is thus an example of where analytically assessing a system (a chemical or a molecule) gives only half the picture; quite often we need the context and function of the system as a whole even to comprehend its internal interactions. As systems become more complex, not only does analytic thinking fail to provide real understanding, but without a systemic thought process we may misunderstand how the internal interactions should or do behave.

This brings us to the question at the core of the subject: why should software engineers think about systems engineering and systems thinking? Without a greater understanding of the larger system, a software component may not function appropriately within it, or, through a misinterpretation of the system, may provide a flawed or incorrect implementation of what is required. It is only by considering the larger system as a whole that we can effectively develop correct and appropriate software.

Powernap – The Server Power Management


I recently moved into a new home and one of the first things that I HAD to do was set up the home media server with an XBMC front end. So I set it up, and I have been happily watching movies and television for a month now. All was going well until I got my first electricity bill. Wow, was it painful. Of course it included the usage of a fridge and other appliances, but the media server had not been turned off for the whole month, even though I only used it for maybe two hours a day.

To this end I set out to find a way to reduce power consumption. The first thing I thought of was a simple shutdown script for the XBMC front end that would turn off the server before turning itself off. This worked well, but there were problems: sometimes the front end sat idle for long periods without being turned off, sometimes I still wanted to access the server after the front end had shut it down, and I always had to turn the server back on manually. Clearly this was not a long-term solution.

The next idea was to get the computer to turn on via Wake On LAN. Wake On LAN (WoL) is a method of sending a special packet to a computer's motherboard via an Ethernet connection. It is easy to set up but harder to actually trigger. Searching around online I found a program called powerwake, written by Dustin Kirkland, which does exactly that through a very simple interface. On Ubuntu it can be installed and used to wake a computer like so:

sudo apt-get install powerwake

sudo powerwake 10.0.0.12

*Replace 10.0.0.12 with the address of the computer you want to wake.

This can be used to wake a sleeping computer. I use a simple script built into the front end which, when run, wakes the server, waits about 30 seconds and then tries to remount the shares listed in fstab. If the remount fails it waits another 30 seconds and tries again. When the remount succeeds the computer reloads XBMC – easy.
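For anyone who wants to reproduce it, a minimal sketch of that front-end script might look something like the following. The server address is a placeholder, and it assumes the shares are already listed in fstab and that the script has enough privileges to mount them:

#!/bin/bash
# wake the media server, then keep retrying the fstab mounts until they succeed
powerwake 10.0.0.12
sleep 30
until mount -a; do
    sleep 30        # server not up yet, give it a little longer
done
xbmc &              # relaunch the front end once the shares are available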

This leaves only the problem of getting the server to turn off after inactivity. For this I installed the powernap program. This is a great application, again from Dustin Kirkland, which acts much like a screen saver for a server or other non-GUI system. It was quite complex to set up, but once you understand how it works it becomes much simpler.

Unfortunately time has run out and this post must end here. Stay tuned for an update on how I installed powernap and a closer look at how the program works.

As power becomes an increasingly costly and limited resource (at least until we switch to renewable energy sources), tools such as powernap and powerwake will become more and more necessary for hobbyists and professionals alike.

Start using Fuel PHP today


The last few days I have been busy looking at the Fuel PHP framework. And it's AWESOME. That's not a great description, it's not even very objective, but nonetheless it is true (you will have to take my word for it for now). The framework can be found at fuelphp.com.

Today I am going to give a quick guide to installing and running Fuel PHP on a Linux machine, along with some extra tricks to make the setup easier. First, what do you need, assuming a fresh Linux install?

  • Apache web server
  • PHP 5.3 (needs to be at least 5.3)
  • PHP CLI (command line interface)
  • MySQL database server
  • a command line editor such as vi or vim (not strictly necessary, but I will assume you have one, as I believe command line editing is the easiest option)
  • wget or curl (the Fuel documentation uses curl, but either could be used)

I will also assume that you have root access and that you are on a home computer using localhost for your domain. Production environments require more careful setup and I will address those issues in a completely different post.

Let's Go

1. Install all the necessary components

sudo apt-get update
sudo apt-get install php5 php5-cli apache2 mysql-server phpmyadmin curl vim git-core

2. Install the Fuel PHP oil installer (that sounds strange)

curl get.fuelphp.com/oil | sh

3. Change into the directory that you want to build your Fuel application in and run the Fuel PHP oil create script. Replace {user} with your username and {app name} with the name you want to give your application. It doesn't matter what the name is, as it won't be used in the website URI.

cd /home/{user}/Public/
oil create {app name}

4. Now, that's as far as the basic section of the documentation on fuelphp.com goes. However, you cannot access the app in the browser yet; before you can do that there are a few steps you must take. We have to set up the Apache virtual host. This varies greatly with your operating system, but for simplicity I will cover just Ubuntu and will cover other OSs later. We need to alter the virtual host so that it follows symbolic links and knows what the root directory of the website is. Because Fuel's index.php file lives in the public folder under the application's top directory, we point the website's document root at that public directory.

sudo vim /etc/apache2/sites-enabled/000-default

Now make the file look like this:

<VirtualHost *:80>
        ServerAdmin webmaster@localhost

        DocumentRoot /home/{user}/Public/{app name}/public
        <Directory />
                Options FollowSymLinks
                AllowOverride All
                Order allow,deny
                allow from all
        </Directory>
        <Directory /home/{user}/Public/{app name}/public>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
        </Directory>
</VirtualHost>

Restart Apache to apply the configuration changes:

sudo /etc/init.d/apache2 restart

5. If everything worked you should now be able to go to http://localhost/ and see the Fuel welcome page. It looks a little bit like the CodeIgniter welcome page (most of the Fuel developers have contributed, or still contribute, to CodeIgniter).
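If you want a little more proof than the welcome page, oil can also scaffold code for you. As a rough example (the controller and action names here are made up purely for illustration), from inside the application directory:

cd /home/{user}/Public/{app name}
php oil generate controller hello index

Visiting http://localhost/hello should then show the freshly generated view.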

Conclusion

Hopefully this guide is a relatively straightforward explanation of how to set up Fuel in a new development environment. Any questions or comments are appreciated – I'm sure I made a mistake somewhere in there, or perhaps it just didn't work on your computer. I would also love to get some suggestions on what you would like to see next.

Working For a Client – Building a Computer System


At one point or another, anyone with a rough to well-formed knowledge of computers has been asked to help out a family member or friend with a ‘silicon’-based issue. It is usually a case of having clicked on a suspect link or opened an attachment from a phishing email, and more often than not there is either a quick fix or no fix (other than a clean OS install). Sometimes though, very occasionally, you might get asked to set up an entire home computer system.

This is what I intend to be a series of posts, not so much on the nitty-gritty of setting up individual components, but on creating an entire workable system. At the conclusion of the series I will be releasing the backup program written for the system, in a form that everyone can use.

A point that should be made is that this is the first time I have done this at a non-personal level. The resources I have used range from manuals to forums, wikis and websites; they are by no means a complete body of knowledge, and should never be expected to be. The point is that what I have done is not necessarily the quickest or most efficient way of doing things (although it might be). Hopefully, if nothing else, it leads to a thorough discussion and a greater understanding of computer systems as a whole.

Project Outline

This project is centered around a small photography hobby as well as general data storage. There were a number of considerations when taking into account the client's needs.

  1. How much data storage will be needed in the short and long term?
  2. How often will files need to be moved around and accessed?
  3. What peripherals will be used now and in the future, and how difficult will they be to add at creation or later?
  4. What will the client's needs be in 1, 5 and 10 years' time?

Concern 1

The first concern is how much data storage is needed. This usually isn't the first question I would ask when discussing the needs of a client; however, I soon found that the physical requirements vary greatly with the size of the system. If only a small volume of storage is needed, then a centralized server may not be required at all. If a large system is needed for working with large files, then working over a network is preferable to adding a USB HDD to a pre-existing computer.

The client is a hobby photographer who often works with large photos (>20 MB) in large collections. They need fast access to the images as well as confidence that they cannot lose any data by accident. From the information I was able to gather, I calculated that a file space of approximately 4 TB (terabytes) would be sufficient.
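To give a sense of the arithmetic behind that figure, here is roughly the kind of back-of-the-envelope calculation involved – the numbers below are assumptions for illustration, not the client's real figures:

# rough estimate: ~200 photos a week at ~20 MB each
photos_per_week=200
size_mb=20
echo $(( photos_per_week * size_mb ))        # ~4000 MB, i.e. ~4 GB a week
echo $(( photos_per_week * size_mb * 52 ))   # ~208 GB a year

Double that for an on-array backup copy, allow for RAW originals and edited versions, and several years of shooting lands comfortably in the multi-terabyte range – hence 4 TB, with headroom to spare.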

Concern 2

The second concern, following directly from the first, is how often the files will need to be accessed and used. This is relatively straightforward: in this case the files need to be both stored on a local server and backed up, and they will be accessed many times a day. This helps determine how we arrange the HDD arrays and define the backup scripts.
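The backup scripts themselves come later in the series, but at their core, nightly jobs for this kind of layout are often little more than an rsync from the working array to the backup array (the paths here are invented purely for illustration):

# mirror the working photo store onto the backup array each night
rsync -a --delete /srv/photos/ /mnt/backup/photos/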

Concern 3

The third concern is what peripherals will be needed now and in the future, and how difficult they will be to set up. First and foremost are the cameras themselves, then any portable devices such as tablet PCs and smartphones. There are also guest computers for collaborative work and visitors, a potential home theater PC and, lastly, printers.

Most of these are relatively straightforward. Most just require the setup and regular rotation of a password, the cameras come with their own software, and the media computer can be built on a UPnP system. The only difficult setup would be the networked computers.

Concern 4

The fourth, final and most important concern is: what are the FUTURE needs? This is a difficult question, and one I was not able to fully answer, given how hard the technology industry is to predict. Who knows what level of technology will be accessible in 1-5 years' time (at the time of writing only ADSL internet at 1500 kbps is available, and affordability of wireless devices is very low).

Conclusion

This leaves us at the starting point of our build. We have a general overview of the client's needs and a rough outline of the types of services that will be integrated.

In the next part of the series, I will be going through the basis of the network topology and the hardware that was purchased prior to and during the build.

If there are any questions or comments please leave a note; I would love to start some discussion on this topic, as it seems to be requested more and more by the community.

Installing Ruby RVM on Ubuntu and Fedora


Installing Ruby on Linux is easy: just run sudo apt-get install ruby or yum install ruby. However, managing versions can be difficult. That is, scripts that run on the version on your computer may not run as well on another. So wouldn't it be great if you could have two or more versions of the Ruby interpreter on your computer?

To do this we will use RVM – the Ruby Version Manager. This is a great program that allows you to have complete, separate environments for running Ruby, including the interpreter and gems. Aside from a few intricacies of setting it up, it is relatively easy to do.

My instructions are fairly similar to the official RVM instructions. You can find them at rvm.beginrescueend.com, along with more about RVM on the index page there.

OK. Let's go.

Firstly, make sure that you have a version of Ruby installed, plus curl for downloading the script (most distributions come with these as standard, although minimal setups will not).

UBUNTU
sudo apt-get install ruby curl

FEDORA
yum install ruby curl

Now make sure that you are logged in as the user you want to install Ruby for. This may be a no-brainer; however, if you have more than one user you may wish to install Ruby for everyone or for a specific person. I would very much suggest NOT installing RVM system-wide, because different users may choose to run different Ruby interpreters, which may cause problems later on down the track.

Time to get dirty with the command line.

Open up a terminal session and cd to your home directory. The first thing to do is download the install script and run it in bash:

bash < <(curl -s https://rvm.beginrescueend.com/install/rvm)

Once the script has run, issue the following command so that RVM is loaded every time you log in:

echo '[[ -s "$HOME/.rvm/scripts/rvm" ]] && . "$HOME/.rvm/scripts/rvm" # Load RVM function' >> ~/.bash_profile

Now close all your open terminal windows and launch a new one. When it is open, execute the following:

source .bash_profile

Now, to test that we have done all of the above correctly, run the following command and check its output:

type rvm | head -n 1

This should output 'rvm is a function'.

That's it – Ruby Version Manager has been installed. Unfortunately, just having it installed is not much good on its own; we still need to look at gemsets and Ruby versions, but that is for another post.
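As a quick taste of what that post will cover, the day-to-day RVM workflow looks roughly like this (the version number is just an example):

rvm install 1.9.2          # build and install a specific interpreter
rvm use 1.9.2 --default    # make it the default for new shells
rvm gemset create blog     # create an isolated set of gems
rvm use 1.9.2@blog         # switch to that interpreter and gemset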

Quick Tip – Connecting to Another Machine (Linux to Linux)


In today's quick tip I will discuss methods of connecting to another computer. There are many different ways of doing this, even more if you want to mix and match between Windows, Mac OS X and Linux. Today I will just be looking at a Linux to Linux connection.

First off, today I am going to connect to a system that I built for my parents (details of the build to come) to grab a few photographs that my dad took on his latest holiday. They have a central Linux file server reachable through port forwarding on port 400.

The first way to connect is via a simple ssh shell command:

ssh dad@X.X.X.X -p 400

Here dad is my dad's username, X.X.X.X is his IPv4 address, and -p 400 says that I want to connect on port 400, which tells the router at my dad's house that I actually want to talk to the file server. What I have now is a shell on the file server and access to everything on that machine, with the input and output sent to my machine.

So what can I do with this? Funnily enough, this simple command is powerful if you want to use the remote computer. You can run an update, run any command line program (cat /etc/passwd :P) or execute shell scripts to achieve tasks like backups or batch file manipulation.
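For one-off tasks like those you don't even need to keep an interactive session open – ssh will happily run a single command and return (the commands here are just examples):

ssh -p 400 dad@X.X.X.X 'df -h /home'
ssh -p 400 dad@X.X.X.X 'ls /home/dad/photos'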

To copy files between computers, though, we are going to need another command. At this point we have three different options: two command line choices and one GUI (the GUI is only needed on your computer, not the remote one).

scp -P 400 dad@X.X.X.X:/home/dad/image.jpg /home/me/image.jpg

or

sftp -oPort=400 dad@X.X.X.X
get /home/dad/image.jpg

or

Places -> connect to server -> Fill out needed information -> connect
enter password -> navigate through window to file -> drag and drop to desired location

The first option is my favorite, and the simplest command if you know exactly what you want and where you want to put it. The first part logs on to the ssh server (note the capital -P for the port, unlike ssh's lowercase -p) and locates the file to copy, and the second part says where the file should be put. While it is simple, it has no margin for error – it either works or it doesn't – and it can do some squiggly things.
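And if the whole holiday folder is wanted rather than a single image, scp will copy directories recursively (again, the paths are only illustrative):

scp -r -P 400 dad@X.X.X.X:/home/dad/photos/holiday /home/me/photos/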

The second option is the best for browsing and finding files without a GUI. You first log in to the ssh server in FTP-like mode and then find and “get” (download) the file. I don't use this option often, but it's handy to know.

The third option is great if you want a graphical way of interacting with files on an external server. It is very simple once you have seen what to do. Hopefully I will be able to do a quick video showing how, which I will link at a later stage.

This outlines how you can connect to different Linux boxes on either your own subnet or over the internet. Stay tuned for more posts related to connecting computers, in particular Mac and Windows to Linux connections.

Using APT – The Complete How to Guide


Debian's APT (Advanced Packaging Tool) is an amazing program that gives a great level of control to the distributions that use it. It enables users to manage the software that runs on their computer, making it exactly the way they like it, without any extra programs that never get used. This is unlike any non-*nix operating system, giving a clear advantage, in my opinion, to its users.

Essentially, APT is a front end to dpkg, a base-level program for installing, removing and providing information about .deb packages. Going into depth on dpkg and .deb files is for another time, but briefly: a .deb package puts a program into a container that makes its installation much simpler for the end user, and dpkg was written to handle these packages.

APT is also able to work with the RPM package management system, via the apt-rpm port (explained later). This is a more recent development, which I believe was intended to bring APT's dependency handling to RPM-based distributions. RPM was created for the Red Hat Linux distribution as a simple package management system with solid dependency handling.

APT is available on all Debian-based distributions such as Ubuntu, Mint, Knoppix and a number of others, as well as on Solaris-based systems (no longer open). This means that, of all the users of Linux distributions, many have access to APT. Let's see the program in action. Open the terminal and enter the following:

sudo apt-get update

To run APT you must be the super user or provide super user privileges (hence the sudo). You will also notice the '-get' addition to the apt command. This is because APT has many sub-programs, such as apt-cache, apt-secure and apt-key. The update command fetches the latest package information from the sources list.

The sources list holds the locations of the Debian-packaged programs that can be installed via a direct download from apt-get. Finding the best sources for apt-get will be the subject of a quick tip; for now we will assume that your sources are the optimal ones. Now let's perform a second command related to sudo apt-get update: upgrade.

sudo apt-get upgrade

This command upgrades the currently installed programs in a number of ways. First it checks whether all installed packages are at their latest version. If there are new versions it does a dependency check and asks if you would like to upgrade, at which stage it will let you know of any other packages that need to be installed (hit enter here). It will then proceed to download and install until your installed programs are all at their newest versions.

How about installing programs? Simple! Just use the sudo apt-get install command. The trick is knowing exactly what package you need. For example, I am writing a program in C which works with a database. I need the MySQL header files, which are not installed by default. The trick with header files is to download the development package of the program you need, so I used the following install command:

sudo apt-get install libmysqlclient-dev

These three main commands can handle 90% of your apt needs. Below is a list I have compiled of other commands, and their functions, that you may come across in your use of the program.

sudo apt-get remove `program`

This removes the specified program, although its configuration files still live on the system (use purge to remove those as well).

sudo apt-get autoremove

This automatically removes packages that were installed as dependencies and are no longer needed by any other installed program.

sudo apt-get clean

This clears out the local cache of downloaded package files (the .deb archives), freeing up space.

sudo apt-get autoclean

Like clean, but it only removes cached package files that can no longer be downloaded (i.e. archives of old versions).

sudo apt-get source `program`

This downloads the source version of a package (if available), which you can then examine and build yourself, configured exactly the way you choose.

deb `link` `version` `component`

This is not an apt-get command but the format of a line in your sources list (/etc/apt/sources.list). Link is the source location; version is the codename of the release you are running – e.g. Ubuntu uses 'hardy' (or your release's equivalent) and Debian uses 'lenny' for v5 or 'squeeze' for v6. The component is a little more subtle: on Ubuntu it is usually main, universe or multiverse, with partner used for Canonical's partner repository.
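For example, a couple of typical lines on an Ubuntu 'hardy' system might look like this (using the standard archive and Canonical partner URLs):

deb http://archive.ubuntu.com/ubuntu/ hardy universe
deb http://archive.canonical.com/ubuntu hardy partner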

apt-cache search `program`

This searches all the repositories in your sources list and lists every package matching the search term. It is a great way to find a program you are looking to download.

sudo apt-get dist-upgrade

This upgrades packages even when doing so changes their dependencies, intelligently installing new packages or removing existing ones as required.

This list is incomplete; it's a 'working copy' of the most commonly used apt commands. The important thing to remember is to run update before you start working with apt, so that you have the latest information, as well as any time you add or remove sources.

If you have any questions or use any commands relating to apt that I haven’t listed, please let me know in a comment below.

Quick Tip – Man Pages


Linux man pages are one of the most useful resources a Linux user has in their arsenal when it comes to learning about command line functionality, or just about anything else you could need to know about Linux. For example, today I wanted to transfer some log files from my server to my main computer for analysis by a small script I had written, and I decided to do this via ftp (just because I could). So the first thing I did was check the man page, using man ftp, to make sure I used the correct commands.

man ftp

This opens an interactive document: scroll down to read more, and press q to exit.

There are several different sections to the man system, each containing a different subset of commands and programs. This allows separate documentation for programs with similar or identical names. For example, the name apt has two expansions – annotation processing tool and advanced packaging tool. Usually typing a command into man produces the output you want; sometimes, though, you have to go searching.

man apt

This gives the man page for annotation processing tool.

man 8 apt

This gives the man page for advanced packaging tool.

If you come across someone else referencing a man page, or want to reference one yourself, the convention is to write it as program(section) – e.g. apt(8) for the advanced packaging tool, which you would open with man 8 apt.
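When you don't know the exact page name (or its section) in the first place, man can also search the short descriptions for you:

man -k apt        # list every page whose name or description mentions apt
whatis apt        # show the one-line descriptions of pages named exactly apt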

Selecting Hardware for a Server – Ubuntu Server


Hardware is KEY.

You must have the BEST.

I am here to tell you that IT'S ALL LIES, fed to us not just by the media but by big businesses and other corporate goons bent on taking us for as much money as they can. The truth of the matter is that unless you're running Windows 7 Ultimate packed full of processor-heavy programs, you only need the most minimal of hardware setups by today's standards to run a server – and even then it will most likely be underused.

Installing the Base System – Ubuntu Server


Now that we have the hardware, and the software set up and ready to go, we can begin the install. Make sure that you have set up the hardware with all the peripherals attached.

Step 1

First we must set the computer's BIOS to boot from the CD. To do that, turn the computer on. Depending on the chipset of the computer's motherboard there will be different keys for entering the BIOS setup – usually F1, F2, Del, Esc or F10. Press the appropriate key before the computer reaches the boot loader screen. Once in the BIOS, find the boot order setting and change the order so that the CD-ROM option (or similar) is number one.

Save and exit from the BIOS screen and insert the install disk from Part 3.

Step 2

The computer will boot from the disk. First it will ask for your language; hit enter for the default (English) or select your preferred language. Next it will bring up a basic option list. Select the “Install Ubuntu Server” option. This will load an ncurses screen (blue, white and red, with a monospace font). It will then ask for a number of options:

  • Select your language
  • Select your location

You will now be asked whether you want the installer to detect your keyboard automatically. I have never had this work successfully; however, you could give it a try, particularly if you have a non-standard keyboard. Otherwise select no.

If you selected no and you have a standard keyboard, it is most likely a US keyboard. There will be two or three screens where you must select your keyboard. If you have a non-standard keyboard, select your variation.

If you selected yes, you will be asked to press a series of keys and the installer will work out which keyboard you have.

Step 3

After selecting your keyboard the installer will load a few additional components and then set up DHCP. This means it is looking for an Internet connection. If you do not have an Internet connection, auto-detection will fail and you will have to select “set up DHCP later”; otherwise it will auto-configure and you should be able to move straight on.

If it fails and you have the server plugged in to the Internet, there are a few simple things you can check. If you have multiple Ethernet ports (multiple NICs), try moving the Ethernet cord to another port. Make sure that you can actually connect to the Internet from another computer (you should have another computer, or else you wouldn't need to set up a home server).
You can also try configuring the network settings manually instead of via DHCP, although that shouldn't be necessary.

You will then need to set the host name. You can set this to whatever you like.

You will then be asked whether the time zone detection has picked the correct zone. If it has not you will need to change it; otherwise hit enter.

Step 4

Now we have to set up the partitions. Select “Guided – use entire disk and set up LVM”. You will then need to select which hard drive you want to use; select the one that you want to install the primary operating system on. It must be no less than about 10 GB, although for the best results from a home server 60 – 80 GB would be optimal (so there is enough room to back up and store all of your important information).

Next, select yes to save the changes. The next screen will ask how much space you would like to use for the / (root) and swap partitions. I suggest using no less than 20 GB, or 40 – 50% of the drive. This allows for expansion later if necessary.

Save the Logical Volume Management configuration and write the changes to disk.

Step 5

After the completion of Step 4, the base system will be installed. At some point you will be asked to enter a name for the new user. This is the main user that controls the server; they will have root access through the sudo command. You cannot call this user admin or root. I suggest entering your name.

Next you need to enter the user's username. I usually make this my first name or administrator, but it is up to you (again, you cannot use admin or root). You will then be asked for the user's password. If the server will be accessible from the Internet – i.e. it serves web pages, has public FTP, etc. – you will need a strong password: make it more than eight characters, with a mixture of numbers, lower and uppercase letters and, if you're feeling paranoid, symbols as well. If your home server will never be accessible from the Internet then you can make the password as simple as you like (I still suggest at least four characters).

The last two parts of Step 5 are: 1 – do you want to encrypt your home directory? The answer here should be no; unless you are storing sensitive data it won't need to be encrypted. 2 – are you behind a proxy? Leave this blank unless you use a proxy server to connect to the Internet.

Your system will now start to install.

Step 6

After a short period of time you will be asked whether you want your system to update automatically. You should do this manually for greater control, so select “No automatic updates”.
A short while later you will be asked what extra software you would like to install. I prefer to add extra software manually, so I would just hit enter; however, if you would like the installer to set things up automatically, mark each of the server roles you want by scrolling to it and pressing the space bar, then hit enter when you are finished.
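If you skip the automatic selection here, the same server roles can always be added by hand once the system is up – for example (the package and task names shown are the usual Ubuntu ones):

sudo apt-get install openssh-server
sudo tasksel install lamp-server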

The installation is now nearly complete. All you have to do is hit yes when asked whether you would like to install the GRUB boot loader to the master boot record, and then, when prompted, remove the CD and restart the computer.

You have now installed the base Ubuntu server.

Next we will set up users and version control so that files can be better managed.
