Fuel PHP – First Impressions


There are two main competing objectives in any framework: it needs to be fast, and it needs to do everything the user needs. The irony is that users' needs are often greater than the speeds they desire. This dichotomy may drive diversity, but it also encourages the search for the perfect medium of speed and functionality.

Fuel – the PHP framework – is simple and easy to use, provided you are comfortable enough with the command line to change directories and edit a few files. It is fast. The first thing I did in testing was download it, install it and load it up in the browser. The page rendered in less than 0.01 seconds and used only 1.34 MB of memory (the same speed as, but a smaller memory footprint than, a base CodeIgniter install). The design of the home page was fresh and clean. If nothing else it leaves you feeling optimistic, which is always a good feeling to have when you first start using a new technology.

Something that strikes me as clever is the simplicity of the framework: it gets out of your way while you are coding. With other frameworks that attempt this, you always feel like you have to completely separate your code from the framework's and carefully knit them together later. Fuel has taken the design initiatives that web developers have been discovering over the last two years and applied them in a clean, fresh and unobtrusive style, allowing a free-flowing feeling while you code.

The Fuel website is a real gem (no Ruby intended 🙂). It is clean and simple and has everything you need. The documentation is good, but probably not awesome; it can be a little fiddly to navigate, though maybe I just need to spend a bit more time using it.

On doing some research, there appears to be a fair amount of overlap between CodeIgniter and Fuel, as much of the functionality is quite similar. In my opinion, however, Fuel is quite an improvement on CodeIgniter and other PHP frameworks. But hey, I only just started.

I would like to finish this short post with a quick question: do you use a PHP framework, and what do you think of the ones you use?


Making the jump – Codeigniter to Fuelphp


One of the issues that comes up for me often is the need to rapidly create a launchpad for my ideas. Usually this means some form of web page that performs a particular function, or perhaps a blog post or something creative. The problem is that creation takes time, and as a university student I don't have the time to spend fiddling around with a development environment to achieve exactly what I need.

Up until now I have used CodeIgniter as my default development platform. I haven't created any live pages, as I cannot afford a server off campus to host my "dev doodles" and my university blocks outside access to my home web server. Nonetheless, I have been using CodeIgniter for some time now, I love the simplicity of it, and I have built much of my own functionality into it. For those who do not know, CodeIgniter is like pre-made foundations for the backend of a website.

But today I ran into a realisation, like a truck bearing down on me. I have been noticing that my code is getting bloated; for example, I needed eight lines of code to load a view instead of one. While normally this wouldn't bother me, I have come to the conclusion that CodeIgniter is no longer performing optimally for my needs.

So I have decided to make the jump: to stop using CodeIgniter and start using FuelPHP. FuelPHP is a new discovery, sort of. I have seen it before, and even viewed its source, but until now it has been in development, having only just released version 1.0.

Over the next few days I will transform my current project, a personal CMS for my dad's photography, from CodeIgniter code into FuelPHP code. I am hoping for a smooth transition, but we all know that's not going to happen; life's just not that fair. So I will document the transformation and let you know just how it works out.

Working For a Client – Building a Computer System


At one point in time, anyone with a rough to well-formed knowledge of computers has been asked to help out a family member or friend with a 'silicon-based' issue. It is usually a case of someone having clicked on a suspect link or opened an attachment from a phishing email, and more often than not there is either a quick fix or no fix (other than a clean OS install). Sometimes though, very occasionally, you might get asked to set up an entire home computer system.

This is intended to be a series of posts, not so much on the nitty-gritty of setting up individual components, but on creating an entire workable system. At the conclusion of the series I will release the backup program I wrote for the system, in a form that everyone can use.

A point that should be made is that this is the first time I have done this at a non-personal level. The resources I have used range from manuals and forums to wikis and websites, and they are by no means a complete body of knowledge, nor should they be expected to be. The point is that what I have done is not necessarily the quickest or most efficient way of doing things (although it might be). Hopefully, if nothing else, it leads to a thorough discussion and a greater understanding of computer systems as a whole.

Project Outline

This project is centered around a small photography hobby as well as general data storage. There were a number of considerations when taking into account the client's needs:

  1. How much data storage would be needed in the long and short term?
  2. How often would files need to be moved around and accessed?
  3. What peripherals would be used now and in the future, and how difficult would they be to add at creation date or later?
  4. What would the needs of the client be in 1, 5 and 10 years' time?

Concern 1

The first concern is how much data storage is needed. This usually isn't the first question I would ask when discussing a client's needs; however, I soon found that the physical requirements vary greatly depending on the size of the system. If only a small volume of storage is needed, then a centralized server may be unnecessary. If a large system is needed for working with large files, then working over a network is preferable to adding a USB HDD to a pre-existing computer.

The client is a hobby photographer who often works with large photos (>20 MB) in large collections. They need fast access to the images, as well as confidence that they cannot lose any data by accident. From the information I was able to gather, I calculated that a file space of approximately 4 TB (terabytes) would be sufficient.
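For a rough sense of how a figure like that comes together (the numbers below are illustrative assumptions, not the client's actual figures):

20 MB per photo x 10,000 photos per year = ~200 GB per year
~200 GB per year x 10 years = ~2 TB, roughly doubled for edited copies and growth = ~4 TB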

Concern 2

The second concern, following directly from the first, is how often the files will need to be accessed and used. This is relatively straightforward: in this case, the files need to be both backed up and stored on a local server, and will be accessed many times a day. This helps us determine how to arrange the HDD arrays and define the backup scripts.
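To give an idea of the kind of backup script this leads to, here is a minimal sketch (the paths are made up for illustration; the real script comes later in the series):

# mirror the photo share onto the backup array, removing files deleted from the source
rsync -a --delete /srv/photos/ /mnt/backup/photos/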

Concern 3

The third concern is what peripherals will be needed now and in the future, and how difficult they will be to set up. First and foremost are the cameras themselves, then any portable devices such as tablet PCs and smartphones. There are also guest computers for collaborative work and visitors, a potential home theatre PC and, lastly, printers.

Most of these are relatively straightforward. Most just require the setup and regular rotation of a password, the cameras come with software, and the media-based computer can be built on a UPnP system. The only difficult setup would be the networked computers.

Concern 4

The fourth, final and most important concern is: what are the FUTURE needs? This is a difficult question, and one I was not able to answer fully, given how hard the technology industry is to foresee. Who knows what level of technology will be accessible in 1-5 years' time (at the time of writing, only ADSL internet at 1500 kbps is accessible, and the affordability of wireless devices is very low).

Conclusion

This leaves us at the starting point of our build. We have a general overview of the needs of our client, and a rough outline of the types of services that will be integrated.

In the next part of the series, I will be going through the basis of the network topology and the hardware that was purchased prior to and during the build.

If there are any questions or comments please leave a note. I would love to start some discussion on this topic, as it seems to be requested more and more by the community.

Installing Ruby RVM on Ubuntu and Fedora


Installing Ruby on Linux is easy: just run sudo apt-get install ruby or yum install ruby. However, managing versions can be difficult; scripts that run against one version of Ruby on your computer may not run as well against another. So wouldn't it be great if you could have two or more versions of the Ruby interpreter on your computer?

To do this we will use RVM – the Ruby Version Manager. This is a great program, allowing you to have complete, separate environments for running Ruby, including the interpreter and gems. Aside from a few intricacies in setting it up, it is relatively easy to do.

My instructions are fairly similar to the official RVM instructions. You can find them at rvm.beginrescueend.com, along with more about RVM on their index page.

OK. Let's go.

Firstly, make sure that you have a version of Ruby installed, along with curl for downloading the install script (most distributions come with these as standard, although minimal setups will not).

UBUNTU
sudo apt-get install ruby curl

FEDORA
yum install ruby curl
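You can check which version was installed with ruby -v (the output below is only an example; yours will differ):

ruby -v
# e.g. ruby 1.8.7 (2010-01-10 patchlevel 249) [x86_64-linux]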

Now make sure that you are logged in as the user you want to install Ruby for. This may be a no-brainer; however, if you have more than one user, you may wish to install Ruby for everyone or for a specific person. I would strongly suggest NOT installing RVM for everybody, because different users may choose to run different Ruby interpreters, which may cause problems later on down the track.

Time to get dirty with the command line.

Open up a terminal session and cd to your home directory. The first thing to do is download the install script and run it in bash:

bash < <(curl -s https://rvm.beginrescueend.com/install/rvm)

Once the script has run, issue the following command so that RVM is loaded every time you log in:

echo '[[ -s "$HOME/.rvm/scripts/rvm" ]] && . "$HOME/.rvm/scripts/rvm" # Load RVM function' >> ~/.bash_profile

Now close all your open terminal windows and launch another. When the new one is open, execute the following:

source .bash_profile

Now, to test that we have done all of the above correctly, use the following command and compare the results:

type rvm | head -n 1

This should output 'rvm is a function'.

That's it: Ruby Version Manager has been installed. Unfortunately, just having it installed is not much good on its own; we need to look at gemsets and Ruby versions, but that is for another post.
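As a small taste of what's to come, once RVM is installed the basic workflow looks something like this (the version and gemset names are just examples):

rvm install 1.9.2
rvm use 1.9.2 --default
rvm gemset create myproject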

Ruby, The OOP language of the future


Ruby is AWESOME.

Yes, that's right: awesome. Ruby is a language that is both simple and powerful. It is completely object-oriented; everything you work with is an object, be it numbers, strings, data structures or any program construct you can think of.
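You can see this for yourself with a couple of one-liners from the command line (assuming Ruby is installed; on Ruby 1.8 the first line prints Fixnum):

ruby -e 'puts 42.class'
ruby -e 'puts "hello".reverse'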

Created in Japan by Yukihiro "Matz" Matsumoto and released in 1995, it has since grown into a widely used language, sitting in the top 10 languages on GitHub's project language page. This is mostly due to its natural flow when programming, with a limited amount of clutter.

Ruby is also an interpreted language. We will go into this more later, but essentially it means that Ruby code runs inside a special program built to take the code and execute it. It is not necessarily the fastest interpreted language, but it is one of the most powerful. There are a number of different interpreters, using different mechanisms and built in different languages (the major ones being C and Java). This lets the user choose an interpreter to suit the execution environment.

Let's look at some code; how about a hello world program? But first we need to install Ruby on our machine. For Ubuntu and Fedora respectively, the installation commands are as follows.

sudo apt-get install ruby1.8 rubygems1.8

yum install ruby.i386 ri.i386 ruby-mode.i386

Now create a file named helloworld.rb and put the following on the first line.

puts "Hello World!"

Yep, that's it. Run it from the command line with ruby helloworld.rb and the output will be the string Hello World! This ease of use comes largely from how much work each line of Ruby does; one line of Ruby is often said to do the work of around 15 executed lines in a lower-level language.

Before continuing with more examples, I will finish off this post with a quick word on Ruby on Rails, the web application framework that comes with its own built-in web server and many other features.

Rails, as it's commonly known, was first released to the public in 2004 by David Heinemeier Hansson, and is now actively developed by a community of around 1,600 contributors led by a core team. The core concept behind Rails is that it lets you write the beautiful code of your Ruby apps in a web environment.

That's about all for a first introduction to Ruby. Next time I will show you how to set up a working environment for deploying Ruby on your machine (Linux first, and then Windows and Mac). In the meantime, why not read up on the Ruby programming language?

Quick Tip – Connecting to Another Machine (Linux to Linux)


In today's quick tip I will discuss methods of connecting to another computer. There are many different ways of doing this, even more if you want to mix and match between Windows, Mac OS X and Linux. Today I will just be looking at a Linux-to-Linux connection.

First off, today I am going to connect to a system that I built for my parents (details of the build to come) to grab a few photographs that my dad took on his latest holiday. They have a central Linux system with a file server exposed through port forwarding on port 400.

The first method of connecting is a simple ssh command:

ssh dad@X.X.X.X -p 400

Here dad is my dad's username, X.X.X.X is his IPv4 address, and -p 400 says that I want to connect on port 400, which tells the router at my dad's house that I actually want to talk to the file server. What I now have is a connection to the file server and access to everything on that machine, with the input and output sent to my machine.

So what can I do with this? Funnily enough, this simple command is powerful if you want to use the remote computer. You can run an update, run any command line program (cat /etc/passwd :P) or execute shell scripts to achieve tasks like backups or batch file manipulation. To copy files between computers, though, we are going to need another command.
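You can even run a single command on the remote machine without opening a full shell; for example (df -h here is just an illustration):

ssh dad@X.X.X.X -p 400 'df -h'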

At this point we have three different options: two command line choices and one GUI (the GUI only needs to be on your computer, not the remote one).

scp -P 400 dad@X.X.X.X:/home/dad/image.jpg /home/me/image.jpg

or

sftp -oPort=400 dad@X.X.X.X
get /home/dad/image.jpg

or

Places -> Connect to Server -> fill out the needed information -> Connect
enter password -> navigate through the window to the file -> drag and drop to the desired location

The first option is my favorite and the simplest command if you know exactly what you want and where you want to put it. The first part logs onto the SSH server and locates the file for copying, and the second part says where the file should be put. While it is simple, it has no margin for error: it either works or it doesn't, and a mistyped path can do some surprising things.

The second option is the best for looking for and finding files without a GUI. You first log in to the server in SFTP mode and then find and "get" (download) the file. I don't really use this option often, but it's handy to know.

The third option is great if you want a graphical way of interacting with files on an external server. It is very simple once you have seen what to do. Hopefully I will be able to do a quick video showing how, which I will link at a later stage.

This outlines the ways you can connect to different Linux boxes, either on your own subnet or over the internet. Stay tuned for more posts on connecting computers, in particular Mac-to-Linux and Windows-to-Linux connections.

Using APT – The Complete How to Guide


Debian's APT (Advanced Packaging Tool) is an amazing program that gives the distributions using it a great degree of control. It enables users to control the software that runs on their computer, making it exactly the way they like it, without any extra programs that never get used. This is unlike any other non-*nix operating system, giving a clear advantage, in my opinion, to its users.

Essentially APT is a front end to dpkg, a base-level program for installing, removing and providing information about .deb packages. Going into depth on dpkg and .deb files is for another time, but briefly: a .deb package is a way of putting a program into a container that makes its installation much simpler for the end user, and dpkg was written to handle these packages.
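To give a feel for the level dpkg operates at, here are two typical uses (the package name is a placeholder):

# install a downloaded .deb file directly
sudo dpkg -i somepackage.deb
# list the files that an installed package put on the system
dpkg -L somepackage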

APT has also been ported to work with the RPM package management system, via the APT-RPM project (explained later). RPM was created for the Red Hat Linux distribution as a simple package management system with solid dependency handling.

Now APT is available on all Debian-based distributions such as Ubuntu, Mint, Knoppix and a number of others, as well as Solaris (no longer open). This means that a large share of Linux users have access to APT. Let's see the program in action. Open the terminal and enter the following:

sudo apt-get update

To run APT you must run it as the superuser or with superuser privileges. You will also notice the '-get' suffix on the apt command. This is because the program has many sub-layers, with others such as apt-cache, apt-secure and apt-key. The update command fetches the latest package information from the sources list.
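For instance, two of those other sub-programs can be tried straight away; both are read-only and safe to run:

apt-cache stats
apt-key list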

The sources list holds the locations of the Debian-packaged programs that apt-get can install via direct download. Finding the best sources for apt-get will be covered in a quick tip; for now we will assume that your sources are the optimal ones. Now let's perform a second command, related to sudo apt-get update: upgrade.

sudo apt-get upgrade

This command upgrades the currently installed programs in a number of steps. First it checks whether all installed packages are at their latest versions. If there are new versions, it does a dependency check and asks if you would like to upgrade, letting you know of any other packages that need to be installed (hit Enter here). It will then proceed to download and install until your installed programs are at their newest versions.
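A common convenience is to chain the two commands together, so the package information is always fresh before upgrading:

sudo apt-get update && sudo apt-get upgrade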

How about installing programs? Simple! Just use the sudo apt-get install command. The trick is knowing exactly what package you need. For example, I am writing a program in C that works with a database. I need the MySQL header files, which are not installed by default. The trick with header files is to download the development package of the program you need, so I used the following install command.

sudo apt-get install libmysqlclient-dev

These three main commands can handle 90% of your APT needs. Below is a list I have compiled of commands and their functions that you may come across needing in your use of the program.

sudo apt-get remove `program`

This removes the specified program, although its configuration files still live on the system (apt-get purge removes those as well).

sudo apt-get autoremove

This automatically removes any packages that were installed as dependencies and are no longer depended on by any other programs.

sudo apt-get clean

This cleans out the local archive of downloaded package files, removing archives that are no longer needed and freeing up space.

sudo apt-get autoclean

This automatically removes only those archived package files that can no longer be downloaded from the repositories (i.e. outdated versions).

sudo apt-get source `program`

This downloads the source version of a package (if available), which will need other install methods, so that you can examine the source code and build it exactly the way you choose.

deb `link` `version` `relationship`

This is the format of a line in your sources list. Link is the source location (a URL); version is the codename of the release you are running, e.g. Ubuntu's 'hardy' (or your equivalent), and Debian's 'lenny' for v5 or 'squeeze' for v6. The relationship is the repository component, and is a little more complex: most likely it will be main, but it may equally be restricted, universe or multiverse (Ubuntu also has a separate partner repository).
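For example, a complete line for an Ubuntu 'hardy' machine might look like this (the mirror URL is an example; use the one for your region):

deb http://archive.ubuntu.com/ubuntu/ hardy main restricted universe multiverse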

apt-cache search `program`

This searches all repositories and lists every package matching the search term. It is a great way to find a program you are looking to download.

sudo apt-get dist-upgrade

This upgrades packages while intelligently handling changed dependencies, installing new packages or removing existing ones where necessary.

This list is incomplete; it's a 'working copy' of the most commonly used APT commands. The important thing to remember is to run update before you start working with APT, and any time you add or remove sources, so that you have the latest information.

If you have any questions or use any commands relating to apt that I haven’t listed, please let me know in a comment below.

Quick Tip – Man Pages


Linux man pages are one of the most useful resources a Linux user has in their arsenal when it comes to learning about command line functionality, or just about anything you could need to know about Linux. For example, today I wanted to transfer some log files from my server to my main computer for analysis by a small script I had written, and I decided to do this via FTP (just because I could). So the first thing I did was check the man page, using man ftp, to ensure I used the correct commands.

man ftp

This opens an interactive document. You can scroll down to read more, and press q to exit.

There are many different sections in the man system, each containing a different subset of commands and programs. This allows for separate documentation of similarly or identically named programs. For example, the name apt has two expansions: annotation processing tool and advanced packaging tool. Usually typing a command into man produces the output you want; sometimes, though, you have to go searching.

man apt

This gives the man page for annotation processing tool.

man 8 apt

This gives the man page for advanced packaging tool.

If you come across someone else referencing a man page, or you want to reference one yourself, the convention is to write it as program(section); for example, the advanced packaging tool page is written apt(8), which you open with man 8 apt.
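If you are not sure which sections a name appears in, you can ask man itself (whatis prints a one-line summary per section; man -k searches page descriptions):

whatis apt
man -k apt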

Update – 3 new server tutorials


A while ago I wrote the first two tutorials on setting up a home server, and slated three more for later release, covering hardware, burning an ISO and installing the base system. Well, today they have been released.

Hopefully they work well and no one falls into any traps following them. I intend to follow these up with some more complex tutorials on user-based management for specific tasks, such as (up next) creating users with default folders and files, as well as Git version control, creating a simple backup process, working with a mail program and many others.

All the tutorials can be found linked on the build a home server page.

I look forward to your responses and would love to hear of your success/failure as well as any changes you feel need to be made.

Selecting Hardware for a Server – Ubuntu Server


Hardware is KEY.

You must have the BEST.

I am here to tell you that IT'S ALL LIES, fed to us not just through the media but by big businesses and other corporate goons bent on taking us for as much money as they can. The truth of the matter is that, unless you're running Windows 7 Ultimate packed full of processor-heavy programs, you only need the most minimal of hardware setups by today's standards to run a server, and even then it will most likely be underused.
