Control VMware Fusion from the Command Line

Background

Virtualization has been a key technology for me as a contractor. Because I have multiple clients, keeping their projects sequestered from each other is a snap: one or more VMs (virtual machines) per project. Development, testing, and in some cases deployment all happen on separate VMs.

I’ve been using VMware Workstation since it first came out in 1999. The ability to move VMs seamlessly between Windows, Linux, and OS X is a key reason I’ve stuck with VMware for so long.

Automating tasks via scripting is one of the many reasons I insist that my workstation run a Un*x of some flavour. I’ve been quite happily using OS X as a software developer for a while now; however, VMware Fusion (for OS X) doesn’t present the full set of features available in VMware Workstation (for Linux and Windows). Fortunately, the same tools are hidden under the hood.

The Executable vmrun

VMware provides a tool called vmrun that allows common operations to be performed on VMs — starting, suspending, taking snapshots, etc. On OS X it’s tucked away in the VMware Fusion bundle under Contents/Library.

In my shell startup I add the directory to PATH:

# VMware Fusion
if [ -d "/Applications/VMware Fusion.app/Contents/Library" ]; then
    export PATH=$PATH:"/Applications/VMware Fusion.app/Contents/Library"
fi

Find the Virtual Machine’s .vmx File

To use vmrun you need the path to the .vmx file that resides inside the VM bundle on OS X. For example, I have a VM with an install of Red Hat Enterprise Linux 7:

[Screenshot: RHEL7 VM]

This is actually not a single file but a directory called RHEL7.vmwarevm. The contents can be seen in the Finder by right-clicking:

[Screenshot: Show Contents]

This will open up the directory and show the various files that make up the virtual machine.

[Screenshot: Inside the Bundle]

As can be seen, the .vmx file is prominently displayed.

Put the Pieces Together

To start the VM from the command line, one uses the “start” parameter. For example, if the above virtual machine were in one’s home directory, one could type:

$ vmrun start ~/RHEL7.vmwarevm/RHEL7.vmx

vmrun Commands

If one runs vmrun without parameters it prints a fairly long summary of the commands it accepts. One can:

  1. control the power state of the VM,
  2. control snapshots,
  3. perform various operations inside a running VM, and
  4. other operations such as installing tools and cloning.

A short list of common operations:

Description       Command    Parameters
List running VMs  list
Start a VM        start      /path/to/vmx/file
Suspend a VM      suspend    /path/to/vmx/file
Take a snapshot   snapshot   /path/to/vmx/file snapshot name

Additional Information

VMware publishes a PDF guide to vmrun.

Posted in SysAdmin

Controlling RHEL 7 Services

One change from RHEL/CentOS 6 to the RHEL 7 beta is how services are controlled. The old service and chkconfig commands are replaced with systemctl. These are my quick and dirty notes compiled from the Fedora Project systemd and SysVinit to Systemd Cheatsheet pages.

Basic Control

The old service command’s replacement is very similar, with service names having .service appended:

systemctl start|stop|restart|status name.service

For example:

systemctl restart httpd.service

Service Boot-time Control

To get a list of available services and their boot time status:

systemctl list-unit-files --type=service

To set a service to start (or not) at boot time:

systemctl enable|disable name.service

For example:

systemctl enable mariadb.service
systemctl enable httpd.service

Run Levels

Run levels are called targets, have been simplified, and have names now. An incomplete list:

  1. poweroff.target (run level 0)
  2. rescue.target (single-user mode; run level 1)
  3. multi-user.target (normal operation; run level 3)
  4. graphical.target (normal operation with a GUI; run level 5)

To set the default run level:

systemctl set-default multi-user.target

To change the run level:

systemctl isolate name.target

For example, to enter single user mode:

systemctl isolate rescue.target

And the appropriate services will be stopped and started.

Additional Reading

  • A description of how systemd fits into the boot process here.
  • Another nice summary here.

Updates

2014-07-17
Updated setting the default run level per CertDepot’s suggestion. Added the “Additional Reading” section.
Posted in SysAdmin

Display a Server-Supplied Drop Down List Using AngularJS

These are my notes on displaying a list of server-supplied objects in a drop down list using AngularJS.

Background

I have a server that supplies lists of lookup objects that are used in an AngularJS-based single-page application (SPA). The SPA obtains a list through an API call to the server. The server returns an ordered list of JSON objects. Every object in every list includes a key value, a display value, and supplementary data. For the purposes of this article, only the key and display values are of any concern.

For example, the SPA needs a list of units of measure. The server supplies a list of objects along these lines, where the key value is called code and the display value is called display:

[
    {
        code: "L",
        display: "L",
        description: "litres"
    },
    {
        code: "ML",
        display: "mL",
        description: "millilitres"
    },
    ... etc ...
]

In the SPA code, each lookup table is wrapped in its own Angular service.

From List of Objects to Dropdown Using <select>

Angular can be told to create a dropdown list using an array of objects thus:

<select ng-model="product.uom"
        ng-options="u.display for u in units">
</select>

Here ng-options tells Angular to build the dropdown list showing the display attribute of each object. Whenever the user chooses an item, the entire associated object is stored in $scope.product.uom (uom means units of measure). For my purposes this is very handy since I want access to the entire object.

Defaulting to a Value

This works beautifully until an edit page is shown. When displaying data from the server, the dropdown shows a blank selection even though $scope.product.uom contains an object with all the correct values!

The problem is that Angular matches based on object references, not object contents. This can be illustrated thus:

var a = {foo: "bar"};
var b = {foo: "bar"};
var c = a;

Variables a and b contain two separate objects that by chance have attributes with the same values. Variables a and c contain the same object pointer.
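Extending that snippet, strict equality makes the distinction concrete:

```javascript
var a = {foo: "bar"};
var b = {foo: "bar"};
var c = a;

// Objects compare by reference, not by contents:
console.log(a === b);  // false: distinct objects, identical contents
console.log(a === c);  // true: the very same object
```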

In the example above, Angular will recognize the value in $scope.product.uom only if it points to an object in the master list $scope.units. The fact that the server-supplied object has identical attributes is irrelevant — Angular only cares whether the object pointers are identical.

To get around this, when an object is loaded from the server for editing, the lookup values are replaced with pointers to the corresponding objects in the dropdown list. An unsophisticated but functional bit of code to perform this substitution might be:

// Wrapper function to retrieve a product
// from the server, keyed on productId.
apiProduct.lookup(productId, function(product) {
    $scope.product = product;

    // Replace the server-supplied lookup value
    // with the matching value
    // in the $scope.units array.
    $scope.product.uom = lookup_by_code(product.uom.code, $scope.units);
});

function lookup_by_code(code, data) {
    for (var i = 0; i < data.length; i++)
        if (data[i].code == code)
            return data[i];

    return null;
}
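As a standalone illustration (the units data below is invented for the example), the substitution restores reference identity:

```javascript
// Demonstration of the reference fix, using made-up lookup data.
var units = [
    {code: "L",  display: "L"},
    {code: "ML", display: "mL"}
];

function lookup_by_code(code, data) {
    for (var i = 0; i < data.length; i++)
        if (data[i].code == code)
            return data[i];

    return null;
}

// A "server-supplied" object: identical contents, different reference.
var serverUom = {code: "ML", display: "mL"};
var fixedUom = lookup_by_code(serverUom.code, units);

console.log(serverUom === units[1]);  // false: Angular would show a blank
console.log(fixedUom === units[1]);   // true: Angular selects the item
```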

Update

There is a JSFiddle that demonstrates the value/reference problem concisely.

Posted in Programming

Upgrading Node.js using npm

The Node.js ecosystem provides a tool to update Node from within npm called, simply, “n”.

Install n thus:

sudo npm cache clean -f
sudo npm install -g n

I don’t know that clearing the cache is actually necessary, but a number of people have recommended doing so.

Update to the latest version of node using:

sudo n stable

n allows node versions to be changed easily. The n package listing has details.

Posted in Programming

Strange npm Errors

I’ve gotten some strange errors with npm that were resolved by clearing npm’s cache. The brute-force method is:

sudo npm cache clean -f

This falls under the same category as strange C/C++ behaviour resolved by removing all .o files, or strange Python behaviour resolved by removing all .pyc files. Caching or otherwise keeping intermediaries around is a boon for speed, but can bite when the cache gets stuffed up.

Posted in Programming

How to Access a Local Node Server Using Websockets

Background

The AngularJS web application that I’m working on runs on a remote server, but needs to access laboratory instruments connected to the local computer that is running the web browser. JavaScript running in the web browser runs in a sandbox and is prohibited from accessing local hardware.

We explored several possibilities of how to work around this and found a fairly simple solution. The local computer runs a small Node.js program to act as glue between the instrument and the local web browser. Node.js communicates with the instrument’s USB serial port using the Node serialport plugin.

Node also runs express to serve up a simple AngularJS web application for diagnostics. We also connect socket.io to the express instance to provide an interactive communication pipeline between the Node.js program and the main web application.

The Problem Space

One of the traditional ironclad security paradigms of web programming is that JavaScript served up from one server cannot access another server. This works for nearly all web sites, but there are instances where sharing resources across servers is desirable. For example, if our web app can communicate with local laboratory instruments, it’s a big win for my client.

The Approach

The W3C has published the Cross-Origin Resource Sharing (CORS) specification, which provides a standardized method for doing this. To implement it, the non-origin server (in our case, the Node.js server) has to provide HTTP headers to the web browser indicating that it will accept the cross-origin request.

If these headers are missing, the web browser will not complete the HTTP request. For example, Firefox 29 — in its debug console — will report

Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://localhost:8888/socket.io/1/?t=1402443850582. This can be fixed by moving the resource to the same domain or enabling CORS.

This means that the web browser is denying access to Node.js running on localhost because (a) it is a different destination than the original server (the origin) and (b) the Node.js server is not granting permission for the cross-origin request.

Thus the problem boils down to coaxing socket.io to provide those headers when the web browser connects.

The Snag

I have not been able to get this to work on socket.io version 1.0 and higher. To avoid wasting time I reverted to pre-1.0 thus:

npm install --save socket.io@"<1.0"

In the Node.js program’s main app.js, I added one line to allow connections from any cross-origin server (see line 2 below). Note that this is development code running on an isolated network inaccessible from the Internet. One should think hard before leaving this open to all comers.

var io = require('socket.io').listen(server);
io.set('origins', '*:*');
server.listen(8888);

If you look in the socket.io source file ./lib/manager.js you’ll see the lines:

  if (origin) {
    // https://developer.mozilla.org/En/HTTP_Access_Control
    headers['Access-Control-Allow-Origin'] = origin;
    headers['Access-Control-Allow-Credentials'] = 'true';
  }

This may prove useful during debugging if adding the set('origins' ... call doesn’t work as expected.

Unanswered Questions

This solution doesn’t appear to work for socket.io version 1.0 and higher.

References

  • Cross-Origin Resource Sharing: the official W3C documentation.
  • Using CORS: an introduction to CORS.
  • Enable Cross-Origin Resource Sharing: sample code.
  • Socket.io doesn’t set CORS header(s): a question on Stack Overflow.

Posted in Micro & Hardware, Programming

Python 2.7, Django, and MySQL on OS X

For some reason, getting Python and MySQL talking on OS X has been an annoyance. These are my notes for getting the two to talk to each other in a Python 2.7 virtual environment for a Django project.

The Django 1.6 docs recommend using the MySQLdb package. Its installation uses the mysql_config executable.

I have the following set up:

  1. MySQL 5.6.16
  2. Python 2.7.3
  3. PyCharm 3.1.1, which was used to create
  4. a Python virtual environment with pip located in $HOME/upharm27

$ locate mysql_config
...
/usr/local/mysql-5.6.16-osx10.7-x86_64/bin/mysql_config
...
$ export PATH=$PATH:/usr/local/mysql-5.6.16-osx10.7-x86_64/bin
$ ~/upharm27/bin/pip install mysql-python

I have gcc 4.7.2 available, but curiously, the installer gave the following message:

Installing collected packages: mysql-python
Running setup.py install for mysql-python
gcc-4.2 not found, using clang instead

The install succeeded using clang, so it’s nothing more than a curiosity at this point.

I was able to verify that the package was installed with:

$ ~/upharm27/bin/python
Python 2.7.3 (v2.7.3:70274d53c1dd, Apr  9 2012, 20:52:43)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import MySQLdb
>>>
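Once the import works, wiring Django to MySQL is a settings change. A minimal sketch of the DATABASES entry for Django 1.6, where the database name, user, and password are placeholders:

```python
# settings.py (sketch): Django 1.6 talking to MySQL via MySQLdb.
# NAME, USER, and PASSWORD are placeholders; substitute your own.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydatabase',
        'USER': 'myuser',
        'PASSWORD': 'mypassword',
        'HOST': '127.0.0.1',
        'PORT': '3306',
    }
}
```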

Update

I just followed these notes using the latest Xcode 5.1 on OS X 10.9 and the mysql-python install failed. Apparently the clang Apple ships produces errors on unknown flags by default. I was able to get around this by:

$ export CFLAGS=-Qunused-arguments
$ export CPPFLAGS=-Qunused-arguments
$ ~/upharm27/bin/pip install mysql-python

Thanks to the good folks at Stack Overflow. More details may be found here.

Posted in Programming