This is a cheap lab bench power supply that, despite its flaws, is a surprisingly solid piece of equipment. I wouldn’t call this a precision power supply, but its tolerances and ripple are acceptable for ordinary bench work. (And really, if you’re doing precision work, you’ll invest in a precision supply anyhow.)
There are two methods for communicating with the unit: a serial interface and a proprietary binary USB interface. One may use either the DB-9 connector or USB to access the serial interface.
I wrote a Python wrapper for the serial protocol to encapsulate the various tidbits of information I’ve encountered on the Internet. The firmware is buggy and there are various gotchas. Hopefully the Python wrapper will address the worst of the problems one might encounter.
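As a rough illustration of the approach (the class name, command strings, and line terminator below are hypothetical placeholders, not the supply’s real protocol or the actual wrapper), such a wrapper might be structured like this:

```python
class SupplySerial:
    """Hypothetical wrapper around a bench supply's serial protocol.

    The transport is anything with a write() method, so a pyserial
    serial.Serial instance or a test double both work.
    """

    def __init__(self, transport):
        self.transport = transport

    def _send(self, command):
        # Commands are sent as ASCII lines; the command names and
        # "\r\n" terminator here are assumptions for illustration.
        self.transport.write((command + "\r\n").encode("ascii"))

    def set_voltage(self, volts):
        self._send("VSET %.2f" % volts)

    def output_on(self):
        self._send("OUT 1")
```

Funneling every command through one class like this gives the firmware workarounds a single place to live, instead of scattering them through application code.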
Chris Coyier has a nice overview of CSS specificity, or “why doesn’t my CSS override their CSS?”
An acquaintance sent this to me last year and I thought the link might be useful to others before I deleted the old e-mails.
I haven’t looked at Tower’s GUI since their version 1 beta, so I can’t comment on their product. However, git is git, and that makes the cheat sheet useful.
(I settled into using SourceTree for my daily work a while ago.)
The G-code for turning a spindle on is M3, but that command alone will not work; it needs a non-zero speed parameter. For example:

M3 S1000

will turn the spindle on (the S word sets the spindle speed; any non-zero value will do). To turn it off, use M5.
As always, stay safe.
I have a large repository that takes up a modest number of gigabytes. When I attempted to push it to a new remote repository, the push failed, complaining that the pack size exceeded the maximum allowed.
First, let’s get one thing out of the way: repacking the local repository or fiddling with the pack.packSizeLimit configuration setting won’t fix the problem. That will simply tidy up your local machine.
As I understand it (corrections welcome), the problem is a collision of several things. When performing this massive beginning-to-end push, git creates a massive pack on the fly and pipes that across the network to the remote machine. The remote machine needs to be able to perform memory mapping on this huge wad of data. File system, CPU architecture, and memory needs have to be satisfied for this to work. Otherwise, the pack size error is reported and the push fails. Annoyingly, this can happen after you’ve transferred gigabytes of data across a network with a bottleneck, completely wasting a lot of time.
Fortunately the work-around is simple. Push the repository in chunks, working your way up the tree.
If your repository has a lot of branching, you may be able to push a branch at a time, as the generated pack will be for that branch.
This repo of mine has a very linear history, and, feeling a little lazy, I used my git GUI (SourceTree) to make a temporary branch about a third of the way up the tree, and pushed that. I moved the temporary branch another third of the way up the tree, and pushed that. Finally I could push master and remove the temporary branch.
If the repository were big and hairy enough, one could write a script to traverse the tree and programmatically push at appropriate commit points, but for me it’s an exceptional situation that doesn’t warrant that type of effort.
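For what it’s worth, such a script could be fairly short. The sketch below (assuming a remote named origin, a branch named master, and an arbitrary chunk size) walks the branch’s first-parent history oldest-first, pushes an intermediate commit every so many commits, and finishes with an ordinary push:

```python
import subprocess

def chunk_points(revs, chunk_size):
    """Given revisions ordered oldest-to-newest, return one
    intermediate revision per chunk_size commits."""
    return revs[chunk_size - 1::chunk_size]

def push_in_chunks(remote="origin", branch="master", chunk_size=1000):
    # List the branch's commits, oldest first; --first-parent keeps
    # the walk linear even when there is some branching.
    out = subprocess.run(
        ["git", "rev-list", "--first-parent", "--reverse", branch],
        capture_output=True, text=True, check=True).stdout
    revs = out.split()
    for rev in chunk_points(revs, chunk_size):
        # Push an intermediate commit onto the remote branch so each
        # transfer generates a smaller pack.
        subprocess.run(
            ["git", "push", remote, f"{rev}:refs/heads/{branch}"],
            check=True)
    # The final push brings the remote fully up to date.
    subprocess.run(["git", "push", remote, branch], check=True)
```

The chunk size is a guess; if a push still trips the pack limit, halve it and try again from where the last successful push left off.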
We have some maintenance tasks that take a while to run and that we’d like to launch from the web browser. Programmatically, the most natural thing is to spawn a process that performs the task and completes asynchronously. The results are recorded in the database for later harvesting.
As far as I can tell, the same general rules for forking apply when forking from within Django: close database connections, close open file handles, and release other resources that cannot be shared across process boundaries.
Django, apparently, will automatically re-connect to the database if the connection is closed. This makes the job much simpler. Some web sites say that the parent process should close its database connection; others say that the child process should close its.
In the face of this conflicting information, I chose to close the parent process’s database connection before calling os.fork(). Reöpening a database connection incurs a small penalty, but that is not a concern here, as it happens only once.
import os
from django.db import connection

connection.close()  # don't fork with a database connection open
new_pid = os.fork()
if not new_pid:
    # Child: perform the task here, then exit without running
    # the parent's cleanup handlers.
    os._exit(0)
Thus far there seem to be no side effects from taking this approach. As always, additional information is welcomed.