
Posts about technology (old posts, page 5)

Getting free disk space in Python

To calculate the amount of free disk space in Python, you can use the os.statvfs() function. For some reason, I can never find the docs for os.statvfs() on the first or second try (it's in the "Files and Directories" section in the os module), and I never remember how it works, so I'm posting this as a note to myself, and maybe to help out anybody else wanting to do the same thing. A simple free space function can be written as:

import os

def freespace(p):
    """
    Returns the number of free bytes on the drive that ``p`` is on
    """
    s = os.statvfs(p)
    return s.f_bsize * s.f_bavail
I use the f_bavail attribute instead of f_bfree, since the latter includes blocks that are reserved for the super-user's use. I'm not sure, however, about the distinction between f_bsize and f_frsize.
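For what it's worth, here's how I'd call it; the path and the unit conversion are just for illustration:

# using the freespace() function above; '/home' is only an example path
free_bytes = freespace('/home')
print("%.1f MB free on /home" % (free_bytes / (1024.0 * 1024.0)))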

Got my wireless working in Linux 2.6.24

I previously posted that I had problems getting my wireless device working with the new 2.6.24 kernel, running into a kernel oops in the process. In kernels prior to 2.6.24 I used the bcm43xx driver, and let NetworkManager handle connecting to wireless networks. I've since had some time to play around with 2.6.24 a bit more, and I'm happy to say wireless is working now! Here's what I did:

- Install b43-fwcutter
- Add b43 to /etc/modules
- Add ', ATTR{type}=="1"' after the MAC address to the line in /etc/udev/rules.d/z25_persistent-net.rules that contains your wireless device (see the sketch after this list). This ensures that udev will assign the same interface name to the wireless device as it had before, which means you don't have to reconfigure your firewall!
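For reference, the edited line in z25_persistent-net.rules ends up looking roughly like this; the MAC address and interface name below are made up, and the rest of the line will be whatever Debian generated for your card:

SUBSYSTEM=="net", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2"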

Linux 2.6.24: First impressions - disappointed

The linux-kbuild-2.6.24 package was finally available in Debian today. (Small aside: why does it always take a few days after the release of the linux-image packages before the linux-kbuild package is available?) I need to use the proprietary nvidia drivers on my machines, so I have to wait for the kbuild package before I can compile and install the nvidia driver for the new kernel. Anyway...after a short 'sudo m-a a-i -l 2.6.24-1-amd64 nvidia', I could reboot into the shiny new kernel!

New kernels always seem faster, so I was getting excited after booting up. After logging in, though, I couldn't connect to my wireless network. I had previously been using the bcm43xx driver, and looking through the changelog, I discovered it had been deprecated in favor of the new b43 / b43legacy drivers. Ok, no problem, just load the new module...wait for network-manager to pick it up...wait for it...wait...wait...Screw it. Edit /etc/network/interfaces, uncomment the stuff for the wireless device, and then 'ifup eth2'. Kernel oops. Well that sucks. Back to 2.6.23 I go.

Incidentally, it's not just this oops in 2.6.24 that has me disappointed. Everything since 2.6.18 has been a bit risky. It used to be that upgrading a kernel within the same major.minor release was a relatively safe thing to do. I actively use two different kernels on my machine at home:

- 2.6.21, since it supports the raw1394 interface that dvgrab requires to download video from my camcorder, but wireless is very flaky
- 2.6.23, since wireless is more robust

I still occasionally get lockups, forcing a hard reboot. Maybe this is my fault: I am running the proprietary nvidia driver, and I do use suspend-to-RAM quite a bit, even though it thinks my hardware isn't supported. Maybe too much is changing too fast between kernel releases, not allowing userspace to keep up? Not sure; all I know is I'm doing much more rebooting on my Linux machine than I used to.

OpenWRT to the rescue!

Last night I thought I bricked my old Linksys WRT54G wireless router. I wanted to see if the latest firmware would resolve some problems I had with my wireless connection being dropped. After the firmware upgrade, I didn't have the dropped connection problem any more...I had a new problem - I couldn't connect to the router at all! No wireless access, no LAN access. The most I could do was ping it. I decided to check out OpenWRT to see if my hardware was supported, and how one was supposed to go about flashing new firmware onto the router. Luckily the TFTP method worked, and now I'm back up and running! Maybe it's my imagination, but it seems like the connection is faster now...
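In case anyone else ends up in the same spot, the TFTP recovery flash I used boils down to something like the following. This is only a rough sketch: 192.168.1.1 assumes the router's default address, the firmware image name is whatever you downloaded for your particular hardware revision, and you power-cycle the router just before issuing the put so it catches the transfer during boot:

tftp 192.168.1.1
tftp> binary
tftp> rexmt 1
tftp> timeout 60
tftp> put openwrt-wrt54g-squashfs.bin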

RFE: Better tab completion

Dear Lazyweb,

Somebody please extend my shell's (zsh right now) tab completion so that it searches the following and expands as appropriate:

- strings visible in any visible terminals
- host/path names for any visible terminals

I can't count the number of times I've wanted to copy a file from the current directory in one terminal to the current directory in another terminal, mostly on remote machines. I'd love to be able to type 'scp myfile.py remotehost:<tab>' and have whatever directories I have active terminals on included in the list of possibilities that I can cycle through. No idea how one would go about doing this...The shell needs to communicate with the terminal emulator, so maybe an extension of the terminal-title-setting mechanism would work?
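As a half-baked sketch of that last idea: zsh can already publish the current host and directory in the terminal title from precmd, so the information is at least visible to the terminal emulator; the missing half would be completion reading it back out. Something like this in ~/.zshrc does the publishing side (the escape sequence sets the xterm-style window title):

# publish user@host:cwd in the terminal title before every prompt
precmd() {
  print -Pn "\e]0;%n@%m:%~\a"
}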

Dear Lazy Web, what kind of disc is in my drive?

Dear Lazy Web,

How do you identify the type of CD/DVD inserted into a CD-R/RW/DVD±RW drive in Linux? dvd+rw-mediainfo does a great job with DVDs, but is there anything equivalent for CDs? Or better yet, a single tool that tells you all you need to know about a disc in the drive?

Hope to hear from you soon,
Chris

Using the clipboard with Vim

Via Planet Debian: Enrico Zini posts about using the clipboard with vim. Two cool things I learned from Enrico's post:

  1. xclip is cool, especially when combined with zsh. Say you want the output of a command printed to the terminal but also copied to the clipboard: sha1sum * | tee >(xclip)
  2. I read the x11-selection help page in vim and discovered that you can access the copy/paste clipboard with the '+' register. Wow! No more pasting in text, having the indentation screwed up, undoing, setting paste mode, pasting again, unsetting paste mode!
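For example (this needs a vim built with clipboard support; :echo has('xterm_clipboard') should print 1):

"+yy   yank the current line into the X clipboard
"+y    (in visual mode) yank the selection into the X clipboard
"+p    paste from the X clipboard, indentation intact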

Python Warts, part 2 - the infamous GIL

I'm going to come out and say that the global interpreter lock (GIL) in Python bothers me. For those who don't know, the GIL in the C implementation of Python allows only one thread to be running Python code at any one time. Extension modules executing C/C++ (non-Python) code can release the GIL so that other threads can run, but this doesn't apply in general to regular Python code.

Ian Bicking posted a while back about the GIL of Doom. Granted, his post was originally written in October 2003, so things have changed a bit since then. I believe the main thrust of his argument was that there are only a few cases where the GIL would really get in your way: basically, where you are doing some CPU-bound task that isn't easily separated into separate processes, and running on a multi-processor machine. The way to get around the GIL in Python is to split up your application into separate processes, and use some kind of inter-process communication (IPC) mechanism to transfer work/results between processes. The message seems to be: "You don't really want to use a shared address space threading model, do you? I'm sure you'd much rather just use a separate process. Everybody knows that's better." Suggestions such as calling time.sleep(0), or fiddling around with sys.setcheckinterval, are hacky, and clutter up your code for no good reason other than to work around deficiencies of the interpreter.

Yes, sharing an address space with multiple threads of execution can be tricky. But IPC is no picnic either. Starting a new process can be expensive. os.fork() isn't available on all platforms. There is, AFAIK, no portable shared memory module for Python (POSH seems to be dead?), so to send data between processes you need to set up a socket, a pipe, or temporary files, leading to extra code (more to write, more to read and understand afterwards), setup overhead (system calls aren't free), a performance impact (serializing data isn't free), and room for buggy implementations (did you clean up your temporary files? did you close your socket? did you set restrictive permissions on your socket file?). In many ways, threading is much simpler: it's simple to set up, has low overhead, no data copying costs, and is self-contained in the process, so you're not leaking out-of-process resources (socket files, bound addresses, temporary files, etc.).

Python has never been the type of language to prevent the developer from doing "unsafe" things; that's why there aren't really private members on classes. Ian Bicking again writes (in a different post),

An important rule in the Python community is: we are all consenting adults. That is, it is not the responsibility of the language designer or library author to keep people from doing bad things. It is their responsibility to prevent people doing bad things accidentally. But if you really want to do something bad, who are we to say you are wrong? It's your program. Maybe you even have a good reason.
Python's GIL is getting in my way. Yes I can do bad things with multiple threads sharing one address space, but that should be my problem, not a restriction of the language implementation. With multi-core CPUs becoming more and more common, and not only in the server domain, I think this will become more and more of an issue for Python. In the short term, some slick IPC would be nice, but in the long term a truly multithreaded Python interpreter would benefit everybody. Talk is cheap, I know...code is what counts here. Maybe a PEP or SIG could be started to flesh out what would be required to get this accomplished for Python 3000.
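To make the complaint concrete, here's the kind of toy benchmark I have in mind. The function and numbers are made up purely for illustration; on CPython the threaded version won't be meaningfully faster for CPU-bound work like this, precisely because of the GIL:

import threading
import time

def count(n):
    # pure-Python CPU-bound busy work; never releases the GIL
    while n > 0:
        n -= 1

N = 10000000

# run the work twice, sequentially
start = time.time()
count(N)
count(N)
print("sequential:  %.2fs" % (time.time() - start))

# run the same amount of work in two threads
start = time.time()
threads = [threading.Thread(target=count, args=(N,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("two threads: %.2fs" % (time.time() - start))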