tag:blogger.com,1999:blog-32331282024-02-08T04:35:45.657-08:00pyx|piks| n. a box at the Royal Mint in which specimen gold and silver coins are deposited to be tested annually at the trial of the pyx.Donovan Prestonhttp://www.blogger.com/profile/07076057843365973055noreply@blogger.comBlogger74125tag:blogger.com,1999:blog-3233128.post-89852650031937196582013-03-19T08:28:00.000-07:002013-03-21T21:10:13.499-07:00Guido doesn't want non-portable assembly in Python and it's understandable<p>(I wrote a long comment in the <a href="https://news.ycombinator.com/item?id=5395385">Hacker News discussion of Guido's slides about his plans for async io and asymmetric coroutines in Python 3.4</a>, but I thought it was good enough to deserve a blog post)</p>
<p>From a certain perspective [Guido's desire to keep non-portable stack slicing assembly out of Python] is a rational decision. Because the CPython API relies so heavily on the C stack, either some platform-specific assembly is required to slice up the C stack to implement green threads, or the entire CPython API would have to be redesigned to not keep the Python stack state on the C stack.</p>
<p>Way back in the day <a href="http://www.python.org/dev/peps/pep-0219/">the proposal for merging Stackless into mainline Python</a> involved removing Python's stack state from the C stack. However, there were complications with calling from C extensions back into Python that ultimately killed this approach.</p>
<p>After this, Stackless evolved into a much less heavily modified fork of the Python codebase, with a bit of platform-specific assembly that performed "stack slicing". Basically, when a coro starts, the contents of the stack pointer register are recorded. When a coro wishes to switch, the slice of the stack between the recorded stack pointer value and the current stack pointer value is copied off onto the heap. The stack pointer is then adjusted back down to the saved value, and either another task can run in that same stack space, or a stack slice that was previously stored on the heap can be copied back onto the stack and the stack pointer adjusted so that its task resumes where it left off.</p>
<p>Then around 2005 the Stackless stack slicing assembly was ported into a CPython extension as part of py.lib. (By Armin Rigo. A million thanks from me for this.) This was known as greenlet. Unfortunately all the original codespeak.net py.lib pages are 404 now, but <a href="http://agiletesting.blogspot.com/2005/07/py-lib-gems-greenlets-and-pyxml.html">here's a blog post from around that time that talks about it</a>.</p>
<p>Finally the <a href="https://pypi.python.org/pypi/greenlet">relevant parts of greenlet were extracted</a> from py.lib into a standalone greenlet module, and eventlet, gevent, et cetera grew up around this packaging of the Stackless stack slicing code.</p>
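<p>A minimal sketch of what that packaging gives you (assuming the standalone greenlet module is installed): each switch() slices the current C stack onto the heap and restores the target's saved slice, with no special syntax required at the point of the switch.</p>

```python
# Sketch: greenlet switching. Each switch() copies the current C stack
# slice to the heap and restores the target greenlet's saved slice.
from greenlet import greenlet

order = []

def ping():
    order.append("ping")
    gr2.switch()          # suspend ping's stack slice, run pong
    order.append("ping again")

def pong():
    order.append("pong")
    gr1.switch()          # resume ping exactly where it left off

gr1 = greenlet(ping)
gr2 = greenlet(pong)
gr1.switch()
print(order)              # the switches interleave the two tasks
```

Note that the switches happen deep inside ordinary function calls; nothing in the callers needs to be aware of them.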
<p>So you see, using the Stackless strategy in mainline Python would have either required breaking a bunch of existing C extensions and placing limitations on how C extensions could call back into Python, or custom low-level stack slicing assembly that has to be maintained for each processor architecture. CPython does not contain any assembly, only portable C, so using greenlet in core would mean that CPython itself would become less portable.</p>
<p>Generators, on the other hand, get around the issue of CPython's dependence on the C stack by unwinding both the C and Python stack on yield. The C and Python stack state is lost, but a program counter state is kept so that the next time the generator is called, execution resumes in the middle of the function instead of the beginning.</p>
<p>There are problems with this approach: the previous stack state is lost, so stack traces have less information in them; the entire call stack must be unwound back up to the main loop instead of a deeply nested call being able to switch without its callers being aware that the switch is happening; and special syntax (yield or yield from) must be used to explicitly call out a switch.</p>
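<p>A short, standard-Python sketch of both properties: execution resumes mid-function on the next send, but every intermediate frame must cooperate explicitly with yield from.</p>

```python
# Generators keep only a program counter between switches: execution
# resumes mid-function, but every frame between the switch point and
# the scheduler must cooperate with 'yield from'.

def worker():
    data = yield "need data"        # switch point: unwinds to the caller
    return data * 2

def middle():
    # this intermediate frame cannot be oblivious to the switch;
    # it has to propagate it upward with 'yield from'
    result = yield from worker()
    return result

gen = middle()
print(next(gen))                    # -> "need data"
try:
    gen.send(21)
except StopIteration as stop:
    print(stop.value)               # -> 42
```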
<p>But at least generators don't require breaking changes to the CPython API or non-portable stack slicing assembly. So maybe now you can see why Guido prefers it.</p>
<p>Myself, I decided that the advantages of transparent stack switching and interoperability outweighed the disadvantages of relying on non-portable stack slicing assembly. However Guido just sees things in a different light, and I understand his perspective.</p>
Donovan Prestonhttp://www.blogger.com/profile/07076057843365973055noreply@blogger.com2tag:blogger.com,1999:blog-3233128.post-65442432507871280392013-01-10T11:37:00.000-08:002013-01-10T14:05:39.133-08:00Your giant proprietary (or at least siloed) codebase is a huge liability<p>There has been a lot of news this week about vulnerabilities in very low-level platform code being used in production by many, many people. First there was a Ruby exploit, and now today I see that there is a new Java zero-day.</p>
<p>The truth is, these kinds of exploits are absolutely everywhere. When off-the-shelf libraries are assembled together to make a whole that is greater than the sum of the parts, strange interactions are possible that the original integrators never conceived of.</p>
<p>In the case of the Ruby exploit, from what I read it seems to have gone something like this: part of the web decoding machinery that could decode URL-encoded parameters was extended to be able to decode XML. The XML decoding machinery was then extended to be able to decode YAML.</p>
<p>YAML has a syntax for serializing arbitrary Ruby objects, and when that YAML file is deserialized a new instance of that object is created. With careful crafting of the input file, a large variety of arbitrary code execution is possible.</p>
<p>This is also the reason it is not a good idea to use pickle as a network serialization format in Python. You might think, "oh, I'll use marshal. Marshal doesn't support arbitrary class serialization." But take a look at the list of object types marshal does support:</p>
<blockquote> None, integers, long integers, floating point numbers, strings, Unicode
objects, tuples, lists, sets, dictionaries, and code objects
</blockquote>
<p>Code objects. I rest my case. Of course, you would have to execute the results returned from the marshal module in order for a code object constructed by an attacker to run on your server, but some hacker somewhere is probably going to figure out some crazy way.</p>
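<p>A concrete, standard-library demonstration of why the absence of class serialization is cold comfort: marshal round-trips executable code objects just fine.</p>

```python
import marshal
import types

# marshal happily round-trips a code object...
payload = compile("6 * 7", "<attacker>", "eval")
blob = marshal.dumps(payload)       # bytes an attacker could hand you

# ...and the receiver gets back something executable.
code = marshal.loads(blob)
print(eval(code))                   # -> 42
# Wrapping it in a function works too:
fn = types.FunctionType(code, {})
print(fn())                         # -> 42
```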
<p>Which brings me to my main point: I've observed over the years that, for some reason, business-type people and even some programmers seem to think that a large proprietary codebase that nobody else is allowed to look at is an asset. It's not; it's a liability!</p>
<p>You don't understand what's in your code. You don't understand what's in the code of the large number of libraries that you use every day. Codebases are written over weeks, months, years, by different people, in different frames of mind.</p>
<p>There are solutions to this code complexity problem. We can break large complex code bases into small parts that are very explicit and careful about validating their input. We can completely isolate these parts from each other so that they can't accidentally (or maliciously) break something.</p>
<p>Libraries could strive for simplicity and explicitness rather than kitchen-sink-itis. If a surgeon wants to do surgery, they are going to choose a light, sharp, well-balanced scalpel, not an old Swiss Army knife.</p>
<p>Code that only a few people have to look at doesn't have to be clear. Only those few people have to bear the mental burden of holding that nasty code in their head. Code that a lot of people need to look at has a higher probability of being clear. This is one advantage of open source; obviously, it's not enough.</p>
<p>My suggestion for reducing the complexity in interactions like these is to create simpler, more well-defined libraries and isolate these libraries from each other in different processes.</p>
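<p>As a minimal sketch of that isolation (standard library only; the "untrusted parser" here is a hypothetical stand-in for a large third-party library), a library can be confined to a child process so that a crash or exploit in it cannot touch the parent's memory:</p>

```python
from multiprocessing import Process, Queue

def parse_untrusted(data, results):
    # stand-in for a big, complicated third-party parser;
    # if it crashes or is exploited, only this child process dies
    results.put(data.decode("ascii", "replace").upper())

if __name__ == "__main__":
    results = Queue()
    child = Process(target=parse_untrusted, args=(b"hello", results))
    child.start()
    parsed = results.get()          # -> "HELLO"
    child.join()
    print(parsed)
```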
<p>Processes evolved in the 70s to isolate users from each other, but now it is 2013 and we could start isolating more and more libraries from each other. For languages that don't use reference counting, fork with copy-on-write may be good enough to allow us to actually use many, many UNIX processes for a single application without consuming too many resources.</p>Donovan Prestonhttp://www.blogger.com/profile/07076057843365973055noreply@blogger.com1tag:blogger.com,1999:blog-3233128.post-37939826612447022412012-10-10T15:29:00.001-07:002012-10-10T15:39:45.652-07:00Getting Started Developing for Firefox OS Screencasts<p>I've been working on Boot 2 Gecko (Firefox OS) for the last 9 months now, and it has been both a completely insane project and an awesome project. Insane amount of work, awesome implementation.</p>
<p>Writing an OS from the ground up is no easy task. Luckily, we're not doing that. We are building on top of the Linux kernel and Gecko, both open source projects that have had a lot of effort put into them.</p>
<p>It is really starting to firm up now, especially since the feature freeze a few weeks ago. There is still a lot to do, however, and we are going to be bringing in developers from other areas of the company to help fix bugs and make this thing stable.</p>
<p>Luckily, the development process just got a lot easier with two things that recently landed. One is that the b2g desktop nightly builds now include a build of gaia, so you can just download a nightly build, double-click, and go. The other is that the remote debugger gained the ability to load code over-the-wire as part of the debugger protocol, so the way gaia packages up apps and refers to them using app:// URLs is now debuggable without nasty workarounds.</p>
<p>As we are ramping up newer developers to help with the project, we need clear documentation of the development process. The Gaia/Hacking page is the canonical reference for how to do absolutely everything, but it's overwhelming. To help with this, I made a series of 5 screencasts that cover the basics of using b2g desktop nightly builds, remote debugging with b2g desktop, hacking on gaia itself in b2g desktop, flashing a phone with gaia changes, and what to do if Firefox OS asks you to choose from two homescreens or if remote debugging does not show your source for your app.</p>
<p>As an aside, I find it hilarious that there are all these incorrect rumors about the speed and the memory of the phone, when the correct specs were actually *announced* in February. I guess people would rather speculate and spread rumors than read press releases.</p>
<h2>B2G Desktop Intro (Firefox OS)</h2>
<iframe src="http://player.vimeo.com/video/50801661?title=1&byline=1&portrait=1" width="500" height="314" frameborder="0" webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe> <p><a href="http://vimeo.com/50801661">B2G Desktop Intro (Firefox OS)</a> from <a href="http://vimeo.com/donovanpreston">Donovan Preston</a> on <a href="http://vimeo.com">Vimeo</a>.</p>
<h2>Debugging Gaia (Firefox OS) with the Remote Debugger</h2>
<iframe src="http://player.vimeo.com/video/51009173?title=1&byline=1&portrait=1" width="500" height="314" frameborder="0" webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe> <p><a href="http://vimeo.com/51009173">Debugging Gaia (Firefox OS) with the Remote Debugger</a> from <a href="http://vimeo.com/donovanpreston">Donovan Preston</a> on <a href="http://vimeo.com">Vimeo</a>.</p>
<h2>Hacking on Gaia in Debug Mode</h2>
<iframe src="http://player.vimeo.com/video/51103252?title=1&byline=1&portrait=1" width="500" height="314" frameborder="0" webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe> <p><a href="http://vimeo.com/51103252">Hacking on Gaia in Debug Mode</a> from <a href="http://vimeo.com/donovanpreston">Donovan Preston</a> on <a href="http://vimeo.com">Vimeo</a>.</p>
<h2>Flashing Gaia onto a Firefox OS Phone and Remotely Debugging a Firefox OS Phone</h2>
<iframe src="http://player.vimeo.com/video/51104170?title=1&byline=1&portrait=1" width="500" height="314" frameborder="0" webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe> <p><a href="http://vimeo.com/51104170">Flashing Gaia to a Firefox OS Phone and Remotely Debugging a Firefox OS Phone</a> from <a href="http://vimeo.com/donovanpreston">Donovan Preston</a> on <a href="http://vimeo.com">Vimeo</a>.</p>
<h2>Firefox OS Tips</h2>
<iframe src="http://player.vimeo.com/video/51104746?title=1&byline=1&portrait=1" width="500" height="314" frameborder="0" webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe> <p><a href="http://vimeo.com/51104746">Firefox OS Tips</a> from <a href="http://vimeo.com/donovanpreston">Donovan Preston</a> on <a href="http://vimeo.com">Vimeo</a>.</p>Donovan Prestonhttp://www.blogger.com/profile/07076057843365973055noreply@blogger.com0tag:blogger.com,1999:blog-3233128.post-70858853757856060812012-03-05T18:55:00.001-08:002012-03-05T18:55:22.899-08:00Where are the Peer-to-Peer web apps?<p>Perhaps the reason we have not seen single-page HTML applications that connect directly to peers without an intermediate server is that browsers cannot easily listen on a local port. They can open outgoing connections all day long, but WebRTC may be the first web standard that allows the browser to listen on a port. If there are others, please let me know.</p>
<p>Of course, the WebRTC spec looks overly complicated for the incredibly simple thing I want to do. I just want the browser to be able to listen on a port like any other process on the machine can. Sure, there are security implications, but these exist for everything the browser exposes to web applications, and there's an entire class of Peer-to-Peer web apps that simply cannot easily be written using current web technologies.</p>
<p>There are many examples of a Peer-to-Peer experience being delivered to users over a client-server architecture: ChatRoulette, Omegle, and more recently products like Google Hangouts. These applications must be implemented using servers in the middle to connect the peers, making scaling them much harder than it would be if browsers could just listen.</p>
<p>There is an opportunity to explore a generic Client-Agent-Peer architecture, where Clients (Browsers) talk to an Agent server using HTTP to configure the state of the Agent, and Peers contact the Agent on the Client's behalf. When the Browser is online, the Agent can refer the Peer directly to the Client. When the Browser is offline, the Agent can handle the request itself using a cached copy of the material the Browser was sharing, or it can simply decline to fulfill the request.</p>
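<p>As a hedged sketch of the Agent role (hypothetical names throughout; Python's standard library standing in for whatever an Agent would really be built on), the request handling might look like this:</p>

```python
import http.server

# Hypothetical Agent state, configured by the Client over HTTP.
CLIENT_ONLINE = False
CLIENT_URL = "http://client.example.net/"   # where the Client can be reached
CACHED_COPY = b"cached copy of the shared material"

class AgentHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if CLIENT_ONLINE:
            # Client is online: refer the Peer directly to it.
            self.send_response(307)
            self.send_header("Location", CLIENT_URL)
            self.end_headers()
        else:
            # Client is offline: serve the cached copy ourselves.
            self.send_response(200)
            self.send_header("Content-Length", str(len(CACHED_COPY)))
            self.end_headers()
            self.wfile.write(CACHED_COPY)

    def log_message(self, *args):
        pass  # keep the sketch quiet
```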
<p><a href="http://wiki.secondlife.com/wiki/Reverse_HTTP" title="Reverse HTTP">Reverse HTTP</a> was my attempt to push out the simplest thing that could possibly work to get browsers to talk to each other. It didn't really go anywhere in terms of being implemented in actual browsers. Coincidentally, someone else had the same idea around the same time, and implemented Reverse HTTP in terms of actual <a href="http://reversehttp.net/">HTTP Requests encoded in Responses</a>, and vice versa. This makes it possible to write a pure javascript client rather than needing the browser to support the Upgrade protocol itself.<br /></p>
<p>Really, though, it would be very nice if all of the hacks and tricks and workarounds weren't necessary, and I could just listen on a port with javascript. I'll keep dreaming.</p>
Donovan Prestonhttp://www.blogger.com/profile/07076057843365973055noreply@blogger.com3tag:blogger.com,1999:blog-3233128.post-21617977605997834182011-08-11T14:30:00.000-07:002011-08-11T14:30:01.024-07:00<b>Coverage and Profile Information Gathered from -D</b><br />
<br />
The disassembly information provided by SpiderMonkey's -D switch is much richer than the plain coverage data I gathered with my trace hook. However, the disassembly is printed straight to stdout, which makes it difficult to separate from test output and harder to parse. So, I wrote a small patch which makes -D take a filename to write the disassembly to instead of stdout.<br />
<br />
<a href="http://pastebin.mozilla.org/1296772">http://pastebin.mozilla.org/1296772</a><br />
<br />
I need to get my situation with the mozilla-central repository figured out so I can create a branch and commit. In the meantime, there's the small patch.<br />
<br />
Then, I rewrote my coverage_parser.py script in dom.js to parse the -D output and was able to generate nice coverage files, including displaying the number of total bytecodes executed on each line, and nice profile files, sorted from the lines which executed the most bytecodes down to those that executed the least.<br />
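<p>The core of that parsing can be sketched like this (the regular expression assumes the "loc counts x line op" layout shown in my earlier post on -D; the real coverage_parser.py may differ):</p>

```python
import re
from collections import defaultdict

# A few lines in the "loc counts x line op" layout that -D emits.
SAMPLE = '''\
00000:1/0/0        x    1 bindgname "a"
00003:1/0/0        x    1 int8 2
00009:1/0/0        x    2 bindgname "b"
'''

def bytecodes_per_line(text):
    """Sum executed bytecodes for each source line number."""
    totals = defaultdict(int)
    for match in re.finditer(r'^\d+:(\d+)/\d+/\d+\s+x\s+(\d+)', text, re.M):
        count, line = int(match.group(1)), int(match.group(2))
        totals[line] += count
    return dict(totals)

print(bytecodes_per_line(SAMPLE))   # -> {1: 2, 2: 1}
```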
<br />
<a href="http://www.flickr.com/photos/fzzzy/6030567525/" title="Screen Shot 2011-08-10 at 6.47.44 PM by fzZzy, on Flickr"><img src="http://farm7.static.flickr.com/6189/6030567525_2d2a1e146f.jpg" width="500" height="313" alt="Screen Shot 2011-08-10 at 6.47.44 PM"></a><br />
<br />
<a href="http://www.flickr.com/photos/fzzzy/6031475964/" title="Screen Shot 2011-08-10 at 9.30.22 PM by fzZzy, on Flickr"><img src="http://farm7.static.flickr.com/6133/6031475964_e8a293b2ed.jpg" width="500" height="313" alt="Screen Shot 2011-08-10 at 9.30.22 PM"></a><br />
<br />
With the test suites we scraped together from various places, we have almost 50% coverage of dom.js right off the bat.Donovan Prestonhttp://www.blogger.com/profile/07076057843365973055noreply@blogger.com0tag:blogger.com,1999:blog-3233128.post-89522862756986312962011-08-08T14:12:00.000-07:002011-08-08T14:12:52.540-07:00<b>Dumping Bytecode with SpiderMonkey</b><br />
<br />
While I was trying to implement my code coverage tool, SpiderMonkey's -D flag was brought to my attention.<br />
<br />
You need a debug build of SpiderMonkey. You can find instructions on how to get the source and build it <a href="https://developer.mozilla.org/En/SpiderMonkey/Build_Documentation">here</a>.<br />
<br />
For the given input file foo.js:<br />
<br />
<blockquote>var a = 1 + 1;<br />
var b = 2 + 2;<br />
var c = a + b;<br />
</blockquote><br />
Running with the command-line switch -D gives:<br />
<br />
<blockquote>$ js -D foo.js<br />
--- PC COUNTS foo.js:1 ---<br />
loc counts x line op<br />
----- ---------------- ---- --<br />
main:<br />
00000:1/0/0 x 1 bindgname "a"<br />
00003:1/0/0 x 1 int8 2<br />
00005:1/0/0 x 1 setgname "a"<br />
00008:0/0/0 x 1 pop<br />
00009:1/0/0 x 2 bindgname "b"<br />
00012:1/0/0 x 2 int8 4<br />
00014:1/0/0 x 2 setgname "b"<br />
00017:0/0/0 x 2 pop<br />
00018:1/0/0 x 3 bindgname "c"<br />
00021:1/0/0 x 3 getglobal "a"<br />
00024:1/0/0 x 3 getglobal "b"<br />
00027:1/0/0 x 3 add<br />
00028:1/0/0 x 3 setgname "c"<br />
00031:0/0/0 x 3 pop<br />
00032:1/0/0 x 3 stop<br />
<br />
--- END PC COUNTS foo.js:1 ---</blockquote><br />
Useful and interesting.Donovan Prestonhttp://www.blogger.com/profile/07076057843365973055noreply@blogger.com2tag:blogger.com,1999:blog-3233128.post-43748862469914452472011-08-04T19:24:00.000-07:002011-08-04T19:27:17.549-07:00Code Coverage Reporting in JavaScript<br />
<br />
Now that I am working on dom.js, I need to learn an entirely new environment and all the tools that go with it. JavaScript is also fundamentally different from other scripting languages because the environment it usually executes in is the browser rather than the command line. However, dom.js is designed to be used in environments where a native DOM does not already exist, such as in Node.js or in SpiderMonkey. Since it's such a unique project, many of the existing tools don't really apply.<br />
<br />
I went looking for code coverage tools that we could use to determine how much of the dom.js code was being exercised by the test suites we have in place right now. Several coverage tools exist for JavaScript, as discussed on StackOverflow here: <a href="http://stackoverflow.com/questions/53249/are-there-any-good-javascript-code-coverage-tools">http://stackoverflow.com/questions/53249/are-there-any-good-javascript-code-coverage-tools</a><br />
<br />
For example, the ffhrtimer project (<a href="http://hrtimer.mozdev.org/">http://hrtimer.mozdev.org/</a>) is a nice Firefox extension that provides high resolution timers and UI to display JavaScript code coverage, but it can't easily be integrated into the SpiderMonkey command line js and only runs on Firefox 3.0.<br />
<br />
JSCoverage is interesting (<a href="http://siliconforks.com/jscoverage/">http://siliconforks.com/jscoverage/</a>) but it requires source-level translations on the javascript in order to record coverage information. It makes this easy by providing a web server that automatically translates javascript that it serves as well as a proxy that translates any javascript that passes through it, but this does not really fit into our model where we are testing from the command line.<br />
<br />
js-test-driver (<a href="http://code.google.com/p/js-test-driver/wiki/CodeCoverage">http://code.google.com/p/js-test-driver/wiki/CodeCoverage</a>) looks good but it also is designed to work in a browser environment.<br />
<br />
Finally, JSChiliCat (<a href="http://jschilicat.sourceforge.net/">http://jschilicat.sourceforge.net/</a>) is getting closer because it allows running tests without a browser being involved, but it requires Rhino to run them. dom.js needs the HEAD version of SpiderMonkey since it uses extensions to JavaScript which are not widely implemented: Proxies and WeakMaps.<br />
<br />
So, I looked at the way ffhrtimer gathers coverage data and modified SpiderMonkey to have a --coverage switch which installs a simple hook that prints out the current JavaScript filename and line number for every line of execution.<br />
<br />
<blockquote>JSTrapStatus<br />
CoverageHook(JSContext *cx, JSScript *script,<br />
jsbytecode *pc, jsval *rval, void *closure)<br />
{<br />
const char* filename = JS_GetScriptFilename(cx, script);<br />
uintN lineno = JS_PCToLineNumber(cx, script, pc);<br />
<br />
printf("CoverageHook %s %d\n", filename, lineno);<br />
<br />
return JSTRAP_CONTINUE;<br />
}<br />
</blockquote><br />
Here's how this hook is installed as a callback:<br />
<br />
<blockquote>JS_SetInterrupt(cx->runtime, &CoverageHook, NULL);<br />
</blockquote><br />
Now, the question is how best to store the data for use by an analysis tool. ffhrtimer used an in-memory data structure to keep track of which lines in which files had been visited, but for simplicity I think I am going to use the code above, writing the files and line numbers to a file 'coverage.out' for post-processing with a python script.Donovan Prestonhttp://www.blogger.com/profile/07076057843365973055noreply@blogger.com0tag:blogger.com,1999:blog-3233128.post-6611911955204722612011-08-02T12:09:00.000-07:002011-08-02T12:09:24.135-07:00I started work at Mozilla yesterday. It has been quite a whirlwind. I'm sitting next to Brendan Eich and working with David Flanagan. David's <a target="_blank" href="http://www.amazon.com/JavaScript-Definitive-Guide-Activate-Guides/dp/0596805527?ie=UTF8&tag=widgetsamazon-20&link_code=btl&camp=213689&creative=392969">JavaScript: The Definitive Guide</a><img src="http://www.assoc-amazon.com/e/ir?t=widgetsamazon-20&l=btl&camp=213689&creative=392969&o=1&a=0596805527" width="1" height="1" border="0" alt="" style="border:none !important; margin:0px !important; padding: 0px !important" /> was the reference I turned to when I seriously started with web programming in 2000, and when Brendan Eich wrote JavaScript I was working on server-side scripts hosted in LambdaMOO and had no idea what I wanted to do with my career. Life is strange. If I could go back in time and tell my 1995 self where I am now I don't think I would believe myself.<br />
<br />
I'm having a lot of fun learning more about the history of JavaScript and the projects we are working on. After being so deeply embedded in the Python world for so long, it feels refreshing to venture into completely alien terrain. Some things are familiar and some things are incredibly strange. It feels very natural overall; I think if something like WebSockets had existed in 2000 I never would have discovered Python and would have stuck with JavaScript and the LambdaMOO programming language. Python was class-oriented; JavaScript was prototype-oriented, like LambdaMOO. I needed some intermediate glue language to handle JavaScript's inability to use plain old socket objects though, and thus my love affair with Python was born.<br />
<br />
The first project I am helping with is <a href="https://github.com/andreasgal/dom.js">dom.js</a>, a project whose aim is to implement the common browser DOM APIs in pure JavaScript. This project will be useful for a server-side implementation of the DOM for use in node.js and will also be useful as a DOM implementation for <a href="https://github.com/mozilla/narcissus/">Narcissus</a> which is just straight up JavaScript written in JavaScript.<br />
<br />
Woah, man. Meta. I love it.<br />
<br />
Finally, there is <a href="http://mozillalabs.com/zaphod/">Zaphod</a>, a Firefox extension which installs Narcissus as the default JavaScript interpreter, useful for rapid prototyping of changes to JavaScript itself.Donovan Prestonhttp://www.blogger.com/profile/07076057843365973055noreply@blogger.com0tag:blogger.com,1999:blog-3233128.post-90293814608247811092010-11-16T09:08:00.000-08:002010-11-16T09:08:32.426-08:00<h1>Simplified one-wire transmission system</h1><br />
<img src="http://img696.imageshack.us/img696/4492/magnifyingtransmitteron.png"><br />
<br />
If the secondary does not have distributed capacitance which cancels the coil's self induction, then an appropriately sized capacitor should be added between the coil and ground.Donovan Prestonhttp://www.blogger.com/profile/07076057843365973055noreply@blogger.com0tag:blogger.com,1999:blog-3233128.post-45241841286882162802010-11-16T08:24:00.000-08:002010-11-16T08:35:18.898-08:00<h1>Magnifying transmitter replication</h1><br />
<b>Key tuning factors</b><br />
<br />
Coil length must match one quarter the wavelength of the impulse frequency.<br />
<br />
Coil capacitance, including a capacitor added to coil, must cancel self-induction of the coil.<br />
<br />
Impulse rise and fall slopes must be as sharp as possible, possibly in the nanosecond range.<br />
<br />
Impulse duration must be as short as possible. The duty cycle of the wave should be as close to 0 as possible.<br />
<br />
The center tap on a bifilar pancake coil will be the point at which the output voltage is highest. The terminal goes here. (? -- is this true? Take measurements)<br />
<br />
How to calculate the capacity of the terminal? Terminal construction? Terminal should possibly be glass, although could also be metal<br />
<br />
How is the size and positioning of the extra coil calculated?<br />
<br />
<b>One wire electrical transmission driving loads</b><br />
<br />
Step-down coils, wound in the opposite direction but otherwise exactly the same as the transmitter, can be attached to the one-wire transmission system to drive loads.<br />
<br />
The AV plug and a smoothing capacitor is added to power a load.<br />
<br />
As many receivers as desired may be added to drive loads.<br />
<br />
The wardenclyffe tower magnifying transmitter used the earth as the one wire.Donovan Prestonhttp://www.blogger.com/profile/07076057843365973055noreply@blogger.com0tag:blogger.com,1999:blog-3233128.post-4994343389386820442009-01-15T06:11:00.000-08:002009-10-22T03:58:45.312-07:00Spawning 0.8.8 released<p>Another minor Spawning release cleans up some log messages, the way PYTHONPATH and the location of python are determined, and adds some convenient command-line options for controlling the operation of the server. Here are the release notes:</p><br/><br/><ul><br/><li>Added --access-log-file command line option to allow writing access logs to someplace other than stdout. Useful for redirecting to /dev/null for speed</li><br/><li>Correctly extract the child's exit code and clean up the logging of child exit events.</li><br/><li>Add coverage gathering using figleaf if the --coverage command line option is given. When gathering coverage, the figleaf report can be downloaded from the /_coverage url.</li><br/><li>Add a --max-memory option to limit the total amount of memory spawning will use. If this is exceeded a SIGHUP will be sent to the controller causing the children to restart.</li><br/><li>Add a --max-age option to limit the total amount of time a spawning child is allowed to run. 
After the time limit is exceeded a SIGHUP will be sent to the controller to cause the children to restart.</li><br/><li>Instead of just passing the PYTHONPATH environment variable through to the children, construct the PYTHONPATH from the contents of sys.path.</li><br/><li>Instead of just trying to run 'spawn' with /usr/bin/env when restarting, just run sys.executable -m spawning.spawning_controller, making it more likely that the controller will run correctly when restarting.</li><br/><li>Add a --verbose option and change the default startup procedure to not log the detailed dictionary of configuration information.</li><br/></ul><br/><br/><br/><br/>Donovan Prestonhttp://www.blogger.com/profile/07076057843365973055noreply@blogger.com0tag:blogger.com,1999:blog-3233128.post-42671585384161159232009-01-14T06:57:00.000-08:002009-10-22T03:58:45.332-07:00Python binding for Mongrel's http11 parser<p>Last weekend I wrote a Python binding for Mongrel's http11 parser. I will probably integrate this into eventlet.wsgi and Spawning at some point, but it's not really that much faster than eventlet.wsgi's existing pure python http parser, so I'm not in a hurry.</p><br/><br/><p>I decided to release <a href="http://github.com/fzzzy/pyhttp11/tree/master">the code</a> on github in case anybody else is interested in playing around with it in the meantime. This is the first time I've used git to commit and push. So far my impression of git compared to darcs and hg is very good. git seems easy enough to use and is very fast. 
There's not a compelling reason to use it over hg except for the fact that pretty much everybody else in the world seems to be leaning towards git.</p><br/><br/>Donovan Prestonhttp://www.blogger.com/profile/07076057843365973055noreply@blogger.com0tag:blogger.com,1999:blog-3233128.post-8439809003896296452008-12-19T13:40:00.000-08:002009-10-22T03:58:45.343-07:00Evolution of Codependency in Antagonistic Relationships<p>I'm reading <a href="http://www.amazon.com/gp/product/0201483408?ie=UTF8&tag=pyx-20&linkCode=as2&camp=1789&creative=9325&creativeASIN=0201483408">Out of Control: The New Biology of Machines, Social Systems, & the Economic World</a><img src="http://www.assoc-amazon.com/e/ir?t=pyx-20&l=as2&o=1&a=0201483408" width="1" height="1" border="0" alt="" style="border:none !important; margin:0px !important;" />, a most excellent book about complexity. This quote caught my eye:</p><br/><br/><blockquote>In defending itself so thoroughly against the monarch, the milkweed became inseparable from the butterfly. And vice versa. Any long-term antagonistic relationship seemed to harbor this kind of codependency. (p 74)</blockquote><br/><br/><p>This made me realize something about the nature of governments and war: Governments evolved to protect resources and people from the threat of outside invasion. An organizing structure was required to create and maintain a fighting force capable of resisting invasion from neighbors. However, it's now obvious that governments are in a codependent relationship with war: If there were no more war, then there would be no need for a government's ability to organize a fighting force. Therefore it's in a government's best interest to ensure that <em>war never ceases.</em></p><br/><br/><p>However, just like any other codependent relationship, a lot of denial takes place. I doubt most politicians would come out and say that a prime function of government is to create war. 
Actions speak louder than words, though, and it's clear that in the thousands of years of human civilization there have been plenty of wars.</p><br/><br/>Donovan Prestonhttp://www.blogger.com/profile/07076057843365973055noreply@blogger.com2tag:blogger.com,1999:blog-3233128.post-44491618755367162342008-12-19T13:14:00.000-08:002009-10-22T03:58:45.352-07:00lxml + eventlet mashup<p>Since Ian was kind enough to <a href="http://blog.ianbicking.org/2008/12/10/lxml-an-underappreciated-web-scraping-library/">give me instructions</a> that gave me a working lxml (I had never been able to compile it before), I thought I'd write a quick scraper by mashing lxml together with eventlet.</p><br/><br/><p>The result is a thing of beauty:</p><br/><br/><pre><br/>from os import path<br/>import sys<br/><br/>from eventlet import coros<br/>from eventlet import httpc<br/>from eventlet import util<br/><br/>from lxml import html<br/><br/>## Make httpc work -- I'll make it work without this soon<br/>util.wrap_socket_with_coroutine_socket()<br/><br/>def get(linknum, url):<br/>    print "[%s] downloading %s" % (linknum, url)<br/>    file(path.basename(url), 'wb').write(httpc.get(url))<br/><br/>def scrape(url):<br/>    root = html.parse(url).getroot()<br/>    pool = coros.CoroutinePool(max_size=8)<br/>    linknum = 0<br/>    for link in root.cssselect('a'):<br/>        url = link.get('href', '')<br/>        if url.endswith('.mp3'):<br/>            linknum += 1<br/>            pool.execute(get, linknum, url)<br/>    pool.wait_all()<br/><br/>if __name__ == '__main__':<br/>    if len(sys.argv) == 2:<br/>        scrape(sys.argv[1])<br/>    else:<br/>        print "usage: %s url" % (sys.argv[0], )<br/></pre><br/><br/><p>This script manages to max out my bandwidth -- 800KB/sec at home and 2.5MB/sec at work -- without breaking a sweat. It oscillates between about 10% and 20% CPU on my MacBook Pro.
Nice!</p><br/><br/>Donovan Prestonhttp://www.blogger.com/profile/07076057843365973055noreply@blogger.com1tag:blogger.com,1999:blog-3233128.post-46554640722045617582008-12-06T19:09:00.000-08:002009-10-22T03:58:45.362-07:00ptth (Reverse HTTP) implementation in a browser using Long Poll COMET<p>ptth is an idea I have been planning to implement for a few years now. The basic idea is that you take normal HTTP semantics and reverse them, meaning that the client (from the TCP perspective) acts like a server (from the application perspective), and the server (from the TCP perspective) acts like a client (from the application perspective) and makes requests on the client whenever it feels like it. This is distinguished from most normal COMET semantics in that ptth retains all of http's characteristics even though the underlying transport looks radically different at the TCP level.</p><br/><br/><p>When I was at Linden Lab, I advocated using this technique in the Second Life Viewer as a refinement of the Plain Old COMET implementation currently in use (which I also helped implement). I wrote a <a href="http://wiki.secondlife.com/wiki/Reverse_HTTP">wiki page</a> describing how the http Upgrade: header can be used to initiate a ptth connection, effectively turning a socket that the client opened to the server around, allowing the server to make requests on the client as if the server had opened a connection to the client (even though it didn't). I even did an <a href="http://soundfarmer.com/paste/B4B75B7A-2E16-4CB1-B0BC-98ADAD2B9214.py">implementation in Python</a> showing how once the Upgrade: has been performed the semantics are exactly the same as normal http. This means with a little hackery it's possible (and in the Python case, almost trivial) to reuse existing http client and server libraries.
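Concretely, the turnaround can be sketched with a stdlib socketpair standing in for the TCP connection. (This is only an illustration: the "PTTH/1.0" upgrade token and exact header layout here are my guesses, not necessarily what the wiki page specifies.)

```python
import socket

# A socketpair stands in for the TCP connection the client opened.
client, server = socket.socketpair()

# 1. The TCP client asks to turn the connection around.
client.sendall(b"POST /reverse HTTP/1.1\r\n"
               b"Upgrade: PTTH/1.0\r\n"
               b"Connection: Upgrade\r\n\r\n")
assert b"Upgrade: PTTH/1.0" in server.recv(4096)

# 2. The TCP server agrees to switch protocols...
server.sendall(b"HTTP/1.1 101 Switching Protocols\r\n"
               b"Upgrade: PTTH/1.0\r\n\r\n")
assert client.recv(4096).startswith(b"HTTP/1.1 101")

# 3. ...and from here the roles are reversed: the TCP server issues
#    an ordinary HTTP request on the socket the client opened. At
#    this point each side could hand its raw socket to an existing
#    http client or server library.
server.sendall(b"GET /status HTTP/1.1\r\n\r\n")
assert client.recv(4096).startswith(b"GET /status")
client.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
reply = server.recv(4096)
print(reply.split(b"\r\n")[0].decode())  # HTTP/1.1 200 OK
```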
All you have to mess around with is the setup of the socket; once both sides have an open socket and have agreed to Upgrade:, you just grab the underlying socket and pass it to the client or server library and away you go.</p><br/><br/><p>Even though I didn't get the chance to implement and deploy this technique in the Second Life Viewer and Server before I left Linden for Mochi, I still hope this gets implemented someday, as I think it is a very elegant and efficient technique. While implementing the real ptth Upgrade: in C++ will be more challenging than doing a quick Python prototype, once the dirty business of extracting sockets and injecting them into the client and server libraries used is complete, it should be a very reliable technique since at that point everything is exactly the same as normal http.</p><br/><br/><p>However, it won't be possible to do these type of Upgrade: shenanigans when we are in the browser's Javascript environment and don't have access to low level details like socket APIs. Therefore, I also specced out what ptth would look like running over a Plain Old COMET Long Poll style transport. The <a href="http://wiki.secondlife.com/wiki/Reverse_HTTP#COMET_Fallback">wiki page</a> describes encoding the reverse request and response as JSON for ease of parsing and generating in Javascript, but other content-types could be used (application/x-http-request and application/x-http-response perhaps, or maybe the message/http mime type could simply be used or modified to be message/http+request and message/http+response?)</p><br/><br/><p>On Saturday the 6th we had a Mochi Hack Day at our office, and I was hacking on my perpetual hacking project, Pavel. 
If you don't know me personally and haven't heard me talk about Pavel, someday I'll flesh out the ideas behind it more fully in a series of web pages, but for now you can read <a href="http://ulaluma.com/pyx/archives/2005/05/multiuser_progr.html">this old blog post</a> to get a rough idea of what it is. The post uses the term "graphical multiuser networked programming environment" to describe the basic idea. From the very beginning I conceived ptth as a vehicle for driving updates of the user interface to Pavel, so I decided to get down to it and actually implement it. Since I have implemented so many COMET servers at this point that I have lost count, it turned out to be almost trivially easy, and I had something working in a few hours.</p><br/><br/><p>And now the part everyone has been waiting for: the demo. The demo takes place in firebug, where you can see the Javascript side of the Long Poll operating, and in the eventlet backdoor running inside of a terminal. The eventlet backdoor gives me a Python interactive REPL into the process which is serving the server side of the Long Poll, and allows me to manually inject ptth messages into the system which then get delivered to the browser, which then responds to the request. The first thing you see me doing is building a simple ptth request by hand, encoded in JSON. I then inject this message into the ptth system, copying and pasting the uuid of the user who is connected via the Firefox browser in the background. You can see the debug printing in Firebug showing that the request was delivered to the browser, and the result in the backdoor's REPL is the response that was generated in Javascript and sent back to the server.</p><br/><br/><embed src="http://soundfarmer.com/content/movies/ptth-to-browser.mov" height="495" width="640" autoplay="false"></embed><br/><br/><p>This means that I now need to implement some sort of web framework in Javascript. 
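For reference, the kind of hand-built JSON message the demo injects looks something like this. (The field names are my guesses at the encoding, not necessarily the format the wiki page defines.)

```python
import json

# The long-poll response delivers a reversed request to the browser:
ptth_request = json.dumps({
    "method": "GET",
    "uri": "/status",
    "headers": {"Accept": "application/json"},
    "body": "",
})

# The Javascript side parses it, handles it, and posts a reversed
# response back up to the server:
incoming = json.loads(ptth_request)
ptth_response = json.dumps({
    "status": 200,
    "headers": {"Content-Type": "application/json"},
    "body": json.dumps({"handled": incoming["uri"]}),
})

print(json.loads(ptth_response)["status"])  # 200
```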
I know Dojo has already done this and I'm sure other people will start to experiment with this idea as well, but for my purposes I'll probably come up with something super simple. The idea that immediately came to my mind is to have URIs represent XPath into the html document, and PUT replacing the selected node with the content fragment from the request body of the PUT.</p><br/><br/>Donovan Prestonhttp://www.blogger.com/profile/07076057843365973055noreply@blogger.com2tag:blogger.com,1999:blog-3233128.post-55988081524420838182008-07-29T09:33:00.000-07:002009-10-22T03:58:45.373-07:00Eventlet 0.7 and Spawning 0.7 Released<h1>Eventlet 0.7</h1><br/><br/><p>Eventlet 0.7 fixes some very long-standing bugs. First of all, there was a CPU leak in the select hub which would cause an http keep-alive connection to consume 100% CPU while it was open. The problem was that every file descriptor was being passed in to select, even if the callback for the readiness mode was None. This bug has been in since the very beginning of eventlet, and it's great to have it fixed!</p><br/><br/><p>Second, another old bug. It's now possible to use Eventlet's SSL client to talk to Eventlet's SSL server. There was a subtle bug in the way SSL sockets would raise an error in some conditions instead of returning '' to indicate the connection was closed.</p><br/><br/><p>Finally, some memory leaks in the libevent and libev hubs (fairly new code) were fixed, so if you're using Eventlet with libevent or libev, try it out and see how it performs for you.</p><br/><br/><p>Also, this release pulls in a bunch of API additions from the Linden SVN repository. Ryan Williams is now maintaining an HG repository which is synched with the SVN repository, so integrating patches between branches will now be much easier.</p><br/><br/><p><b>Update July 30, 2008:</b> This release of eventlet also supports stackless-pypy again. I had to check for the absence of the socket.ssl object, and re-enable the poll hub.
To try this out, check out and translate pypy-c following the instructions <a href="http://codespeak.net/pypy/dist/pypy/doc/getting-started.html">on the pypy site</a>, and then run one of the eventlet examples (for example, "./pypy-c /Users/donovan/src/eventlet/examples/wsgi.py")</p><br/><br/><p>Download Eventlet 0.7 from PyPI: <a href="http://pypi.python.org/pypi/eventlet/0.7">http://pypi.python.org/pypi/eventlet/0.7</a></p><br/><br/><h1>Spawning 0.7</h1><br/><br/><p>Spawning has improved a lot since I last wrote about it. It now has a command line script, "spawn", which makes it easy to quickly serve any wsgi application. The concurrency strategy is also now extremely flexible and can be configured for a plethora of use cases.</p><br/><br/><p>The default is to use one non-blocking i/o process with a threadpool, which makes it easy to use with any existing wsgi applications out there that assume shared memory and the ability to block.</p><br/><br/><p>However, it's possible to independently configure the number of i/o processes, the number of threads, and even configure it to be single-process, single-thread, with fully non-blocking i/o (thanks to eventlet's monkey patching abilities).</p><br/><br/><p><b>Update July 30, 2008:</b> This release of spawning also has an experimental Django factory. To run a Django app under Spawning, run "spawn --factory=spawning.django_factory.config_factory mysite.settings".</p><br/><br/><p>Take a look at the Spawning PyPI entry for more information: <a href="http://pypi.python.org/pypi/Spawning/0.7">http://pypi.python.org/pypi/Spawning/0.7</a></p><br/><br/>Donovan Prestonhttp://www.blogger.com/profile/07076057843365973055noreply@blogger.com3tag:blogger.com,1999:blog-3233128.post-483423880855345852008-06-16T12:10:00.000-07:002009-10-22T03:58:45.382-07:00Spawning 0.1 Released<p>Spawning is an experimental mashup between Paste and eventlet. It provides a server_factory for Paste Deploy that uses eventlet.wsgi.
It also has some other nice features, such as the ability to run multiple processes to take advantage of multicore processors and multiprocessor machines, and graceful code reloading when modules change or the svn revision of a directory changes. Graceful reloading means new processes are immediately started which start serving new incoming requests, but old processes hang around processing the old requests until those requests are complete.</p><br/><br/><p>This is very early still. The code is currently hard-coded to run one process, but once I figure out how to use Paste Deploy's configuration files a bit better I will make it configurable. I mostly wanted to get it out quickly because Ian Bicking asked for it in the comments of my <a href="http://ulaluma.com/pyx/archives/2008/06/eventlet_05_rel.html#comments">last blog post</a>, and to get feedback. I'd like more of this code to be shared between Spawning and mulib's 'mud' server. I also need a better name than Spawning.</p><br/><br/><p>You can download a tarball <a href="http://soundfarmer.com/eventlet/Spawning-0.1.tar.gz">here</a> or you can clone the Mercurial repository <a href="http://donovanpreston.com:8888/spawning">here.</a></p><br/><br/><br/>Donovan Prestonhttp://www.blogger.com/profile/07076057843365973055noreply@blogger.com1tag:blogger.com,1999:blog-3233128.post-69289095236171503232008-06-12T04:49:00.000-07:002009-10-22T03:58:45.391-07:00Eventlet 0.5 Released<p>The last release of eventlet was 0.2, which we did when we re-open-sourced the fork of eventlet I worked on while I was at Linden Lab. 0.2 was released quite a while ago, and eventlet has seen significant improvement in the meantime.</p><br/><br/><p>The main change in this release is the ability to use libevent as the multiplexing api instead of raw select or poll. 
If libevent and the Python wrapping are not installed, eventlet will still fall back, first checking for the presence of poll and falling back to select if it is not available.</p><br/><br/><p>Another major change in this release is a much improved eventlet.wsgi server. The wsgi server now supports Transfer-Coding: chunked as well as Expect: 100 Continue, and is quite fast. I tested it against an eventlet-based wsgi server I wrote which uses wsgiref (from the Python 2.5 standard library) and my informal tests showed eventlet.wsgi being several hundred requests a second faster at serving a "Hello, World!" wsgi application.</p><br/><br/><p>This release also features significant refactoring, cleaner code, support for cooperative operations on pipes (and unix domain sockets) as well as sockets, more tests, and docstrings for pretty much everything. The documentation, which was non-existent before, is now pretty comprehensive.</p><br/><br/><p>To install, just "easy_install eventlet" and start hacking!</p><br/><br/><p><br/><ul><br/><li>PyPI page: <a href="http://pypi.python.org/pypi/eventlet/0.5">http://pypi.python.org/pypi/eventlet/0.5</a></li><br/><li>Overview: <a href="http://wiki.secondlife.com/wiki/Eventlet">http://wiki.secondlife.com/wiki/Eventlet</a></li><br/><li>Documentation: <a href="http://wiki.secondlife.com/wiki/Eventlet/Documentation">http://wiki.secondlife.com/wiki/Eventlet/Documentation</a></li><br/><li>Mercurial Repository: <a href="http://donovanpreston.com:8888/eventlet">http://donovanpreston.com:8888/eventlet</a></li><br/></ul><br/></p><br/><br/>Donovan Prestonhttp://www.blogger.com/profile/07076057843365973055noreply@blogger.com12tag:blogger.com,1999:blog-3233128.post-10927596707475905092008-06-02T03:42:00.000-07:002009-10-22T03:58:45.401-07:00REST + Actors<p>I had a really good idea over the weekend for using eventlet and mulib to combine the concepts of REST and Actors.
Eventlet has had an Actor class for a while now, but I haven't really used it for anything. After otakup0pe twittered a link to the Reia language (everyone knows how much of a language geek I am) I started thinking about Actors again and how I could have applied them to various work problems I solved in the last few years. The last time I really tried to do anything serious with Actors was when I wrote the latest version of Pavel on top of the just-written (at the time) eventlet. I also tried to mix a prototype object system in there and the actor coroutines were implicit in the semantics of usage (an Actor which called a method on another Actor would be implicitly causing a switch into the other Actor's coroutine), which in retrospect was perhaps a bit too ambitious.</p><br/><br/><p>Ryan Williams wrote the current eventlet Actor (eventlet.coros.Actor) and it's much simpler and more straightforward: You override the received method to handle messages, and other actors call the cast method to send messages. This is different from my previous implementation (and also from what my ideal would be) in that you get called back for every message, meaning the main coroutine is generic and there's no need to keep track of where the Actor's coroutine is to serialize an actor. This means it would be possible to request a representation of an Actor at any time between messages. The state would include all the Python instance variables along with all the unhandled messages currently in the Actor's mailbox.</p><br/><br/><p>So, with that realization, it suddenly becomes trivial to write a mulib handler for the Actor class. GET and PUT with the appropriate content types (application/json for example) would get or set the current state of the Actor. DELETE would delete it. POST enqueues a message in the actor's mailbox (it just calls cast with the body of the request). Simple and straightforward.
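The verb mapping can be sketched with a minimal stand-in actor. (This is not mulib or eventlet.coros.Actor -- just an illustration of the received/cast shape and the GET/PUT/POST dispatch; the real Actor runs each actor in its own coroutine.)

```python
import json

class Actor:
    def __init__(self):
        self.state = {}     # the Python instance variables
        self.mailbox = []   # unhandled messages

    def cast(self, message):
        """Other actors call this to send a message."""
        self.mailbox.append(message)

    def received(self, message):
        """Subclasses override this to handle each message."""
        raise NotImplementedError

class Counter(Actor):
    def received(self, message):
        self.state["count"] = self.state.get("count", 0) + message

def handle(actor, verb, body=None):
    # GET/PUT move the actor's whole representation, mailbox
    # included; POST just casts the request body as a message.
    if verb == "GET":
        return json.dumps({"state": actor.state, "mailbox": actor.mailbox})
    if verb == "PUT":
        snapshot = json.loads(body)
        actor.state, actor.mailbox = snapshot["state"], snapshot["mailbox"]
    elif verb == "POST":
        actor.cast(json.loads(body))

c = Counter()
handle(c, "POST", "3")  # enqueue a message
# Between messages the actor is serializable, mailbox and all:
print(handle(c, "GET"))  # {"state": {}, "mailbox": [3]}
while c.mailbox:         # the actor's main loop delivers messages
    c.received(c.mailbox.pop(0))
assert c.state["count"] == 3
```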
I'm totally going to do this soon -- it probably would have been faster to just do the implementation rather than blog about it :-)</p><br/><br/><p>Oh, one more thing -- to enhance the experience of actually using these semantics, the cast method should become a generic method that dispatches based on pattern matching (using mulib.shaped). I haven't figured out what an efficient implementation of this would look like yet, but I'm going to try a brute-force implementation just for fun.</p><br/><br/>Donovan Prestonhttp://www.blogger.com/profile/07076057843365973055noreply@blogger.com0tag:blogger.com,1999:blog-3233128.post-84043674753169678812008-05-14T05:55:00.000-07:002009-10-22T03:58:45.411-07:00Template on PUT<p>I just had a cool idea. Usually, people run HTML templating engines on GET. They fetch some data, load an HTML template, and then mash the two together. My idea is to instead run the templating engine on PUT. The body of the PUT would have the data to be templated. The URL that was PUT to would determine which template to use. The response from the PUT would contain the fully templated output, equivalent to what the client would get by doing a GET to that url at any point afterwards.</p><br/><br/><br/>Donovan Prestonhttp://www.blogger.com/profile/07076057843365973055noreply@blogger.com2tag:blogger.com,1999:blog-3233128.post-18066451636544176882007-09-29T17:08:00.000-07:002009-10-22T03:58:45.421-07:00REST to LSL via Python and Comet<p>For those that may not know, I got a job at Linden Lab, the creators of Second Life. I really enjoy what I do, and I find this reduces the urge to spend my spare time on recreational programming.
(It has also seemed to reduce my blog output to almost nil.)</p><br/><br/><p>I've had a project on the back burner for a while that involves pushing data into LSL (the Linden Scripting Language runtime that Second Life uses) over <a href="http://en.wikipedia.org/wiki/Comet_(programming)">Comet</a>.</p><br/><br/><p>And it works amazingly well! Of course, I just had to toss <a href="http://en.wikipedia.org/wiki/Representational_State_Transfer">REST</a> in there as well.</p><br/><br/><p>I'll release this code as open source next week. Meanwhile, here's a movie to show you what the hell I am talking about. I also like how this movie shows off the surreal aspect of collaborative programming!</p><br/><br/><embed src="http://soundfarmer.com/content/movies/rest-lsl-comet-bridge.mov" height="600" width="798" autoplay="false"></embed><br/><br/><br/><br/>Donovan Prestonhttp://www.blogger.com/profile/07076057843365973055noreply@blogger.com4tag:blogger.com,1999:blog-3233128.post-82433182016233997642007-06-08T10:17:00.000-07:002009-10-22T03:58:45.436-07:00My blog is growing weeds...It's been over a year now since I posted to this blog. What happened? I got a job at Linden Lab just over a year ago. Somehow, my blogging just stopped during that time. I have been busy though, and have been using twitter recently. 
I'll probably start posting here again with more frequency, as I have lots of things to talk about, but until then check out my twitter:<br/><br/><a href="http://twitter.com/donovanpreston">http://twitter.com/donovanpreston</a><br/><br/>Donovan Prestonhttp://www.blogger.com/profile/07076057843365973055noreply@blogger.com0tag:blogger.com,1999:blog-3233128.post-21756694865464817672006-05-01T15:18:00.000-07:002009-10-22T03:58:45.445-07:00Awesome<a href="http://item.slide.com/i/uid=xMSTXOmcSLEA6VtJkKLBU8gpjvGBg-a6VjoCtm9EqOxePF8Rs-o29Um-fpzIp5TVajSMFiFH9vk"><img src="http://item.slide.com/i/uid=xMSTXOmcSLEA6VtJkKLBU8gpjvGBg-a6VjoCtm9EqOxePF8Rs-o29Um-fpzIp5TVajSMFiFH9vk" width="600" /></a><br/><br/><p>Picked this up at Kid Robot in the Haight this weekend. It is so freaking awesome. I want more, but <a href="http://dot-s.net/">can't read japanese :-)</a></p><br/><br/><a href="http://item.slide.com/i/uid=dYs0FyrwTPK9GYdMJTY-a-tzeHHtHE8bTHqb3V-8AdxjoJGx_E0GmcyhYWEYtJMTjdqN9xOp3tA"><img src="http://item.slide.com/i/uid=dYs0FyrwTPK9GYdMJTY-a-tzeHHtHE8bTHqb3V-8AdxjoJGx_E0GmcyhYWEYtJMTjdqN9xOp3tA" width="600" /></a><br/><br/>Donovan Prestonhttp://www.blogger.com/profile/07076057843365973055noreply@blogger.com1tag:blogger.com,1999:blog-3233128.post-64821233422203437192006-04-25T09:59:00.000-07:002009-10-22T03:58:45.455-07:00Running Ubuntu on Mac OS X<p>One of the first things I did when I got my MacBook is install Parallels, CPU virtualization software that lets me run Windows XP in a window on Mac OS X, so I can easily test our site with Internet Explorer. It's very fast, and very, very friendly.</p><br/><br/><p>Recently I was tasked with discovering whether it is possible to do non-blocking file reads and writes to a filesystem that is mounted over NFS. I tried on OS X, and I was unable to get a read to return EWOULDBLOCK. So, I decided to install Ubuntu on Parallels. 
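The non-blocking read check can be reconstructed roughly like this (a reconstruction for Unix-like systems, not the exact code I used):

```python
import errno
import fcntl
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.lseek(fd, 0, os.SEEK_SET)

# Put the descriptor into non-blocking mode...
flags = fcntl.fcntl(fd, fcntl.F_GETFL)
fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

# ...and see whether a read can fail with EWOULDBLOCK. On a local
# filesystem it never does: regular files always count as readable,
# which is what made NFS the interesting case.
try:
    data = os.read(fd, 5)
    result = "read returned %r" % data
except OSError as e:
    result = "EWOULDBLOCK" if e.errno == errno.EWOULDBLOCK else str(e)

os.close(fd)
os.unlink(path)
print(result)  # read returned b'hello'
```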
I downloaded the iso, burned it to a CD, created a new virtual machine, and installed it.</p><br/><br/><p>Everything worked flawlessly. Ubuntu has always been incredibly high quality, and it has only gotten nicer in the year since I used it last. It's polished, beautiful, and just works. It is definitely something that I could install on my machine for my Mom with a web browser and mail reader.</p><br/><br/><p>Here's a screenshot:</p><br/><br/><a href="http://www.soundfarmer.com/pictures/screenshots/ubuntu-parallels.png"><br/><img width="600" height="800" src="http://www.soundfarmer.com/pictures/screenshots/ubuntu-parallels.png" /><br/></a><br/><br/>Donovan Prestonhttp://www.blogger.com/profile/07076057843365973055noreply@blogger.com8tag:blogger.com,1999:blog-3233128.post-21150800785234975012006-04-03T12:10:00.000-07:002009-10-22T03:58:45.466-07:00Writing your Python REPL history to a file<p>Something I have often wished for is the ability to save the history of a Python interactive session to a file. I often screw around in the interpreter to figure out how I am going to implement something, and it is tedious to go through and copy/paste all the lines out of the terminal into an editor and clean it up. Luckily, I discovered there is an easier way in the readline module:</p><br/><br/><pre><br/>import readline<br/>readline.write_history_file('my_history.py')<br/></pre><br/><br/><p>I'm sure this is going to come in handy many times.</p><br/><br/>Donovan Prestonhttp://www.blogger.com/profile/07076057843365973055noreply@blogger.com5