tag:blogger.com,1999:blog-56193091543090757352024-03-22T10:49:16.545+11:00Japanese SoapboxRamblings of an Australian in Japan (and now back in Australia)Gaijinhttp://www.blogger.com/profile/13150968867429127703noreply@blogger.comBlogger46125tag:blogger.com,1999:blog-5619309154309075735.post-70299831731675405892017-07-11T08:05:00.001+10:002017-07-11T22:15:23.641+10:00Ethereum MiningAll the kids are doing it. I've been doing it too for a while with 3 cards. And it's a total waste of money. <b>Don't do it!</b> Don't bother! Stick your money in something else, BUY Ethereum if you must!<br />
<br />
The problem is that we're all fighting over the same piece of pie. The more miners, the smaller the share. And before you shout "You just want it all for yourself!" and pull out the pitchforks, let me explain why it's too late, and why I'll shortly be stopping.<br />
<br />
Rewind to late May 2017, when the price shot up to $200. I assume a large contributing force behind this was investors in Japan and elsewhere in Asia realizing that Bitcoin was technically less capable than its newer cousin, Ethereum. Several Asian exchanges started supporting it, Japan introduced a law legitimizing crypto-currencies, and the rest was greed feeding greed.<br />
<br />
Of course, I joined the bandwagon. I got in around here and shortly after the price shot up to $400. At the peak, my little 3 card setup was making me almost $20 USD a day. It's around this point that I imagine everyone doing this pats themselves on the back, tells themselves how clever they are and does an entirely unrealistic projection of how much money they are going to make.<br />
<br />
Since then, the value has dropped back to <a href="http://www.coindesk.com/ethereum-price/">~$220 USD</a> but the miners haven't stopped. They've been piling on like crazy. News of GPU shortages has only piqued interest and fueled the fire further. What these punters are forgetting to factor in is not just the price correction, but the difficulty increase. I thought I was clever. I thought I had factored it in, but my projections were off. It turns out the world is far sillier and greedier than I expected.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://etherchain.org/charts/difficulty"><img border="0" data-original-height="390" data-original-width="1121" height="220" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhA5sNhiXxQ2rQgO_7BQsoSp9yBbm1J3Fu0S9Nt9aEUVPBLoSKfrHk46mJzlZUMgf-T35pFa7SS2bdUNWm7oykzlFtjgRz6WHN57AGwe-JBVBwYFY1sLJYA8GG2sSbo7-d1BPQFZGSs2sAq/s640/difficulty_20170711.png" width="640" /></a></div>
Ethereum, like all proof-of-work crypto-currencies, targets a certain "block rate". This is effectively the rate at which blocks of the transaction ledger get signed off on. To avoid any old person declaring their ledger is <i style="font-weight: bold;">the</i> ledger, miners spend a lot of time working on solving a complex computing problem. The one who solves it first gets their name attached to the ledger and 5 ETH for their trouble. The network is designed so that this happens with a median time of 15 seconds.<br />
<br />
To simplify that even further: every hour, roughly 1200 ETH at $225 each (~$270k USD) is up for grabs for miners.<br />
<br />
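That arithmetic is easy to sanity-check — a back-of-the-envelope sketch using the block time, reward and price quoted above:

```python
# 2017-era Ethereum parameters, as quoted above.
BLOCK_TIME_S = 15       # targeted median block time
BLOCK_REWARD_ETH = 5    # reward paid to the winning miner per block
ETH_PRICE_USD = 225

blocks_per_hour = 3600 // BLOCK_TIME_S               # 240 blocks/hour
eth_per_hour = blocks_per_hour * BLOCK_REWARD_ETH    # 1200 ETH/hour
usd_per_hour = eth_per_hour * ETH_PRICE_USD          # $270,000/hour
print("ETH/hour: %d, USD/hour: %d" % (eth_per_hour, usd_per_hour))
```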
It's easy to see why a gold rush was inevitable. But....<br />
<br />
See that little red box on the difficulty evolution? That's a bump of over 100 trillion in difficulty. With a ~15 second block time, that's roughly 6.7 TH/s of extra hashing power; at 33 MH/s per card (a single GTX 1070), it's the equivalent of around 200,000 cards jumping on the mining bandwagon... overnight! Yes, there is some statistical error in there, but in any case it's a big increase!<br />
<br />
Current returns are only a few dollars a day and electricity needs to be factored into that. Paying back your $500 graphics cards is going to take you an eternity if current trends continue.<br />
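To see just how long "an eternity" is, here's a rough payback sketch. The $500 card cost comes from the paragraph above; the daily revenue, wattage and electricity tariff in the example are illustrative guesses — plug in your own:

```python
def payback_days(card_cost_usd, revenue_usd_per_day, power_w, usd_per_kwh):
    """Days until a mining card pays for itself, net of electricity.

    Returns infinity when electricity eats the whole revenue.
    """
    electricity_usd_per_day = power_w / 1000.0 * 24 * usd_per_kwh
    net_usd_per_day = revenue_usd_per_day - electricity_usd_per_day
    if net_usd_per_day <= 0:
        return float("inf")  # the rig never pays back
    return card_cost_usd / net_usd_per_day

# A $500 card earning $3/day, drawing 150 W at $0.25/kWh: ~238 days,
# and that's assuming price and difficulty stand still (they won't).
print(payback_days(500, 3.0, 150, 0.25))
```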
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhYbTiyx0jxdxLen9IT2LsLklz7oPWejz2O_KsCpdskEykEjfSnUTiV3JNQqDRR8m3CODv8jBKQ-LkBbgnCsTiPtx9LE3Q_IWivpmACBAXLCfUEJArPErubj62ufEuIfIItUQ7jIpx087iY/s1600/coindesk_20170711.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="347" data-original-width="884" height="157" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhYbTiyx0jxdxLen9IT2LsLklz7oPWejz2O_KsCpdskEykEjfSnUTiV3JNQqDRR8m3CODv8jBKQ-LkBbgnCsTiPtx9LE3Q_IWivpmACBAXLCfUEJArPErubj62ufEuIfIItUQ7jIpx087iY/s400/coindesk_20170711.png" width="400" /></a></div>
<div style="text-align: center;">
<br /></div>
What happened, I suspect, is that some early big investors bought ETH, realized the markup they were paying, and decided to mine instead. Meanwhile, ICOs rode the hype, sucking money out of the system by driving up demand and then cashing out. There have also been exchange hacks and network capacity issues that have scared people off.<br />
<br />
I have seen photos of shops that churn out hundreds of rigs with thousands of cards at inflated prices. These are making the news, and gullible first-timers are jumping in without understanding the ecosystem. The clever money at the moment, I assume, is in selling shovels to the gold miners...<br />
<br />
I imagine there are a lot of people out there who have invested in this with cards on back-order, so there is a delay in adding capacity to the network, but already we're dangerously close to returns dipping below the cost of electricity.<br />
<br />
For me, I was going to burn the electricity on heating anyway so I'll continue through the winter and then sell it all off. It was an interesting flash in the pan, but still just a flash.Gaijinhttp://www.blogger.com/profile/13150968867429127703noreply@blogger.com0tag:blogger.com,1999:blog-5619309154309075735.post-43314787490812654062016-04-27T13:05:00.000+10:002016-04-27T13:05:18.216+10:00DNS-over-HTTPSI noticed by chance in early April that Google Public DNS (the 8.8.8.8/8.8.4.4 people) now offer <a href="https://developers.google.com/speed/public-dns/docs/dns-over-https">DNS-over-HTTPS</a>. I thought this would be a nice little addition for privacy but it seems so new that nothing out there supports it! A few weekends later, I now have a little proxy daemon and for the past week I've been running it on my OpenWRT router without issue! It's not perfect but I've uploaded the code <a href="https://github.com/aarond10/https_dns_proxy">here</a> if anyone is interested.<br />
<br />
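For the curious, talking to Google's service directly is simple. This is a minimal sketch against their JSON-over-HTTPS API (endpoint and response shape as per Google's documentation, not code from my proxy daemon):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

ENDPOINT = "https://dns.google.com/resolve"  # Google's JSON DNS-over-HTTPS API


def build_query(name, qtype="A"):
    """Build the lookup URL for a given record name and type."""
    return ENDPOINT + "?" + urlencode({"name": name, "type": qtype})


def extract_addresses(response_body):
    """Pull the answer records' data fields out of a JSON response body."""
    return [a["data"] for a in json.loads(response_body).get("Answer", [])]


def resolve(name, qtype="A"):
    """One-shot lookup over HTTPS (needs network access)."""
    with urlopen(build_query(name, qtype)) as reply:
        return extract_addresses(reply.read().decode())
```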
Cleanups, some security auditing and test-coverage work are still needed, but I feel it's working well enough to release to others at this stage. Hope it's helpful to someone else!Gaijinhttp://www.blogger.com/profile/13150968867429127703noreply@blogger.com0tag:blogger.com,1999:blog-5619309154309075735.post-66641102269264192552015-12-04T11:50:00.000+11:002015-12-04T11:50:08.114+11:00Old Mac Mini runs old OpenSSH with broken cipher setI couldn't find much about this on the interwebs but I happen to know an OpenSSH developer so I went straight to the experts here.<br />
<br />
Basically, if you run OpenSSH on a Mac Mini, you shouldn't trust Apple to give you updates: your implementation may well be full of holes. The latest OpenSSH version is 7.1, yet my 2009 Mac Mini with all updates applied was reporting<br />
<blockquote class="tr_bq">
$ ssh -V<br />
OpenSSH_5.5p1, OpenSSL 0.9.8n 24 Mar 2010</blockquote>
<div>
Wow... That's not good.<br />
<br />
If you MUST run with this version and you want to be security conscious (and NSA paranoid) so you've restricted your allowed cipher list on your <b>client</b> machines, note that aes128-gcm is advertised by this broken Apple build but not actually supported by the binary. This will look like an immediate disconnection after connecting. You have this problem if your system.log contains:<br />
<br />
<blockquote class="tr_bq">
$ tail -n 100 /var/log/system.log| grep fatal<br />
Dec 4 11:16:09 macmini.lan sshd[27028]: fatal: matching cipher is not supported: aes128-gcm@openssh.com [preauth]</blockquote>
<br />
The quick fix (you should probably upgrade OpenSSH anyway, somehow..) is to add a cipher line to /etc/sshd_config as follows:<br />
<blockquote class="tr_bq">
Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour</blockquote>
<br />
Hope that saves someone else some confusion. :D</div>
Gaijinhttp://www.blogger.com/profile/13150968867429127703noreply@blogger.com0tag:blogger.com,1999:blog-5619309154309075735.post-83186454437447928792015-02-18T20:26:00.001+11:002015-02-18T20:26:05.131+11:00Playing with the ESP8266<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjEwn8yNs_YozJsKRKctlhTSrTO0TEiqReKPZ3PZpxRVd_t5HTIZXrfmCTcfD5bwNgbzAdLneWTzcgVSImoctk_YxQRVvb14u1kqi9-TtkfFjTnuVZfs0EjprzIeR9AZD_qDsX0N70SXE2B/s1600/IMG_20150218_202014.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjEwn8yNs_YozJsKRKctlhTSrTO0TEiqReKPZ3PZpxRVd_t5HTIZXrfmCTcfD5bwNgbzAdLneWTzcgVSImoctk_YxQRVvb14u1kqi9-TtkfFjTnuVZfs0EjprzIeR9AZD_qDsX0N70SXE2B/s1600/IMG_20150218_202014.jpg" height="320" width="240" /></a>We live in amazing times. For $2.50 each I got some <a href="http://www.esp8266.com/wiki/doku.php">ESP8266</a> on AliExpress and flashed nodemcu on them with minimal fuss. I've now got kilobytes of flash spare on a WiFi device with 10 digital IO, 1 analog input and a standby current draw of 78 micro amps!<br />
<br />
Thanks to the author of <a href="http://www.whatimade.today/flashing-the-nodemcu-firmware-on-the-esp8266-linux-guide/">this</a> for the instructions. Basically: put the FTDI adapter in 3.3V mode, breadboard the magic pinouts described <a href="https://github.com/themadinventor/esptool">on the esptool</a> GitHub page, and flash the latest firmware binary blob. Reboot, and I get a Lua prompt via minicom. Worked flawlessly.<br />
<br />
Next up, DS18B20 temperature sensor, an I2C humidity sensor and some soldering. :)<br />
<br />
<br />Gaijinhttp://www.blogger.com/profile/13150968867429127703noreply@blogger.com0tag:blogger.com,1999:blog-5619309154309075735.post-54474033616352978592015-01-10T01:30:00.001+11:002015-01-22T14:57:10.534+11:00Reverse-engineering Efergy's internet-connected smart power meter<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhhqk_2eZeqixoytt3riFXgHNuQ6PuhPP4Egsrs3R0V_q8R-42aV-g4riCCbbOEf_Z9OWf59NiRFzKc15NTpehpz6syDxipdgwr-7hM0LEUKXoRBu4JZ41wdHHFnSz1uFO8DJONoy9u9FDQ/s1600/engagehubdin_uk_kit.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhhqk_2eZeqixoytt3riFXgHNuQ6PuhPP4Egsrs3R0V_q8R-42aV-g4riCCbbOEf_Z9OWf59NiRFzKc15NTpehpz6syDxipdgwr-7hM0LEUKXoRBu4JZ41wdHHFnSz1uFO8DJONoy9u9FDQ/s1600/engagehubdin_uk_kit.jpg" height="282" width="320" /></a></div>
<br />
I just got myself one of these <a href="http://www.jaycar.com.au/productView.asp?ID=MS6204">Efergy Home Hub Online</a> things. I'm super impressed with the physical build quality. Sadly though, nothing's perfect. With this unit, two fundamental things bothered me, both of which are addressable to some degree.<br />
<br />
<h3>
1. Problematic DHCP?</h3>
<br />
I plugged it in and the network port came alive. Data LEDs blinked but the solid red light indicating the device was booting stayed on forever. I changed the Ethernet cable, changed ports, did lots of reboots, turned my 1Gbit ports down to 100Mbit... Nothing. It turned out that whatever DHCP client they've implemented doesn't work with a Fritz Box 7390. I've never seen this issue before. I returned the first unit thinking it was bad. When the second unit did the same thing, I tried shoving the Ethernet cable into the Ethernet port of my desktop and fired up udhcp. To my surprise, that did the trick. Now I need to figure out some way to get this on my network without forwarding traffic through my desktop, which isn't always on... Urgh!<br />
<br />
<h3>
2. The all-your-data-are-belong-to-us cloudy thing.</h3>
<br />
Everything these days seems to want to provide an app. That alone isn't a problem, but in order to serve data to the app, your data generally lives in the cloud and this is where I reach for my tin-foil hat and begin the fun little process of reverse engineering the protocol this thing uses. :D<br />
<br />
I don't think there is anything sinister going on here. I just want control of my data and would like my device to continue to work if (or rather when) the manufacturer decides to stop hosting it. The web interface this thing provides is pretty slick and there is both an iPhone and Android app but when it comes to my personal data, I'd rather hold it myself, thanks! So on that note...<br />
<br />
<h3>
</h3>
<h3>
The solution: Reverse Engineering!</h3>
<div>
<br />
The hardware, I presume, was actually produced by Efergy, but it seems the software for the "hub" device was provided by a company called <a href="http://www.hildebrand.co.uk/">Hildebrand</a> based out of London, and it's Hildebrand that actually receives the raw meter readings and stores them on their servers. The Efergy website hosts the interface but the raw data goes to a sensornet.info domain that is registered to Hildebrand.<br /></div>
<h3>
</h3>
<h3>
Boot Sequence</h3>
<div>
<br />
I have a box stamped with HK 1.1 Firmware AU on the bottom of it. I can't guarantee other regions behave the same way but here goes.</div>
<div>
<ol>
<li>DHCP request is broadcast to the network. Device waits for a response.</li>
<li>DHCP response received. Device starts resolving, using the DHCP-provided DNS server, the hostnames "uk.pool.ntp.org" and "ff.ee.dd.aabbcc.h2.sensornet.info", where "aabbccddeeff" is the MAC address of the device.</li>
<li>Device requests from the resolved sensornet host IP the URL https://<ip>/get_key.html. The response is HTTP/200 of the form "TT|ALPHANUMERIC".</li>
<li>Device requests from the same host IP the URL "https://<ip>/check_key.html?p=TT&ts=0000019D&h=<some hash>". HTTP/200 with an empty response is returned.</li>
<li><div class="p1">
Device starts posting periodically to https://<ip>/h2 with "Content-Type: application/eh-data" and content of the form: "123456|1|EFCT|P1,0.00.".</div>
</li>
<li><div class="p1">
Device periodically also sends requests to https://<ip>/h2 with "Content-Type: application/eh-ping" and an empty body, presumably just to let the service know it's alive.</div>
</li>
</ol>
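The MAC-to-hostname mapping in step 2 is a simple shuffle; sketched in Python (the function name is mine):

```python
def sensornet_hostname(mac):
    """Map a MAC address to the device's sensornet.info hostname.

    Observed pattern: MAC "aabbccddeeff" becomes
    "ff.ee.dd.aabbcc.h2.sensornet.info" — the last three octets reversed
    as separate labels, then the first three joined as one label.
    """
    mac = mac.lower().replace(":", "").replace("-", "")
    octets = [mac[i:i + 2] for i in range(0, 12, 2)]
    labels = octets[5:2:-1] + ["".join(octets[:3]), "h2", "sensornet.info"]
    return ".".join(labels)
```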
<div>
Some other nice tidbits:</div>
<div>
<ul>
<li>SSL is used but certificates are never checked, so a man-in-the-middle attack to read the data is trivial.</li>
<li>The "ts" GET argument in check_key.html is not a timestamp. I've seen it go backwards and it always seems close to zero. I suspect it's irrelevant, as we can return "success" on any data. We don't really care about authentication here.</li>
<li>The data format seems to be "<Sensor ID> | 1 | <sensor type> | <INPUT>,<Reading>."</li>
<li>I suspect the unit of measurement is milliamps but haven't yet confirmed this. The user will have to take voltage and power factor into account to work out kW and kWh.</li>
</ul>
</div>
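Putting the data-format bullet above into code, a small parser for the eh-data payloads might look like this (the dictionary keys are my own names for the fields):

```python
import re


def parse_eh_data(payload):
    """Parse one application/eh-data payload, e.g. "123456|1|EFCT|P1,0.00.".

    Format, as reverse-engineered:
      <sensor id> | 1 | <sensor type> | <input>,<reading>.
    """
    match = re.match(r"^(\d+)\|(\d+)\|(\w+)\|(\w+),([\d.]+)\.$", payload)
    if not match:
        raise ValueError("unrecognised eh-data payload: %r" % payload)
    sensor_id, _, sensor_type, channel, reading = match.groups()
    return {
        "sensor_id": sensor_id,
        "type": sensor_type,
        "input": channel,
        "reading": float(reading),  # possibly milliamps; unconfirmed
    }
```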
<h3>
</h3>
<h3>
Update: A fake cloudy thing.</h3>
</div>
<div>
<br />
I had a Raspberry Pi that I wasn't doing much with so I turned it into my data logger by running my own DHCP, DNS and HTTPS servers, each pointing the device to the rPi instead of the Hildebrand servers. I use a USB-to-Ethernet dongle to talk to the hub and the rPi's Ethernet adapter to talk to my LAN. Win! Source code is on github <a href="https://github.com/aarond10/powermeter_hub_server">here</a>.</div>
<div>
<br /></div>
<div>
<br /></div>
Gaijinhttp://www.blogger.com/profile/13150968867429127703noreply@blogger.com0tag:blogger.com,1999:blog-5619309154309075735.post-29759737817406769582014-09-20T23:05:00.003+10:002014-09-20T23:14:55.076+10:00Graphing HDD health with smartctl<div class="tr_bq">
I proudly built myself a front door for my TV cabinet recently - the very same TV cabinet that houses my NAS. Two weeks and two NAS crashes later (the box, coincidentally, uses the drives for swap), I discovered 2 of my 4 HDDs had started giving errors; one had completely died. Turns out this thing called "ventilation" is important after all! *shrugs*</div>
<br />
<a name='more'></a>Being a reasonably diligent fellow, I had backups. I didn't have to use them though. ZFS to the rescue and I replaced both drives, one at a time. In honour of this auspicious restoration of my data redundancy, I hereby present my latest in ghetto monitoring scripts:<br />
<br />
<b>smartctl_log.sh</b><br />
<blockquote class="tr_bq">
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">#!/bin/bash<br /># Run from a cronjob<br />TIMESTAMP=$(date +"%s")<br />for i in `seq 0 3`; do<br /> DEV=ada${i}<br /> smartctl --attributes /dev/${DEV} | grep "^[ 0-9]" | awk '{ print "'${TIMESTAMP},${DEV}',"$2","$4 }' >> /mnt/tank/logs/smartd.log<br />done</span></blockquote>
<b>gen_index.sh</b><br />
<blockquote>
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">#!/bin/bash<br />#<br /># Quick and dirty HTML generator for displaying smartctl stats.<br />#<br />FILENAME=$1<br />cat > index.html << EOF<br /><html><head><title>Smartctl Graphs</title></head><body><h1>Smartctl Graphs</h1><br />EOF<br />DRIVES=`cat ${FILENAME} | awk -F, '{print $2}' | sort | uniq`<br />TYPES=`cat ${FILENAME} | awk -F, '{print $3}' | sort | uniq`<br />MIN_TIMESTAMP=`date --date="last year" +"%s"`<br />for t in ${TYPES}; do<br /> rm -f /tmp/gnuplot.data.*<br /> for d in ${DRIVES}; do<br /> cat ${FILENAME} | grep "$d,$t" | awk -F, '{ if(int($1) > int('${MIN_TIMESTAMP}')) print $1" "$2" "$4 }' | sort -n > /tmp/gnuplot.data.${d}<br /> done<br /> cat > /tmp/gnuplot.cmd << EOF<br />set term png<br />set output "gen_${t}.png"<br />#set size 17,17<br />set title "${t}"<br />set style data fsteps<br />set timefmt "%s"<br />set format x "%Y/%m/%d %H:%M"<br />set yrange [0:]<br />set xdata time<br />set xtics rotate<br />set grid<br />set key bottom left<br />EOF<br /> echo -ne "plot " >> /tmp/gnuplot.cmd<br /> for d in ${DRIVES}; do<br /> echo -ne "'/tmp/gnuplot.data.${d}' using 1:3 title columnheader(2) with lines," >> /tmp/gnuplot.cmd<br /> done<br /> cat /tmp/gnuplot.cmd | gnuplot<br /> echo "<div style=\"float:left;width:340px;\"><img width=\"320\" src=\"gen_${t}.png\"></div>" >> index.html<br />done<br />echo "<div style=\"clear:both\"><center>Generated at `date` on `hostname`</center></div>" >> index.html<br />echo "</body></html>" >> index.html</span></blockquote>
This monstrosity produces glorious graphs like these:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGJOneeZ-UBWv7XVubyhfnWRH8KIi0F72pA1_tLBytqJzMrWSXHjlHXxeau0BqBhEdc7VH5bhyphenhyphen3RRxCvabMKeZJ4djw_ppIKiZgycHlMvBgpqn-OcSJEG3TJNvhN7yShcpsBbOZcaKs723/s1600/Selection_001.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGJOneeZ-UBWv7XVubyhfnWRH8KIi0F72pA1_tLBytqJzMrWSXHjlHXxeau0BqBhEdc7VH5bhyphenhyphen3RRxCvabMKeZJ4djw_ppIKiZgycHlMvBgpqn-OcSJEG3TJNvhN7yShcpsBbOZcaKs723/s1600/Selection_001.png" height="195" width="400" /></a></div>
<br />
Now these scripts are not exactly shining examples of what you should do. They're probably more like counter-examples. For starters, since the log file only ever grows, regeneration is going to slow down linearly the longer you run it.<br />
<br />
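One cheap mitigation for that linear slowdown is to prune old rows before regenerating. A sketch, assuming the "timestamp,device,attribute,value" lines the logging script writes (`prune_log` is my own name):

```python
import csv
import time


def prune_log(path, max_age_days=365):
    """Rewrite the smartd.log CSV, dropping rows older than max_age_days.

    Assumes rows of the form "timestamp,device,attribute,value" with a
    Unix timestamp in the first column.
    """
    cutoff = time.time() - max_age_days * 86400
    with open(path) as f:
        rows = [r for r in csv.reader(f) if r and float(r[0]) >= cutoff]
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(rows)
```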
In any case, for my fellow lazy folk, this may be enough for you as it was for me. I'm mainly interested in running this after the fact when I notice issues, so I can look for a correlated downward trend in a graph and more confidently predict a drive's pending demise. YMMV.<br />
<div>
<span style="font-family: Courier New, Courier, monospace;"><br /></span></div>
Gaijinhttp://www.blogger.com/profile/13150968867429127703noreply@blogger.com0tag:blogger.com,1999:blog-5619309154309075735.post-15446083661056959642013-06-15T15:06:00.000+10:002014-09-20T23:11:58.293+10:00rPi + tvheadend + shepherdMy home TV setup involves a file server (FreeNAS), a Mac Mini running XBMC and a Raspberry Pi running TVHeadend.<br />
<br />
In setting up the rPi side of this, I came across a lot of scattered instructions so I thought I'd bring them together. Nothing is particularly difficult but I suspect I'll be doing this again at some stage in the future so for the sake of myself and others...<br />
<br />
<a name='more'></a><b>Edit: </b>It's actually pretty easy to use Raspbian and build tvheadend from head now. This will save you messing with raspbmc's init scripts, and the kernel modules mentioned below are now in the stock Raspbian kernel, so your job there also just got easier. Finally, tvheadend does such an awesome job at EIT OTA tv guide data that I have dropped shepherd completely and live happily with guide information that only gives me about a week's notice of upcoming programming.<br />
<h4>
Raspbmc</h4>
<ol>
<li><a href="http://www.raspbmc.com/download/">Download raspbmc</a> (kernel 3.6.11 at the time of writing). The network-boot version was looping, failing to install modules, so I went with the <a href="http://download.raspbmc.com/downloads/bin/filesystem/prebuilt/raspbmc-final.img.gz">standalone image</a>.</li>
<li>dd if=<file> of=/dev/sdX</li>
<li>Boot pi.</li>
<li>ssh pi@<ip> (password raspberry)</li>
<li>Setup locale as prompted.</li>
<li>Disable xbmc and patch tvheadend script to start after mounting filesystems:</li>
</ol>
<blockquote class="tr_bq">
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> $ sudo -i<br /> # chmod a-x /scripts/xbmc-watchdog.sh<br /> # vi /etc/init/xbmc.conf<br /> -start on (started dbus and started mountall)<br /> +#start on (started dbus and started mountall)<br /> # vi /etc/init/tvheadend.conf<br /> -start on (started xbmc and enable-tvheadend)<br /> +start on (started dbus and stopped mountall)</span></blockquote>
<br />
<h4>
Shepherd</h4>
Install dependencies for and download <a href="http://svn.whuffy.com/">shepherd</a>:
<br />
<blockquote class="tr_bq">
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">$ sudo apt-get update && sudo apt-get install libxml-simple-perl libalgorithm-diff-perl libgetopt-mixed-perl libdata-dumper-simple-perl libdate-manip-perl liblist-compare-perl libdatetime-format-strptime-perl libhtml-parser-perl libxml-dom-perl libgd-gd2-perl libarchive-zip-perl libio-string-perl xmltv libdbi-perl libsort-versions-perl && wget 'http://www.whuffy.com/shepherd/shepherd' && perl shepherd </span>
</blockquote>
... answer prompts then go to bed. This will take a while. ...
<br />
<br />
To have the tv_grab_au script picked up by tvheadend, symlink it to /usr/bin:<br />
<blockquote class="tr_bq">
<span style="font-family: 'Courier New', Courier, monospace; font-size: x-small;">$ sudo ln -s /home/pi/.shepherd/tv_grab_au /usr/bin/tv_grab_au</span></blockquote>
<br />
<h4>
<span style="font-family: inherit;">RTL2832U kernel module</span></h4>
<div>
<span style="font-family: inherit;">I've tried an AF9015 device (TinyTwin V3) and the quality of the drivers is terrible. It half works but has issues with I2C signalling, the IR remote and use of both tuners concurrently. In the end I gave up and went with an RTL2832U single-tuner device I had (EZCAP). The kernel module for it doesn't come with stock raspbmc. You can build it yourself or opt for the lazy option and download it <a href="http://forum.stmlabs.com/showthread.php?tid=6780">here</a>.</span></div>
<blockquote class="tr_bq">
<span style="font-family: inherit;"><build/download module and copy to /lib/modules/3.6.11/kernel/drivers/media/dvb/dvb-usb/><br />$ sudo depmod -a</span></blockquote>
<br />
<h4>
<span style="font-family: inherit;"><span style="line-height: 18px;">Reboot</span></span></h4>
The changes to date should take effect. You should be able to connect to tvheadend at http://<ip>:9981. Add muxes for Australian locations and select the XMLTV: tv_grab_au option in the EPG settings. The first EPG run will take a while. Shepherd is intentionally very slow with its EPG crawling.<br />
<br />Gaijinhttp://www.blogger.com/profile/13150968867429127703noreply@blogger.com0tag:blogger.com,1999:blog-5619309154309075735.post-54418205395547337112013-05-19T23:40:00.000+10:002013-06-13T08:58:26.707+10:00The state of 3D printingThe world of 3D printing has intrigued me for years. An industrial designer friend of mine has been tinkering with 3D printed prototypes and objects for a few years now and I've been surveying the state of things for a while. I particularly liked the look of the <a href="http://www.kickstarter.com/projects/formlabs/form-1-an-affordable-professional-3d-printer">Form1</a> on Kickstarter, although the limited choice of material is a potential issue. Anyway, for whatever reason, after a few hours of trawling Shapeways, I feel compelled to rehash my favourites:<br />
<br />
<b>8.</b> I have trouble believing the claim that this actually works. The sheer number of components here and the tolerances they must have is incredible.<br />
<a href="http://www.shapeways.com/model/247069/animaris-geneticus-parvus-5.html?li=productBox-search"><img alt="#5" height="237" src="http://images1.sw-cdn.net/model/picture/674x501_247069_127086_1338413387.jpg" width="320" /></a><br />
<b><br /></b>
<b>7.</b> An elegant chopstick holder that would look at home at the most sophisticated dinner table.<br />
<a href="http://www.shapeways.com/model/167484/hashioki-one.html?li=moreFromDesigner&material=6"><img alt="With hashi 1" height="236" src="http://images1.sw-cdn.net/model/picture/674x501_167484_110902_1338413386.jpg" title="Hashioki" width="320" /></a><br />
<b><br /></b>
<b>6.</b> A lens cap holder that attaches to your camera strap. Ingeniously simple and useful and it looks so professional it's hard to tell from the picture what the object actually is.<br />
<a href="http://www.shapeways.com/model/284994/lens-cap-holder-customizable.html?li=productBox-search"><img alt="Description" height="237" src="http://images1.sw-cdn.net/model/picture/674x501_284994_133280_1338413387.jpg" width="320" /></a><br />
<br />
<b>5.</b> A Galaxy S3 case + credit card holder + money clip + bottle opener. Seriously, I wish I had a Galaxy S3 right now.<br />
<a href="http://www.shapeways.com/model/817469/galaxy-s3-case-w-card-holder-money-clip-n-opene.html?li=productBox-search"><img height="237" src="http://images1.sw-cdn.net/model/picture/674x501_817469_694690_1354860189.jpg" width="320" /></a><br />
<br />
<b>4.</b> A blast from my childhood past! Evil tentacle!<br />
<a href="http://www.shapeways.com/model/561183/day-of-the-tentacle-purple-6cm.html?li=productBox-search"><img height="237" src="http://images1.sw-cdn.net/model/picture/674x501_561183_769276_1358855427.jpg" width="320" /></a><br />
<br />
<b>3.</b> Serious jewellery. Sure, you have to take it to a professional jeweller to get it properly made, but the fact that designs like this can start from a 3D-printed base is still very impressive.<br />
<a href="http://www.shapeways.com/model/445577/gcd6-heart-shaped-engagement-ring.html?li=productBox-search"><img height="237" src="http://images1.sw-cdn.net/model/picture/674x501_445577_156389_1338413388.jpg" width="320" /></a><br />
<br />
<b>2.</b> You can actually get stuff printed in stainless steel now. Awesome!<br />
<a href="http://www.shapeways.com/model/402747/chain-mesh-bowl-6in.html?li=productBox-search"><img alt="Size example" height="237" src="http://images1.sw-cdn.net/model/picture/674x501_402747_149727_1338413388.jpg" width="320" /></a><br />
<br />
<br />
<b>1.</b> This mug looks incredible. Printed in ceramic, the detail is amazing. The price is the only reason I've held back, and I assume prices will drop significantly with time.<br />
<a href="http://www.shapeways.com/model/631850/snake-mug.html?li=productBox-search"><img height="237" src="http://images1.sw-cdn.net/model/picture/674x501_631850_491819_1342181124.jpg" width="320" /></a><br />
<br />
<br />
Gaijinhttp://www.blogger.com/profile/13150968867429127703noreply@blogger.com0tag:blogger.com,1999:blog-5619309154309075735.post-67059982646205259362013-03-24T20:15:00.000+11:002013-03-25T01:06:18.286+11:00You CAN still get a pre-paid data SIM card in JapanI read a lot of blogs stating you couldn't get a pre-paid SIM card in Japan. It's true that the big players (NTT Docomo, Softbank, AU) don't sell these anymore but turns out that there is still at least one way to do this.<br />
<br />
A company called <a href="http://www.bmobile.ne.jp/lineup.html">b-Mobile</a> sells <a href="http://www.bmobile.ne.jp/l_1gb/index.html">pre-paid SIM cards valid for 1 month</a> (about $USD32) and 3 month (about $USD100) with 1GB of data each. Yes, I wonder if anyone goes with the 3 month option...<br />
<br />
The cards don't support calls but they run on NTT Docomo's LTE, HSDPA, 3G networks and data coverage seems <a href="http://www.nttdocomo.co.jp/support/area/index.html">very good</a>.<br />
<br />
You can only get them from big retailers. In Osaka, that means BIC Camera in Namba or, in my case, Yodobashi Camera in Umeda. You also need to activate them in Japanese - with a Japanese mobile apparently. So it definitely helps if you speak Japanese, have a Japanese friend, or if you're super convincing, maybe you can persuade the sales person to do it for you. From what I could tell, the U300 SIM they used to offer in English, pre-activated 24 hours after mail order purchase is no longer on offer. :(<br />
<br />
In my case, I've been running on a 1GB, 1 month MicroSIM for a week now and using SkypeIn for receiving incoming calls. Calls are very high latency (almost unusable) but it has served its purpose so far - if I get a call that isn't working well, I just fall back to contact via email.<br />
<br />
Best of luck Japanese travellers!Gaijinhttp://www.blogger.com/profile/13150968867429127703noreply@blogger.com0Osaka, Osaka Prefecture, Japan34.6937378 135.5021650999999634.4848083 135.17944159999996 34.902667300000005 135.82488859999995tag:blogger.com,1999:blog-5619309154309075735.post-67695586530099177162013-03-08T00:34:00.003+11:002013-03-08T00:34:29.514+11:00The world's ugliest webm streaming webserver?I was trying to find an easy way to get low-latency video from a webcam to a remote browser today. Requirements:<br />
<br />
<ol>
<li>Quick deployment</li>
<li>Runs on Raspberry Pi</li>
<li>Runs with out-of-the-box debs (see quick deployment)</li>
</ol>
<div>
My hacky solution was a Django Python app that uses the StreamingHttpResponse class, gstreamer and a pipe. Disgusting, but it works. Sadly, latency is about 10 seconds even to localhost, so it's not exactly live... </div>
<div>
<br /></div>
<div>
<blockquote class="tr_bq">
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">import os<br /><br />from django.http import StreamingHttpResponse<br /><br />import pygst<br />pygst.require("0.10")<br />import gst</span></blockquote>
</div>
<div>
<blockquote class="tr_bq">
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">def grab(request):<br /> """ Return a webm live stream from the first attached webcam. """<br /> class VideoStreamer():<br /> def __init__(self):<br /> self.pipein, self.pipeout = os.pipe()<br /> self.player = gst.parse_launch ("v4l2src ! video/x-raw-yuv,width=640,height=480,framerate=10/1 ! ffmpegcolorspace ! vp8enc max-latency=1 lag-in-frames=1 ! webmmux name='m' streamable=true ! fdsink fd=%d" % self.pipeout)<br /> self.player.set_state(gst.STATE_PLAYING)<br /> def start(self):<br /> fd = os.fdopen(self.pipein)<br /> try:<br /> while True:<br /> yield fd.read(4096)<br /> except Exception, e:<br /> print "Exception was ", e<br /> def __del__(self):<br /> self.player.set_state(gst.STATE_NULL)<br /> os.close(self.pipeout)<br /> return StreamingHttpResponse(VideoStreamer().start(), content_type="video/webm")</span></blockquote>
<br />
Requires pygst and django 1.5 (use pip).</div>
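To actually reach the view from a browser, it still needs a URL route. A minimal Django 1.5-era urls.py might look like the following sketch — note that the module path "webcam.views" is my assumption for illustration, not from the original app:

```python
# urls.py -- hypothetical wiring for the grab() view above (Django 1.5 syntax).
# "webcam.views" is an assumed module path; substitute your own app's.
from django.conf.urls import patterns, url

urlpatterns = patterns('',
    url(r'^stream\.webm$', 'webcam.views.grab', name='webcam-stream'),
)
```

Pointing a webm-capable browser (or a video tag) at /stream.webm should then start pulling the stream.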
Gaijinhttp://www.blogger.com/profile/13150968867429127703noreply@blogger.com0tag:blogger.com,1999:blog-5619309154309075735.post-19461039146199590282013-03-04T12:37:00.001+11:002013-03-04T12:37:46.498+11:00Bricked Netgear Stora? Arduino to the rescue!<div class="separator" style="clear: both; text-align: left;">
My <a href="http://japanesesoapbox.blogspot.com.au/2013/01/datamule-my-first-android-app.html">latest attempt</a> at getting offsite backups working for me in the most convenient manner possible involves scattering storage devices around at places I frequently visit, such as relatives' places. I thought I'd re-purpose a Netgear Stora device I had lying about (P.S. Don't ever buy one of these <a href="http://forum1.netgear.com/showthread.php?t=65342">if you value your privacy</a>) by modding the firmware on it. Turns out, an Arduino makes a great TTL serial adapter if you short the reset pin to GND. :) </div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
I'm also particularly proud of my ghetto MacGyver-like pin connectors. They were created with PVC tape rolled around the leg of a resistor and then cut into thirds and slipped over each pin to hold them in place. (Yes, I need to get myself some more electronics gear...)</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
Pin outs <a href="http://www.openstora.com/wiki/index.php?title=Root_Access_Via_Serial_Console">here</a> in case anyone stumbles across this wanting to do something similar. :)</div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiiGdbxxbzDYqrhCESR5Gxt6QbwvK-fnjPqamlRVNFArT9sv7WRUYDMBw0IC4-xnLbuyNxiO9SkhmtJDGeZXACK7iWC_dzGbE9uQcsNsbzRBuziz5LtTN4wpj-W85oW03cmFll4vCzcecGd/s1600/IMG_20130304_121028.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiiGdbxxbzDYqrhCESR5Gxt6QbwvK-fnjPqamlRVNFArT9sv7WRUYDMBw0IC4-xnLbuyNxiO9SkhmtJDGeZXACK7iWC_dzGbE9uQcsNsbzRBuziz5LtTN4wpj-W85oW03cmFll4vCzcecGd/s200/IMG_20130304_121028.jpg" width="150" /></a> <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj00R6jZhkV9EqvtSDOXjv7n0zqauCtSRQ_cdelKeghziCywkwQ9kIouOC5P2SAviXJR5oEC2mYWDhtsjbFqtbfXyU1puE4C2HZTSZTar_Q5jHTDFFvMcv1N7cHU2zYf2vdNmPpPFbEy_Oq/s1600/IMG_20130304_121041.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj00R6jZhkV9EqvtSDOXjv7n0zqauCtSRQ_cdelKeghziCywkwQ9kIouOC5P2SAviXJR5oEC2mYWDhtsjbFqtbfXyU1puE4C2HZTSZTar_Q5jHTDFFvMcv1N7cHU2zYf2vdNmPpPFbEy_Oq/s200/IMG_20130304_121041.jpg" width="150" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnXbx2eiV9m9T3aYk1lMTQiar1Fo-lvYFjkXP2Enpsy7t3eqzmkd7Sr47eYNwrGGU5RuIKwCN2Rrz5vFoB8_ADb-Ipptu3XwTq1JMJgmyWQRyB7F7SGDbkaIkVjvTgaQ_8-3Xk0Lg72wqI/s1600/IMG_20130304_121049.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnXbx2eiV9m9T3aYk1lMTQiar1Fo-lvYFjkXP2Enpsy7t3eqzmkd7Sr47eYNwrGGU5RuIKwCN2Rrz5vFoB8_ADb-Ipptu3XwTq1JMJgmyWQRyB7F7SGDbkaIkVjvTgaQ_8-3Xk0Lg72wqI/s200/IMG_20130304_121049.jpg" width="150" /></a></div>
Gaijinhttp://www.blogger.com/profile/13150968867429127703noreply@blogger.com0tag:blogger.com,1999:blog-5619309154309075735.post-49093516718543039542013-01-12T22:34:00.001+11:002013-01-12T22:34:14.619+11:00DataMule - My first Android app. :) <span style="font-family: inherit;">I've been toying with various backup solutions for months now. I've tried </span><a href="https://tahoe-lafs.org/">tahoe-lafs</a>. I tried my own <a href="http://japanesesoapbox.blogspot.com.au/2012_03_01_archive.html">hacky scripts</a> and before that I even had the audacious idea of writing my own P2P backup software from scratch, starting at the low-level <a href="http://japanesesoapbox.blogspot.com.au/2011/12/epollthreadpool.html">RPC</a> layer. My current approach is to use <a href="http://duplicity.nongnu.org/">duplicity</a> and throw the files on <a href="https://developers.google.com/storage/">Google's cloud storage</a>. The biggest problem with this latest approach is that full backups of my 100GB of data take about 12.5 days of maxing out my ADSL uplink. QoS eases some of the frustrations of trying to use the internet while this is happening but doesn't completely remove them.<br />
<br />
So DataMule is going to be a simple Android app that includes an SFTP/SSH client and configuration for pairs of WIFI SSIDs.<br />
<br />
When you come into range of your "source" SSID, the SD card partition on your phone gets filled up with data copied via SFTP from your source server (until full). When you come into range of your "destination" SSID, the process is reversed and data is copied to the destination machine via SFTP. To complete the data migration, rsync is kicked off on the destination host with a flag to delete after copying. The rsync will connect directly back to the source and copy (from the source) over the newly transferred file. This will just kick off a file-hash verification and either delete the file (if it's OK) or fix it if it's not (with hopefully minimal bandwidth).<br />
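The verification idea — only trust the ferried bytes once a content hash of both copies agrees — can be sketched in Python. This is a simplified, local-only illustration of the check; in the app itself it's rsync's checksum mode doing this work, and far more efficiently:

```python
# Sketch of the post-transfer check: hash both copies and treat the
# transfer as complete only when the digests agree. A simplified stand-in
# for rsync's checksum verification; paths here are illustrative.
import hashlib

def file_digest(path, chunk_size=1 << 16):
    """SHA-256 of a file, read in chunks so huge files don't fill RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            h.update(block)
    return h.hexdigest()

def transfer_verified(source_path, dest_path):
    """True if the destination copy matches the source byte-for-byte."""
    return file_digest(source_path) == file_digest(dest_path)
```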
<br />
An HDD at work, an HDD at my parents' place and an Android app and I'm all set. So far I've got a skeleton app working, but it's very, very rough.<br />
<br />
More updates to come!Gaijinhttp://www.blogger.com/profile/13150968867429127703noreply@blogger.com0tag:blogger.com,1999:blog-5619309154309075735.post-46029913277492894752012-09-16T01:26:00.000+10:002012-09-16T01:26:47.977+10:00Hacking at heat-maps and Google Maps API<div class="separator" style="clear: both; text-align: left;">
I've been hacking away at a map visualization that I'm surprised hasn't appeared elsewhere already - a heatmap that interpolates between sample points. </div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
Every other heat map I've come across approximates interpolation by rendering circles with fading alpha at sample points and relies on the density of points to achieve a smooth result.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
Below is an early screenshot of a canvas-based heatmap overlay I'm writing that is built from the Delaunay triangulation of the underlying data points (that's quite a mouthful!). </div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
I'm using a scanline-based Gouraud shading algorithm to render the triangles to the canvas myself, having tried and failed to get WebGL, SVG and CSS to do the job for me (none of these can draw triangles with three differently coloured vertices with any serious speed).</div>
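The core of that per-triangle shading is just barycentric interpolation of the three vertex values — the operation Gouraud shading performs for every pixel on a scanline. Sketched in Python for clarity (the real overlay does the equivalent in JavaScript on canvas):

```python
# Interpolate a scalar value (e.g. a price sample) across a triangle from
# its three vertices -- the blend Gouraud shading computes per pixel.
def barycentric_weights(p, a, b, c):
    """Weights (wa, wb, wc) such that wa*a + wb*b + wc*c == p."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    wa = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    wb = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return wa, wb, 1.0 - wa - wb

def shade(p, triangle, values):
    """Value at point p inside the triangle, blended from vertex values."""
    wa, wb, wc = barycentric_weights(p, *triangle)
    return wa * values[0] + wb * values[1] + wc * values[2]
```

A scanline renderer avoids recomputing this per pixel by stepping the interpolated value incrementally along each row, but the result is identical.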
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizo2sSwAnH3Y6HMvyu6cHHVIQFUnvgczw-OSEQlRZ5wQ0yl1qX3Z7cSWANmxuM3ZZ92sv9aSRcI703DwlJxDZZU7F3Rl6P-ZD0WKLaFYPWxtezLjU544bIwSju9gvHmXrEOmYIhGrSn0ZW/s1600/Screenshot+from+2012-09-16+01:07:43.png" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="195" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizo2sSwAnH3Y6HMvyu6cHHVIQFUnvgczw-OSEQlRZ5wQ0yl1qX3Z7cSWANmxuM3ZZ92sv9aSRcI703DwlJxDZZU7F3Rl6P-ZD0WKLaFYPWxtezLjU544bIwSju9gvHmXrEOmYIhGrSn0ZW/s400/Screenshot+from+2012-09-16+01:07:43.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Prices for 2BR units across Sydney's inner west.</td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: left;">
If there is interest, I will clean up the code once I have it working reasonably well and post it somewhere. For now, I'm rethinking my choice of Gouraud shading because with under-sampled data like I have here, it gives some very ugly, sharp lines.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="" style="clear: both; text-align: left;">
I'm currently using it to display property values but hopefully others can think up some more good uses for it. </div>
<br />Gaijinhttp://www.blogger.com/profile/13150968867429127703noreply@blogger.com0tag:blogger.com,1999:blog-5619309154309075735.post-9818618561196029052012-05-25T23:34:00.003+10:002012-05-25T23:41:21.713+10:00GNU SDR and EZCap<div class="separator" style="clear: both; text-align: center;">
</div>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzWGYCWr6_nqgZv8YhWTe6rVi2_sEUByOJTC8xIRYLvrQXrkj-6RNlWuJjxLGxXhPXx6GJVIXD8flByNcs84OMMlvGYPsy7qDIgXOm2fhliQJzBHHgVAw70CdJ9E12iKuH7Ea6139UMSxK/s1600/IMG_20120525_232232.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="173" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzWGYCWr6_nqgZv8YhWTe6rVi2_sEUByOJTC8xIRYLvrQXrkj-6RNlWuJjxLGxXhPXx6GJVIXD8flByNcs84OMMlvGYPsy7qDIgXOm2fhliQJzBHHgVAw70CdJ9E12iKuH7Ea6139UMSxK/s200/IMG_20120525_232232.jpg" width="200" /></a>Another nerdy post. I seem to be getting worse! :P<br />
<br />
My ezcap USB dongle (RTL2832U/E4000) arrived from dealextreme yesterday and it took a whole hour of tweaking settings and wikipedia browsing to go from fuzzy crackle to working software defined radio FM decoding (and I think I understand the basics of it now). I started with someone else's grc file (although I can't remember whose) and tweaked it until I got to this setup.<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgoH-ndezXCfS-pJSt-4npkFI0x3YK0bqdB4i9B9lMy9CxodQbgmcaOaMHEPIeGW2wFOdBGtgtjgqL8QVL8sw6DHe_m0ql8YPQ4HH6cYuTQibBRGeBg10A-6-cQGe4QGMHLSxQwW6SqOLCl/s1600/GNURadio+FM+Receiver.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="205" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgoH-ndezXCfS-pJSt-4npkFI0x3YK0bqdB4i9B9lMy9CxodQbgmcaOaMHEPIeGW2wFOdBGtgtjgqL8QVL8sw6DHe_m0ql8YPQ4HH6cYuTQibBRGeBg10A-6-cQGe4QGMHLSxQwW6SqOLCl/s400/GNURadio+FM+Receiver.png" width="400" /></a></div>
<br />
In the top FFT you can see the station I'm tuned to in the middle and two stations on either side (they seem to be spaced 800 kHz apart in Sydney). The bottom shows the filtered signal for the station I'm tuned to.<br />
<br />
An amateur radio expert would probably laugh at me as I arrived at these settings through nothing but a bit of a rough direction and a lot of sheer luck but it seems to sound pretty good! Now I need to think of a legitimate use for this. :)<br />
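For anyone wondering what the FM decoding block in a graph like this actually does: the message rides on the signal's instantaneous frequency, which is just the sample-to-sample phase change of the complex baseband signal. A bare-bones pure-Python sketch of quadrature demodulation (no filtering, decimation or de-emphasis, which a real receiver flowgraph adds):

```python
import cmath
import math

def fm_demodulate(iq, sample_rate):
    """Quadrature FM demod: instantaneous frequency (Hz) per sample.

    The phase of iq[n] * conj(iq[n-1]) is the phase advance between
    consecutive samples, proportional to the frequency deviation.
    """
    out = []
    for prev, cur in zip(iq, iq[1:]):
        dphi = cmath.phase(cur * prev.conjugate())
        out.append(dphi * sample_rate / (2 * math.pi))
    return out
```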
<br />Gaijinhttp://www.blogger.com/profile/13150968867429127703noreply@blogger.com0tag:blogger.com,1999:blog-5619309154309075735.post-26497811459123045482012-04-29T20:59:00.003+10:002012-09-09T13:04:03.764+10:00An experiment in Van Emde Boas layoutsGiven an arbitrarily large sorted, balanced binary tree of values, the obvious way to search for elements is binary search. In undergraduate classes I learned that a binary search requires log(n) operations and is thus as efficient as you can get without throwing additional storage space at the problem. One thing my lecturer didn't dive into though was the effect of CPU cache on this otherwise simple little log(n) algorithm.<br />
<br />
<a name='more'></a><br />
<br />
Jumping around in memory comes with significant costs. How significant? In the case of my Intel i5 quad-core 3Ghz CPU, an L1 cache miss costs 4ns (~12 cycles). An L2 cache miss costs 10ns (~30 cycles). An L3, 39ns (~117 cycles). RAM 39-60ns (117-180 cycles).<br />
<br />
While binary search sounds great on paper, it turns out that repeatedly jumping back and forth in memory can really clock up a tonne of wasted cycles if you're doing it a lot. What makes matters worse is the fact that different architectures and even generations of CPUs will have different sized caches with different performance characteristics.<br />
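To put rough numbers on that claim — an illustrative back-of-envelope only, where the number of cache-resident top levels is a guess rather than a measurement:

```python
# Back-of-envelope: a binary search over n keys does ~log2(n) probes; in a
# flat sorted array most probes land on distinct, cold cache lines. The
# latencies and "hot level" count below are assumed figures, not measured.
import math

def est_search_ns(n, miss_ns=40, hit_ns=1, hot_levels=6):
    probes = int(math.ceil(math.log2(n)))
    cold = max(probes - hot_levels, 0)  # deep levels likely miss to RAM
    hot = probes - cold                 # top levels likely stay cached
    return cold * miss_ns + hot * hit_ns
```

With these assumed numbers, a search over 16M keys (24 probes, 18 cold) costs on the order of 700ns — almost all of it stalled on memory, which is exactly the waste a cache-aware layout tries to claw back.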
<br />
Enter the <a href="http://en.wikipedia.org/wiki/Van_Emde_Boas_tree">van Emde Boas layout</a> - a data layout designed to minimise cache misses without regard for the size of the cache. (I have a coded up visual comparison in JavaScript <a href="http://www.aarondrew.com/van_emde_boas.html">here</a>.)<br />
<br />
The layout breaks a tree into sqrt(n) sub trees, each of sqrt(n) nodes. This continues recursively until the trees contain a single node. The idea is to try to locate related data close to each other in memory to minimise cache misses. The recursive nature of the layout means that it works relatively well regardless of cache size.<br />
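That recursive split can be made concrete with a small sketch (Python for brevity; unlike my C++ traverser below, which restricts depths to 2^(2^n), this generic version simply halves the height at each step):

```python
# Emit the BFS indices (1-based; children of node v are 2v and 2v+1) of a
# complete binary tree in van Emde Boas memory order: the top half-height
# sub-tree first, then each bottom sub-tree laid out contiguously,
# recursively.
def veb_order(height):
    def recurse(root, h):
        if h == 1:
            return [root]
        top_h = h // 2            # height of the top sub-tree
        bot_h = h - top_h         # height of each bottom sub-tree
        order = recurse(root, top_h)
        # Leaves of the top sub-tree, in BFS numbering.
        first_leaf = root << (top_h - 1)
        for leaf in range(first_leaf, first_leaf + (1 << (top_h - 1))):
            order += recurse(2 * leaf, bot_h)      # left bottom sub-tree
            order += recurse(2 * leaf + 1, bot_h)  # right bottom sub-tree
        return order
    return recurse(1, height)
```

For a height-4 tree this gives [1, 2, 3, 4, 8, 9, 5, 10, 11, ...]: the root triangle {1, 2, 3} sits first in memory, followed by each depth-2-rooted triangle as a contiguous run, so a root-to-leaf walk touches far fewer cache lines than BFS order does.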
<br />
This layout obviously adds computational complexity. The question I wanted answered was whether or not the benefits outweigh the costs. Enter experiment time!<br />
<ol>
<li>Single templatized C++ class for binary searching</li>
<li>A "Traverser" class for each of: in-order (sorted array), tree-order (think breadth-first-search), and van Emde Boas order</li>
<li>CacheGrind</li>
</ol>
<div>
My vEB Traverser class:</div>
<div>
<blockquote class="tr_bq">
<pre style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">class vEBTraverser {
 public:
  uint64_t root() {
    d = 0;
    cix = 1;
    return 1;
  }
  uint64_t left() {
    d++;
    cix <<= 1;
    return vEBIndex();
  }
  uint64_t right() {
    d++;
    cix = (cix << 1) + 1;
    return vEBIndex();
  }

 private:
  uint64_t cix;  // 1-based BFS index of the current node.
  uint64_t d;    // Depth of the current node (root is 0).

  uint64_t vEBIndex() {
    // Start with largest sub-tree, work down to smallest.
    uint64_t ix = 1;
    for (int b = 4; b >= 0; --b) {  // int, not char: char may be unsigned.
      const uint64_t b_val = 1L << b;
      if (d & b_val) {
        // Determine sub triangle and add start offset to index.
        const uint64_t masked_d = d & (b_val - 1);
        const uint64_t new_node_size = (1L << b_val) - 1;
        uint64_t subtri_ix = (cix >> masked_d) & new_node_size;
        ix += new_node_size * (1L + subtri_ix);
      }
    }
    return ix;
  }
};</pre>
</blockquote>
</div>
<h2>
Results</h2>
<h3>
Cache Misses</h3>
<blockquote class="tr_bq">
<pre style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">$ for i in inorder bfs vEB; do echo; echo; echo ======= $i =======; valgrind --tool=cachegrind --D1=32768,8,64 --LL=6291456,12,64 ./a.out $i; done
======= inorder =======
==23833== Cachegrind, a cache and branch-prediction profiler
==23833== Copyright (C) 2002-2010, and GNU GPL'd, by Nicholas Nethercote et al.
==23833== Using Valgrind-3.6.1-Debian and LibVEX; rerun with -h for copyright info
==23833== Command: ./a.out inorder
==23833==
--23833-- warning: Unknown Intel cache config value (0x76), ignoring
--23833-- warning: Unknown Intel cache config value (0xff), ignoring
--23833-- warning: L2 cache not installed, ignore LL results.
In Order
Searched first half of keyspace in 674778 usec.
Searched last half of keyspace in 678757 usec.
Searched whole keyspace in 680336 usec.
==23833==
==23833== I refs: 216,654,084
==23833== I1 misses: 1,013
==23833== LLi misses: 1,011
==23833== I1 miss rate: 0.00%
==23833== LLi miss rate: 0.00%
==23833==
==23833== D refs: 56,351,206 (37,579,957 rd + 18,771,249 wr)
==23833== D1 misses: 6,822,090 ( 6,808,199 rd + 13,891 wr)
==23833== LLd misses: 11,053 ( 5,878 rd + 5,175 wr)
==23833== D1 miss rate: 12.1% ( 18.1% + 0.0% )
==23833== LLd miss rate: 0.0% ( 0.0% + 0.0% )
==23833==
==23833== LL refs: 6,823,103 ( 6,809,212 rd + 13,891 wr)
==23833== LL misses: 12,064 ( 6,889 rd + 5,175 wr)
==23833== LL miss rate: 0.0% ( 0.0% + 0.0% )
======= bfs =======
==23836== Cachegrind, a cache and branch-prediction profiler
==23836== Copyright (C) 2002-2010, and GNU GPL'd, by Nicholas Nethercote et al.
==23836== Using Valgrind-3.6.1-Debian and LibVEX; rerun with -h for copyright info
==23836== Command: ./a.out bfs
==23836==
--23836-- warning: Unknown Intel cache config value (0x76), ignoring
--23836-- warning: Unknown Intel cache config value (0xff), ignoring
--23836-- warning: L2 cache not installed, ignore LL results.
Breadth-First Order
Searched first half of keyspace in 492395 usec.
Searched last half of keyspace in 558116 usec.
Searched whole keyspace in 513091 usec.
==23836==
==23836== I refs: 169,892,772
==23836== I1 misses: 1,018
==23836== LLi misses: 1,016
==23836== I1 miss rate: 0.00%
==23836== LLi miss rate: 0.00%
==23836==
==23836== D refs: 45,135,648 (37,489,692 rd + 7,645,956 wr)
==23836== D1 misses: 2,196,869 ( 2,182,966 rd + 13,903 wr)
==23836== LLd misses: 11,055 ( 5,880 rd + 5,175 wr)
==23836== D1 miss rate: 4.8% ( 5.8% + 0.1% )
==23836== LLd miss rate: 0.0% ( 0.0% + 0.0% )
==23836==
==23836== LL refs: 2,197,887 ( 2,183,984 rd + 13,903 wr)
==23836== LL misses: 12,071 ( 6,896 rd + 5,175 wr)
==23836== LL miss rate: 0.0% ( 0.0% + 0.0% )
======= vEB =======
==23839== Cachegrind, a cache and branch-prediction profiler
==23839== Copyright (C) 2002-2010, and GNU GPL'd, by Nicholas Nethercote et al.
==23839== Using Valgrind-3.6.1-Debian and LibVEX; rerun with -h for copyright info
==23839== Command: ./a.out vEB
==23839==
--23839-- warning: Unknown Intel cache config value (0x76), ignoring
--23839-- warning: Unknown Intel cache config value (0xff), ignoring
--23839-- warning: L2 cache not installed, ignore LL results.
vEB Order (traverser 24)
Searched first half of keyspace in 1354290 usec.
Searched last half of keyspace in 1370509 usec.
Searched whole keyspace in 1367261 usec.
==23839==
==23839== I refs: 488,281,602
==23839== I1 misses: 1,008
==23839== LLi misses: 1,006
==23839== I1 miss rate: 0.00%
==23839== LLi miss rate: 0.00%
==23839==
==23839== D refs: 68,705,251 (37,868,809 rd + 30,836,442 wr)
==23839== D1 misses: 1,416,027 ( 1,402,129 rd + 13,898 wr)
==23839== LLd misses: 11,055 ( 5,880 rd + 5,175 wr)
==23839== D1 miss rate: 2.0% ( 3.7% + 0.0% )
==23839== LLd miss rate: 0.0% ( 0.0% + 0.0% )
==23839==
==23839== LL refs: 1,417,035 ( 1,403,137 rd + 13,898 wr)
==23839== LL misses: 12,061 ( 6,886 rd + 5,175 wr)
==23839== LL miss rate: 0.0% ( 0.0% + 0.0% )</pre>
</blockquote>
<div>
As expected, we see the D1 miss rates ordered vEB < tree-order < in-order.</div>
<h3>
Real-world performance</h3>
<br />
It's almost impossible to make generic claims about real-world performance here. The actual cost-benefit argument will depend entirely on your combination of CPU, RAM, data set size and data access patterns. Nevertheless, in my case I'm testing with 64k packed entries of 4 bytes of key and 1 byte of data. (Note that due to a limitation of my implementation of the vEB layout, I require tree depths d = 2^(2^n) and it would involve significantly more effort to run these benchmarks with 4G of entries.)<br />
<blockquote class="tr_bq">
<pre style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">$ for i in inorder bfs vEB; do echo; echo; echo ======= $i =======; ./a.out $i; done
======= inorder =======
In Order
Searched first half of keyspace in 37566 usec.
Searched last half of keyspace in 32375 usec.
Searched whole keyspace in 32109 usec.
======= bfs =======
Breadth-First Order
Searched first half of keyspace in 16411 usec.
Searched last half of keyspace in 17368 usec.
Searched whole keyspace in 17736 usec.
======= vEB =======
vEB Order (traverser 24)
Searched first half of keyspace in 36477 usec.
Searched last half of keyspace in 38805 usec.
Searched whole keyspace in 38378 usec.</pre>
</blockquote>
<div>
So it looks as though despite vEB being a more cache-friendly layout, the cost of determining the index of nodes in a vEB layout tends to outweigh the benefits.<br />
<br /></div>
<h2>
Conclusion</h2>
<div>
I've been investigating vEB layouts for potentially turbo-charging a bunch of static lookup service code, so this is a pretty disappointing result in that regard. Still, all is not lost. BFS ordering is clearly a very big performance win for minimal computational cost. Also, the focus of my efforts so far has been exclusively on the CPU cache benefits vEB might provide, but even if these are nullified by the extra computational overheads, slower storage technologies such as CD, flash and disk should still benefit significantly from the vEB layout. This is doubly true for media with expensive seeks. I might have to run another set of similar benchmarks on various storage media. I imagine the results would be much more in vEB's favour if applied to spinning media.</div>
<br />
FYI: If you're interested in the cache configuration of your machine and you run linux, this is a great little trick I discovered in my Googling travels:<br />
<blockquote class="tr_bq">
<pre style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">$ grep . /sys/devices/system/cpu/cpu0/cache/index*/*</pre>
</blockquote>
<br />
<br />Gaijinhttp://www.blogger.com/profile/13150968867429127703noreply@blogger.com0tag:blogger.com,1999:blog-5619309154309075735.post-16537180876953781872012-03-25T19:47:00.003+11:002012-03-25T23:13:05.804+11:00How to do dirt-cheap, cloud-based, encrypted backups (Part 2)In my <a href="http://japanesesoapbox.blogspot.com.au/2012/03/how-to-do-your-own-dirt-cheap-cloud.html">previous post</a> describing the method I used to store data online, I referred to what I was doing as a "backup", but it would probably have been more accurate to call it a near-real-time off-site mirror. In this post, I cover the pitfalls of my previous system and describe my much-improved latest technique.<br />
<br />
<a name='more'></a>Firstly, using s3fs, encfs and lsyncd worked well for small data sets of several megabytes but when scaled up to 10GB of source code, uni assignments and random other files, the round trip time to S3 and all the overheads in the system really start to add up. My internet connection should theoretically be able to upload that data in 24 hours. At the rate it was running when I stopped it, it would have taken about 2-3 weeks!<br />
<br />
Secondly, there is the issue of download necessity. I will rarely, if ever, want to access the data I am uploading. It is, in all senses of the word, a backup. 99.9% of the time I shouldn't care about reading files and 100% of the time I shouldn't have to care about individual files.<br />
<br />
Thirdly, I don't like being bound to S3. At some stage in the future, a geographically dispersed group of friends and I will all contribute a pool of disks and provide our own hosting for backups.<br />
<br />
With all that in mind, my latest strategy is somewhat simpler. It involves using ZFS for a storage filesystem and zfs send + cron for incremental backups.<br />
<br />
<b>Version 1.5: tar + cron</b><br />
It's probably worth mentioning that old-school GNU tar can also replace zfs send here. The <a href="http://www.gnu.org/software/tar/manual/html_node/Incremental-Dumps.html">--listed-incremental flag</a> for tar makes it trivial to do incremental backups, and these can be encrypted via openssl or whatever tool you like and uploaded along with the shar checkpoint file for super-trivial backups. No need for encfs. No need for lsyncd. You can also trivially make use of compression to get the most out of your uplink. Something like:<br />
<br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: x-small;">$ tar --listed-incremental=/mnt/s3fs/2012.shar -cjf /mnt/`date +"%Y%m%d"`.tar ~/secure_files</span><br />
<br />
I started down this approach but the lack of local snapshots didn't make me happy. This system solves the "house burned down while I was out" scenario but not the "sleep-deprived programmer deletes his last 12 hours of work by mistake" scenario.<br />
<br />
<b>Version 2.0: ZFS to the rescue!</b><br />
I've used <a href="http://en.wikipedia.org/wiki/ZFS">ZFS</a> before with great success. It's an excellent filesystem that makes snapshots, de-duplication, compression, raid(ish), and multi-device configurations much easier to deal with. The <i style="font-weight: bold;">only </i>issue I had with it is that I use Linux at home, not Solaris / FreeBSD, and the FUSE version of ZFS is not well maintained and has FUSE-related performance bottlenecks. Given I have very limited space for hardware where I live, I was contemplating a complex FreeBSD-based xen dom0 host system to run an NFS-exported ZFS filesystem and a Linux domU that I'd use for day-to-day computing. The fact that I was considering such a complex mess shows, I guess, how desperate I was to find a solution. In any case, it was about this time that I stumbled across the wonderful <a href="http://zfsonlinux.org/">zfsonlinux project</a> that seems to have resolved the legal issues stopping ZFS integration with the Linux kernel! From their website:<br />
<blockquote class="tr_bq">
The ZFS code can be modified to build as a CDDL licensed kernel module which is <em>not distributed</em> as part of the Linux kernel. This makes a Native ZFS on Linux implementation possible if you are willing to <a href="http://github.com/zfsonlinux/zfs/downloads">download</a> and <a href="http://zfsonlinux.org/zfs-building-rpm.html">build it</a> yourself.</blockquote>
Great! So I did. And its performance is quite impressive! I get the performance I'd expect from FreeBSD or Solaris (50+MB/sec for a single drive, and slightly less than double for two drives) and none of the CPU bottleneck issues I had with the FUSE version years ago. So, with a stable ZFS available for Linux, I threw all the terabytes of storage I could fit into my desktop and set about copying over my data. Now I can set up all the periodic local snapshots I want!<br />
<br />
To perform my snapshotting and upload my daily deltas, I've written a small shell script as follows:<br />
<br />
<div style="padding-left: 16px;">
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">#!/bin/bash</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"></span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">#</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"># Triggers rolling periodic snapshots of ZFS filesystems</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">#</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"># Mode (first argument) can be one of DAY,HOUR,MINUTE.</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"># The mode dictates the actual operations performed.</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><br /></span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">MODE=$1</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">FILESYSTEMS="$2"</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">S3PATH="/mnt/s3fs"</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">S3BUCKET="my_bucket_name"</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">S3PASSFILE="/path/to/.password-s3fs"</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">ZFSPASSFILE="/path/to/.password-zfsbackup"</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><br /></span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">if [ "x$MODE" == "x" ] || [ "x$FILESYSTEMS" == "x" ]; then</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> echo "Usage: $0 <MODE> <FILESYSTEMS>"</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> echo "Triggers rolling ZFS snapshots for configured filesystems."</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> echo</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> echo "MODE should be one of DAY,HOUR,MINUTE"</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> echo</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> exit 1</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">fi</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><br /></span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">case $MODE in</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> "DAY" )</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> SNAPSHOT=`date +"day_%Y%m%d"`</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> ;;</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> "HOUR" )</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> SNAPSHOT=`date +"hour_%H"`</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> ;;</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> "MINUTE" )</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> SNAPSHOT=`date +"minute_%M"`</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> ;;</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> *)</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> echo "Invalid mode '$MODE' specified."</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> exit 1</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> ;;</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">esac</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><br /></span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">for fs in $FILESYSTEMS; do</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> # Remove previous snapshot (only relevant in hour and minute snapshots)</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> if [ "$MODE" != "DAY" ]; then</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> /sbin/zfs destroy $fs@$SNAPSHOT 1> /dev/null 2> /dev/null</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> fi</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> /sbin/zfs snapshot $fs@$SNAPSHOT</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">done</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><br /></span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"># For daily backups, we store a snapshot delta on S3 in encrypted form.</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">if [ "$MODE" == "DAY" ]; then</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> # Mount s3fs if need be.</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> if ! /bin/grep "s3fs $S3PATH" /proc/mounts 1> /dev/null; then</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> echo "Mounting S3FS"</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> /usr/local/bin/s3fs $S3BUCKET:/ $S3PATH -ouse_cache=/tmp,passwd_file=$S3PASSFILE</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> fi</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> for fs in $FILESYSTEMS; do</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> CHECKPOINT_FILE="$S3PATH/"`echo $fs | tr '/' '_'`".checkpoint"</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> OUTFILE="$S3PATH/"`echo $fs | tr '/' '_'`".$SNAPSHOT.lzma.aes256cbc"</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> if [ -f "$CHECKPOINT_FILE" ]; then</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> CHECKPOINT=`cat $CHECKPOINT_FILE`</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> /sbin/zfs send -i $fs@$CHECKPOINT $fs@$SNAPSHOT | lzma -z | openssl enc -aes-256-cbc -salt -pass file:$ZFSPASSFILE > $OUTFILE</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> else</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> /sbin/zfs send $fs@$SNAPSHOT | lzma -z | openssl enc -aes-256-cbc -salt -pass file:$ZFSPASSFILE > $OUTFILE</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> fi</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> echo $SNAPSHOT > $CHECKPOINT_FILE</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> done</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">fi</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"><br /></span></div>
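Before trusting the script with real snapshots, the lzma + openssl pipeline it uses can be sanity-checked with a round trip on dummy data (the throwaway password file and /tmp paths here are just for the test):

```shell
#!/bin/sh
# Round-trip the same compress + encrypt pipeline the backup script uses.
PASSFILE=$(mktemp)
echo "s3cret" > "$PASSFILE"

# Encrypt a dummy "snapshot delta" the way the script does.
echo "snapshot payload" \
    | lzma -z \
    | openssl enc -aes-256-cbc -salt -pass "file:$PASSFILE" > /tmp/test.lzma.aes256cbc

# Decrypting and decompressing must give back the original bytes.
RESTORED=$(openssl enc -d -aes-256-cbc -pass "file:$PASSFILE" < /tmp/test.lzma.aes256cbc | lzma -d)
echo "$RESTORED"
rm -f "$PASSFILE"
```

Restoring a real delta is the same decrypt/decompress pipeline piped into zfs receive.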
Now I just kick off this script from crontab every day, hour and minute:<br />
<br />
<div style="padding-left: 16px;">
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">0 0 * * * /root/snapshot.sh DAY "tank/home tank/photos"</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">0 * * * * /root/snapshot.sh HOUR "tank/home tank/photos"</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">* * * * * /root/snapshot.sh MINUTE "tank/home tank/photos"</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">0 * * * * /root/check_zpool.sh</span><br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">0 1 * * 0 /sbin/zpool scrub tank</span></div>
<br />
The <i>check_zpool.sh</i> script just emails me if<i> zpool status -x </i>returns anything other than "all pools are healthy".<br />
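I didn't include <i>check_zpool.sh</i> above, but a minimal sketch looks something like this (the use of the <i>mail</i> command and the destination address are assumptions; substitute whatever alerting you prefer):

```shell
#!/bin/sh
# check_zpool.sh - mail me when "zpool status -x" reports trouble.
MAILTO="root"   # assumed destination; change to a real address

check_status() {
    # Takes the output of "zpool status -x" as $1 so it can be exercised
    # without a real pool; returns non-zero when any pool is unhealthy.
    if [ "$1" != "all pools are healthy" ]; then
        if command -v mail > /dev/null 2>&1; then
            printf '%s\n' "$1" | mail -s "zpool problem on $(hostname)" "$MAILTO"
        fi
        return 1
    fi
    return 0
}

# Only query the real pool when run as the cron script itself.
if [ "$(basename "$0")" = "check_zpool.sh" ]; then
    check_status "$(/sbin/zpool status -x)"
fi
```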
<br />
That's it! Now that it's running, I don't know why it's taken me so long to set something like this up!<br />
<br />
It's also worth mentioning the other niceties we can potentially get with this command. If we want to keep a synchronized filesystem with a friend, we can use ssh and cron to push filesystem deltas every so often to a remote read-only copy of our filesystem using zfs send/receive! I'll probably give that a go at some stage soon as a means of sharing family photos with my parents and siblings and post about it here.Gaijinhttp://www.blogger.com/profile/13150968867429127703noreply@blogger.com2tag:blogger.com,1999:blog-5619309154309075735.post-4507051489438130562012-03-20T00:17:00.002+11:002012-03-25T22:23:00.662+11:00How to do dirt-cheap, cloud-based, encrypted backups (Part 1)I've been dabbling with code with the final aim of building <a href="https://github.com/aarond10/rawdishfs">a peer-to-peer distributed filesystem</a> for a while now and I seem to keep <a href="https://github.com/aarond10/mpio/commits/master">hitting</a> <a href="https://github.com/aarond10/epoll_threadpool">diversions</a> along the way. In an internal monologue somewhat along the lines of "What would MacGyver do?", the idea for this concoction of open-source software and cloud services was born. In about 20 minutes or so of messing about (assuming you're comfortable with Linux), I'll explain how you too can sleep sounder at night for $0.125/GB/month.<br />
<br />
<a name='more'></a>The basic idea is to store an encrypted copy of your files on Amazon S3 and use lsyncd (which uses the Linux inotify feature) to automatically push changes to your local files to "the cloud" as soon as you make them.<br />
<br />
If you're not fussed with encryption, you can certainly skip the EncFS step below but given <a href="http://www.wired.com/threatlevel/2011/05/dropbox-ftc/">I don't trust cloud storage providers not to snoop on my files</a>, I would encourage others to go the extra yard and run with EncFS too.<br />
<br />
Now assuming you're running Ubuntu, the easiest way to get things off the ground is:<br />
<ol>
<li>Sign up for <a href="https://aws-portal.amazon.com/">Amazon AWS</a>. It takes a few minutes for the S3 setup process to complete so do this first if you haven't already.</li>
<li>Install all the apt-based software you'll need:<br />
<span style="font-family: 'Courier New', Courier, monospace; font-size: x-small;">$ sudo apt-get install build-essential encfs libfuse-dev lsyncd fuse-utils libcurl4-openssl-dev libxml2-dev mime-support</span>
<br />
<ol></ol>
</li>
<li>Download, build and install <a href="http://code.google.com/p/s3fs/wiki/FuseOverAmazon">s3fs</a>:<br /><span style="font-family: 'Courier New', Courier, monospace; font-size: x-small;">$ wget http://s3fs.googlecode.com/files/s3fs-1.61.tar.gz<br />$ tar xzf s3fs-1.61.tar.gz<br />$ cd s3fs-1.61<br />$ ./configure && make && sudo make install</span></li>
<li>By now you should have your Amazon Access Key. Create ~/.password-s3fs as follows:<br /><span style="font-family: 'Courier New', Courier, monospace; font-size: x-small;">$ echo ACCESSKEY:SECRETKEY > ~/.password-s3fs<br />$ chmod 600 ~/.password-s3fs</span></li>
<li>Head over to the <a href="https://console.aws.amazon.com/s3/home?#">AWS Console</a> and create an S3 bucket for yourself then run:<br /><span style="font-family: 'Courier New', Courier, monospace; font-size: x-small;">$ sudo mkdir /mnt/s3fs && sudo s3fs mybucket /mnt/s3fs -ouse_cache=/tmp,passwd_file=~/.password-s3fs</span></li>
<li>Create a new encrypted filesystem using s3fs as the storage point:<br /><span style="font-family: 'Courier New', Courier, monospace; font-size: x-small;">$ sudo mkdir /mnt/encfs && sudo encfs /mnt/s3fs /mnt/encfs<br /><br />New Encfs Password:<br />Verify Encfs Password: </span></li>
<li>Almost there. Now just set up the directory you want synced:<br /><span style="font-family: 'Courier New', Courier, monospace; font-size: x-small;">$ mkdir ~/secure_files && lsyncd -rsync ~/secure_files /mnt/encfs</span></li>
<li><span style="font-family: 'Courier New', Courier, monospace; font-size: x-small;"><span style="font-family: 'Times New Roman'; font-size: small;">You're done! Just start dumping your files into ~/secure_files and they'll be encrypted and uploaded to the cloud. </span></span></li>
</ol>
For 10 GB I expect to pay about $20 USD a year. Obviously your mileage will vary depending on how active you are in your backup directory and how much data you have.<br />
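As a rough sanity check on that figure, storage alone at the quoted $0.125/GB/month works out as follows; the remainder is request and transfer fees:

```shell
#!/bin/sh
# Yearly S3 storage cost for 10 GB at $0.125/GB/month (storage only).
yearly=$(awk 'BEGIN { printf "%.2f", 10 * 0.125 * 12 }')
echo "\$$yearly/year before request and transfer fees"
```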
<div>
<br />
Automating this shebang is clearly something that you'll want to do but I'll leave that as an exercise for the reader. If you come up with any clever shortcuts or additions to this, I'd love to hear about them.</div>Gaijinhttp://www.blogger.com/profile/13150968867429127703noreply@blogger.com2tag:blogger.com,1999:blog-5619309154309075735.post-90468345905670698472012-02-15T00:12:00.000+11:002012-02-15T00:12:22.326+11:00NSW Property price heatmapI've been considering buying a property of late but without a strong grasp of the greater Sydney area's geography I found I couldn't really judge whether a price was appropriate or not for a given suburb. Seeing as how much I love data, I threw together <a href="http://www.aarondrew.com/property/">Property Hot Spots</a>. It's backed by CouchDB and, aside from the initial tile generation, is essentially static. It could do with a design cleanup and perhaps a canvas-based heatmap that isn't so ugly but it served its purpose so I figured I'd post it here in case others find it useful. The newest data displayed here is from Jan 2012. The oldest is from a few years back. I might add date of sale somehow in a future change if I can find an uncluttered way to display it. :)<div>
<br /></div>
<div>
<a href="http://www.aarondrew.com/property/">Property Hot Spots</a></div>Gaijinhttp://www.blogger.com/profile/13150968867429127703noreply@blogger.com0tag:blogger.com,1999:blog-5619309154309075735.post-20998321740485324512012-02-10T01:51:00.000+11:002012-02-10T01:51:02.965+11:00How to read Japanese books above your Kanji levelAfter moving back to Australia, my kanji has gone seriously downhill. Some friends from Japan recently brought over the three books of Haruki Murakami's 1Q84 for me to study with and I quickly found I was utterly useless on my own without a dictionary.<br />
<br />
It's a chicken-and-egg problem that I presume many other scholars of Japanese also have. Books with simple kanji are often targeted at younger audiences and can be painfully boring for adults. Books targeted at adults assume a kanji level much higher than I currently have. The end result is that you're worn down either by the content you're reading or by constantly turning to your dictionary.<br />
<br />
So my new Japanese study pipeline involves:<br />
<br />
<ol><li>Scan a chapter at 300 DPI greyscale.</li>
<li>Run OCR over it.</li>
<li>Correct any errors. (I am seeing about a 95-98% accuracy rate so this is quick.)</li>
<li>Save the text as UTF-8 HTML.</li>
<li>Use Rikai-kun as necessary to get super-quick dictionary lookups when you need them.</li>
</ol><div>This process is not as involved as it sounds. It's probably about 30-60 seconds per page on average amortized across all the conversion tasks. I would have spent MUCH longer than this fumbling with my dictionary if I'd tried to read it the traditional way. </div><div><br />
</div><div>Also as a bonus, I am now building up a personal digital copy I can carry with me much more easily than the three hardcover books on the shelf!</div>Gaijinhttp://www.blogger.com/profile/13150968867429127703noreply@blogger.com0tag:blogger.com,1999:blog-5619309154309075735.post-25406483957606607182012-01-07T18:21:00.000+11:002012-01-07T18:21:56.895+11:00So you want to get your ustream video off your iPhone?UStream.tv is great for streaming events to distant relatives and such. My wife and I used their iPhone app to great success for our wedding last year. Unfortunately, for videos that we were able to record but not stream due to limited WiFi access, we have had no luck getting the incredibly slow uploads to actually complete. After a year and numerous attempts, I thought I'd find my own way to get these videos off the device without resorting to jailbreaking, etc.<br />
<br />
Ustream sends an HTTP request to ask for the location of an FTP server to upload your videos to. We will pretend to be that server.<br />
<br />
1. On an Ubuntu box (call it 192.168.1.10):<br />
<br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> $ sudo apt-get install ftpd wireshark</span><br />
<br />
2. Find or set up a Linux router between you and the internet and run this on it:<br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace; font-size: x-small;"><br />
</span><br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> $ iptables -t nat -I PREROUTING -p tcp --dport 21 -j DNAT --to-destination 192.168.1.10</span><br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> $ iptables -t nat -I PREROUTING -p tcp -d 74.217.100.0/24 -j REDIRECT --to-port 21</span><br />
<br />
The first line sends all FTP traffic to your local FTP server instead of wherever it was originally headed.<br />
The second line redirects any port on the Ustream FTP server's subnet (correct at the time of writing; ping red37.ustream.tv to check) to port 21.<br />
Together they make sure all the traffic the Ustream app tries to send home gets sent to your Ubuntu box instead.<br />
<br />
3. Now you either a. mess with your FTP server to allow all usernames / passwords to work (edit the source code, mess with authentication modules, etc.) or b. do as I did and run tcpdump on your Linux router:<br />
<br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> $ sudo tcpdump -i eth0 "src iphone_ip or dst iphone_ip" -s 2000 -w iphone.pcap</span><br />
<br />
3a. Go to your iPhone and try to upload a video. It will get stuck at Uploading 0%. Cancel the upload. Ctrl-C tcpdump and open the file on your Ubuntu workstation with Wireshark.<br />
<br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> $ wireshark iphone.pcap</span><br />
<br />
3b. The username and password Ustream uses will be in a binary blob in the response to a request sent to http://rgw.ustream.tv/gateway.php, which you should find in your iphone.pcap log file.<br />
<br />
3c. Run off and create the users as required:<br />
<br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> $ sudo useradd -m 1_12345_12345</span><br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> $ sudo passwd 1_12345_12345</span><br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> New password: .....</span><br />
<br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;"> New password (again): .....</span><br />
<div><br />
</div><br />
4. Go to your phone, hit "Upload".<br />
<br />
5. Profit! (Or simply savour your new-found ability to watch your precious videos wherever you like!)Gaijinhttp://www.blogger.com/profile/13150968867429127703noreply@blogger.com0tag:blogger.com,1999:blog-5619309154309075735.post-19234127906444805142011-12-07T22:49:00.000+11:002011-12-07T22:52:41.087+11:00epoll_threadpoolI set out a few months back to build a distributed filesystem with features similar to dropbox but with 100% encrypted storage that is shared amongst friends in a "dark-net" of sorts. In the process, I found myself wanting a fast, light-weight RPC system and, in turn, a fast, light-weight event queuing system. After fighting with libevent and msgpack-rpc, I eventually decided to write my own and epoll_threadpool was born.<br />
<br />
The library is still in its infancy but I don't expect it to grow much (if at all) in size. It's Linux-only (epoll-based), its dependencies are light, speed should be reasonable and all tests are passing, so I thought I'd throw it out to the world to see if anyone finds it useful.<br />
<br />
Features:<br />
<br />
<ul><li>epoll-based</li>
<li>runs a thread pool, executing events on the first available thread.</li>
<li>easy-to-use IOBuffer class for streaming data.</li>
<li>TCP client and server support.</li>
</ul><div>Available on GitHub as <a href="https://github.com/aarond10/epoll_threadpool">epoll_threadpool</a>.</div>Gaijinhttp://www.blogger.com/profile/13150968867429127703noreply@blogger.com0tag:blogger.com,1999:blog-5619309154309075735.post-70884390349639472472011-10-30T16:18:00.000+11:002011-10-30T16:25:03.311+11:00Qantas RantQantas rant! I also object to Mr Joyce's pay hike but let's put things in real numbers with respect to union demands. Pilots already get paid (in my opinion) WAY more than they should thanks to strong unionism and after a 17% hike - way more than inflation - a second hike seems plain greedy.<br />
<br />
A senior pilot's wage is around $500k according to <a href="http://finance.ninemsn.com.au/newsbusiness/aap/8254213/qantas-pilots-demand-perks-and-pay-rises">this</a> and presumably a senior co-pilot would be around $350k. According to <a href="http://www.bls.gov/oco/ocos107.htm">this</a> most pilots work 215hrs a month (75 hours of that flying). In hourly rates, that's $193/hr for pilots and $135 for co-pilots.<br />
<br />
If a plane flies 80% of the time, there are 86.4 of Qantas's 108 grounded planes in the air at any given time with a pilot and co-pilot each. That's an operational cost of $28339.20 an hour (24/7) or $248 million a year. A 2.5% pay increase for pilots will thus cost the airline $6.2 million in flight time alone. That doesn't cover ground-based preparation and other duties.<br />
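For anyone who wants to check my arithmetic, the figures fall straight out of the assumptions above (108 planes, 80% utilisation, $193/hr and $135/hr, 24/7 operation):

```shell
#!/bin/sh
# Re-derive the cost figures from the assumptions above.
awk 'BEGIN {
    in_air   = 108 * 0.8            # average planes airborne: 86.4
    per_hour = in_air * (193 + 135) # pilot + co-pilot cost per hour
    per_year = per_hour * 24 * 365
    raise    = per_year * 0.025     # cost of a 2.5% pay rise
    printf "per hour: $%.2f, per year: $%.0fM, 2.5%% rise: $%.1fM\n",
           per_hour, per_year / 1e6, raise / 1e6
}'
```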
<br />
There are no parties here that are NOT being greedy but the union is trying to tarnish the brand in order to push management around. Strike action is their biggest hammer and they've been swinging it around with way too little regard for the people they hit with it for way too long. If pilots are not happy with their conditions they should vote with their feet (out the door). If their salaries are as average as they claim for senior pilots, they shouldn't have trouble finding work elsewhere.<br />
<br />
As for the CEO's salary, keep separate issues separate. That's a topic for another rant.Gaijinhttp://www.blogger.com/profile/13150968867429127703noreply@blogger.com0tag:blogger.com,1999:blog-5619309154309075735.post-38554424501004969612011-09-05T23:56:00.000+10:002011-10-07T23:15:05.686+11:00Can OpenWRT save my Netgear DGN2000?I made the mistake of buying a Netgear DGN2000 ADSL2+ modem when I first got ADSL. Not only does it not support IPv6, its WiFi range is pathetic, bridging between WiFi and ethernet seems to die after several days of operation, and under heavy load the device hard-crashes, requiring a reboot. I suspect the crashing might be due to poor thermal design and perhaps my specific device but clearly the software is not blameless either. Given my experiences, I would <b>NEVER</b> recommend this device. But, now that I have one that I can't return, I'm going to document the process of installing OpenWRT to see if I can give this thing a new lease on life.<br />
<br />
<a name='more'></a><br />
Quick aside: I tried to download the official source code from NetGear. If you follow the links to the source on <a href="http://support.netgear.com/app/answers/detail/a_id/2649/~/gpl-open-source-code-for-programmers">their support site</a> you get to <a href="http://support.netgear.com/app/answers/detail/a_id/19333">this page</a>. If you email them, you get this response:<br />
<span class="Apple-style-span" style="background-color: white;"></span><br />
<blockquote style="font-family: arial, sans-serif; font-size: 13px;"><b><span style="color: #000066; font-family: Arial; font-size: small;">Delivery has failed to these recipients or groups:</span></b><a href="mailto:opensourcesw@netgear.com" style="color: #1c51a8;" target="_blank">opensourcesw@netgear.com</a><i><span class="Apple-style-span" style="background-color: white; font-style: normal;"><span style="color: black; font-family: Arial; font-size: small;">Your message can't be delivered because delivery to this address is restricted.</span></span><span class="Apple-style-span" style="font-family: arial, sans-serif; font-size: x-small;"> </span></i></blockquote>Is this a poor attempt to dodge GPL obligations by making users jump through hoops to nowhere? At the very least it's extremely poor after-sales service.<br />
<i><br />
</i><br />
On to business... <i><b>I take no responsibility for you frying your box, etc, etc.. </b></i><br />
<br />
Before beginning, I wanted to back up the existing firmware in case things go horribly wrong. This router has a debug mode that will enable telnet access by visiting <u><span class="Apple-style-span" style="color: blue;"><a href="http://192.168.0.1/setup.cgi?todo=debug">http://192.168.0.1/setup.cgi?todo=debug</a></span></u>:<br />
<blockquote><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace; font-size: x-small;">Debug Enable!</span> </blockquote>Done! Now we can telnet in:<br />
<blockquote><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace; font-size: x-small;">$ telnet 192.168.0.1<br />
Trying 192.168.0.1...<br />
Connected to 192.168.0.1.<br />
Escape character is '^]'.<br />
login: admin<br />
Password:<b><your password></b><br />
<br />
BusyBox v1.00 (2009.08.03-11:30+0000) Built-in shell (ash)<br />
Enter 'help' for a list of built-in commands.</span></blockquote><blockquote><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace; font-size: x-small;"># dmesg<br />
Linux version 2.6.8.1 (root@suzhou-build-server) (gcc version 3.4.2) #1 Mon Aug 3 18:12:28 CST 2009<br />
Parallel flash device: name AM29LV320MB, id 0x2200, size 4096KB<br />
96348W3 prom init<br />
CPU revision is: 00029107<br />
Determined physical RAM map:<br />
memory: 00fa0000 @ 00000000 (usable)<br />
On node 0 totalpages: 4000<br />
DMA zone: 4000 pages, LIFO batch:1<br />
Normal zone: 0 pages, LIFO batch:1<br />
HighMem zone: 0 pages, LIFO batch:1<br />
Built 1 zonelists<br />
Kernel command line: root=31:0 ro noinitrd<br />
brcm mips: enabling icache and dcache...<br />
Primary instruction cache 16kB, physically tagged, 2-way, linesize 16 bytes.<br />
Primary data cache 8kB 2-way, linesize 16 bytes.<br />
PID hash table entries: 64 (order 6: 512 bytes)<br />
Using 120.000 MHz high precision timer.<br />
Dentry cache hash table entries: 4096 (order: 2, 16384 bytes)<br />
Inode-cache hash table entries: 2048 (order: 1, 8192 bytes)<br />
Memory: 13696k/16000k available (1454k kernel code, 2284k reserved, 222k data, 84k init, 0k highmem)<br />
Calibrating delay loop... 239.20 BogoMIPS<br />
Mount-cache hash table entries: 512 (order: 0, 4096 bytes)<br />
Checking for 'wait' instruction... unavailable.<br />
NET: Registered protocol family 16<br />
Total Flash size: 4096K with 71 sectors<br />
File system address: 0xbfc30100<br />
No flash for scratch pad!<br />
Can't analyze prologue code at 8017a074<br />
devfs: 2004-01-31 Richard Gooch (rgooch@atnf.csiro.au)<br />
devfs: boot_options: 0x1<br />
PPP generic driver version 2.4.2<br />
NET: Registered protocol family 24<br />
IMQ starting with 2 devices...<br />
IMQ driver loaded successfully.<br />
Hooking IMQ before NAT on PREROUTING.<br />
Hooking IMQ after NAT on POSTROUTING.<br />
Using noop io scheduler<br />
bcm963xx_mtd driver v1.0<br />
kernel_addr == 0xbff73100 rootfs_addr == 0xbfc30100<br />
Physically mapped flash: Found 1 x16 devices at 0x0 in 16-bit bank<br />
Amd/Fujitsu Extended Query Table at 0x0040<br />
number of CFI chips: 1<br />
cfi_cmdset_0002: Disabling erase-suspend-program due to code brokenness.<br />
Creating 6 MTD partitions on "Physically mapped flash":<br />
0x00030100-0x00373100 : "fs"<br />
mtd: partition "fs" doesn't start on an erase block boundary -- force read-only<br />
0x00030000-0x00400000 : "tag+fs+kernel"<br />
0x00000000-0x00010000 : "bootloader"<br />
0x00020000-0x00030000 : "nvram"<br />
0x00000000-0x00010000 : "bootloader"<br />
0x00010000-0x00020000 : "DPF_file"<br />
brcmboard: brcm_board_init entry<br />
SES: LED GPIO 0x8022 is enabled<br />
Serial: BCM63XX driver $Revision: 3.00 $<br />
ttyS0 at MMIO 0xfffe0300 (irq = 10) is a BCM63XX<br />
Broadcom BCMPROCFS v1.0 initialized<br />
NET: Registered protocol family 2<br />
IP: routing cache hash table of 512 buckets, 4Kbytes<br />
TCP: Hash tables configured (established 512 bind 1024)<br />
ip_conntrack version 2.1 (125 buckets, 0 max) - 384 bytes per conntrack<br />
ip_conntrack_h323: init<br />
ip_conntrack_rtsp v0.01 loading<br />
ip_nat_h323: initialize the module!<br />
ip_nat_rtsp v0.01 loading<br />
ip_tables: (C) 2000-2002 Netfilter core team<br />
NET: Registered protocol family 1<br />
NET: Registered protocol family 17<br />
NET: Registered protocol family 8<br />
NET: Registered protocol family 20<br />
VFS: Mounted root (squashfs filesystem) readonly.<br />
Mounted devfs on /dev<br />
Freeing unused kernel memory: 84k freed<br />
Algorithmics/MIPS FPU Emulator v1.5<br />
bcm_enet: module license 'Proprietary' taints kernel.<br />
Broadcom BCM6348B0 Ethernet Network Device v0.3 Aug 3 2009 18:11:24<br />
Config Ethernet Switch Through MDIO Pseudo PHY Interface<br />
ethsw: found bcm5325e!<br />
dgasp: kerSysRegisterDyingGaspHandler: eth0 registered<br />
eth0: MAC Address: 30:46:9A:2A:10:28<br />
blaadd: blaa_detect entry<br />
adsl: adsl_init entry<br />
netfilter PSD loaded - (c) astaro AG<br />
ipt_random match loaded<br />
device eth0 entered promiscuous mode<br />
BcmAdsl_Initialize=0xC00733A8, g_pFnNotifyCallback=0xC008C2A4<br />
AnnexCParam=0x7FFF7E68 AnnexAParam=0x00003987 adsl2=0x00000003<br />
pSdramPHY=0xA0FFFFF8, 0xFFFFFDFF 0xFFFFFFFF<br />
AdslCoreHwReset: AdslOemDataAddr = 0xA0FFA4D4<br />
AnnexCParam=0x7FFF7E68 AnnexAParam=0x00003987 adsl2=0x00000003<br />
dgasp: kerSysRegisterDyingGaspHandler: dsl0 registered<br />
ATM proc init !!!<br />
PCI: Setting latency timer of device 0000:00:01.0 to 64<br />
PCI: Enabling device 0000:00:01.0 (0004 -> 0006)<br />
wl: srom not detected, using main memory mapped srom info (wombo board)<br />
wl0: wlc_attach: use mac addr from the system pool by id: 0x776c0000<br />
wl0: MAC Address: 30:46:9A:2A:10:28<br />
wl0: Broadcom BCM4322 802.11 Wireless Controller 4.174.64.12.cpe1.1<br />
dgasp: kerSysRegisterDyingGaspHandler: wl0 registered<br />
br0: port 1(eth0) entering learning state<br />
br0: topology change detected, propagating<br />
br0: port 1(eth0) entering forwarding state<br />
AnnexCParam=0x7FFF7E88 AnnexAParam=0x00003987 adsl2=0x00000003<br />
ATM proc init !!!<br />
ADSL G.994 training<br />
ADSL G.992 started<br />
ADSL G.992 channel analysis<br />
ADSL G.992 message exchange<br />
ADSL link down<br />
ADSL G.994 training<br />
ADSL G.992 started<br />
ADSL G.992 channel analysis<br />
ADSL G.992 message exchange<br />
ADSL link up, interleaved, us=1022, ds=16200</span></blockquote><div>First backup the original flash contents:</div><div><blockquote><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace; font-size: x-small;"># dd if=/dev/mtdblock/1 of=/tmp/mtd1.bin<br />
7808+0 records in<br />
7808+0 records out<br />
# cd /tmp<br />
# mini_httpd -p 1080</span></blockquote> Download <a href="http://192.168.0.1:1080/mtd1.bin">http://192.168.0.1:1080/mtd1.bin</a>.<br />
<blockquote><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace; font-size: x-small;"># rm mtd1.bin<br />
# dd if=/dev/mtdblock/0 of=/tmp/mtd0.bin<br />
6680+0 records in<br />
6680+0 records out</span></blockquote>Download <a href="http://192.168.0.1:1080/mtd0.bin">http://192.168.0.1:1080/mtd0.bin</a>.<br />
<blockquote><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace; font-size: x-small;"># rm mtd0.bin<br />
# dd if=/dev/mtdblock/2 of=/tmp/mtd2.bin<br />
128+0 records in<br />
128+0 records out<br />
# dd if=/dev/mtdblock/3 of=/tmp/mtd3.bin<br />
128+0 records in<br />
128+0 records out<br />
# dd if=/dev/mtdblock/4 of=/tmp/mtd4.bin<br />
128+0 records in<br />
128+0 records out<br />
# dd if=/dev/mtdblock/5 of=/tmp/mtd5.bin<br />
128+0 records in<br />
128+0 records out</span></blockquote></div>Download <a href="http://192.168.0.1:1080/mtd2.bin">http://192.168.0.1:1080/mtd2.bin</a>, <a href="http://192.168.0.1:1080/mtd3.bin">http://192.168.0.1:1080/mtd3.bin</a>, <a href="http://192.168.0.1:1080/mtd4.bin">http://192.168.0.1:1080/mtd4.bin</a>, <a href="http://192.168.0.1:1080/mtd5.bin">http://192.168.0.1:1080/mtd5.bin</a>.<br />
<div><br />
</div><div>This router uses <a href="http://en.wikipedia.org/wiki/Common_Firmware_Environment">CFE</a>. A glance over these flash files shows:</div><div><ul><li>mtd0 contains a squashfs filesystem of some kind (~3MB). </li>
<li>mtd1 contains a squashfs filesystem image in CFE format (~3.5MB). The string "SeCoMm" at the end of this file makes me suspect this is just a rebadged <a href="http://www.sercomm.com/SWI/GUI/getProductDetail.html">SerComm device</a> - yet another reason to steer clear of this device (...if you needed another one).</li>
<li>mtd2 contains what looks like a bootloader and/or arguments (64KB).</li>
<li>mtd3 contains local system settings (64KB).</li>
<li>mtd4 contains a backup of mtd2 (64KB).</li>
<li>mtd5 is empty (0xff...) (64KB)</li>
</ul></div><div>As others have reported, this is similar to the DG834GT device. We've got BCM6348B0 ethernet, Broadcom WiFi and ADSL. So it's time to give the OpenWrt DG834GT firmware a spin! Download the OpenWrt trunk and build a custom DG834GT firmware: </div><blockquote><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace; font-size: x-small;">$ svn co svn://svn.openwrt.org/trunk</span></blockquote><blockquote><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace; font-size: x-small;">$ cd trunk<br />$ make menuconfig<br />
Select BCM63xx target<br />
Select Image builder.<br />
Select BCM6348B0 network (built-in)<br />
$ make<br />
$ ls bin/bcm63xx<br />
...<br />
openwrt-DG834GT_DG834PN-jffs2-128k-cfe.bin<br />
openwrt-DG834GT_DG834PN-jffs2-64k-cfe.bin<br />
openwrt-DG834GT_DG834PN-squashfs-cfe.bin<br />
...</span></blockquote><br />
<div>I've run out of time tonight and don't want to brick my modem before bed so more firmware flipping fun tomorrow. Fingers crossed!<br />
<br />
<b>Edit: </b>Sadly my modem died before I got a chance to finish this (R.I.P you P.O.S.) and given the low build quality I was not interested in replacing it with the same model. My next steps were going to be to attempt to flash the squashfs-cfe.bin file to the device. If that worked OK and I didn't screw up the ethernet driver options, then look at getting wifi and adsl working. Best of luck and if you give it a go I'd love to hear how you get on.<br />
<br />
<b>Edit (20111007):</b> My shiny new Linksys (Cisco) WAG160Nv2 looks to be yet another crappy SerComm device! This time I get poor WiFi performance and random reboots in addition to running very hot. This is better than a hard lock-up but not by much. Sigh... I've switched back to my faithful DLink DIR-600 WiFi AP running OpenWRT and PPPoE. The Linksys is just running in bridged mode as a modem. Now I have another brand to boycott. Seriously Cisco, I wish you could explain why you bought a decent consumer brand and turned it into a steaming pile of crap... My ancient WRT54G was a brilliant, rock solid device.</div>Gaijinhttp://www.blogger.com/profile/13150968867429127703noreply@blogger.com2tag:blogger.com,1999:blog-5619309154309075735.post-42495692607654243932011-08-24T13:55:00.000+10:002011-08-24T13:55:57.316+10:00Python's Global LockFor years I've heard (and occasionally been part of) religious Perl vs Python language debates. I've always been firmly on the Python side of the fence. While I have made some great use of Perl in my time, I love being able to actually re-read my code without having to decipher it first. Python has always seemed like a much easier language to work in to me.<br />
<br />
So, given my firm stance in this debate, I'm a little shocked after seeing<br />
<a href="http://blip.tv/carlfk/mindblowing-python-gil-2243379">Carl Karsten's video</a>. Want to know why Ctrl-C doesn't work in threaded Python apps? Why threading in Python can actually slow you down? Why not all Python C bindings are born equal? If you have even a general interest in Python, I greatly recommend checking out this talk. It'll shake your faith. :)Gaijinhttp://www.blogger.com/profile/13150968867429127703noreply@blogger.com0tag:blogger.com,1999:blog-5619309154309075735.post-51974040543120267802011-04-26T10:23:00.000+10:002011-04-26T10:23:31.540+10:00Blog movedMaintaining my own WordPress instance was getting tiresome so I've switched to Blogger. Article information should still work just fine but comments are missing. My apologies!Gaijinhttp://www.blogger.com/profile/13150968867429127703noreply@blogger.com0