It took over a week of pdfcrack running on my desktop, but it finally found the 6-character password that was set on a PDF. Before we changed from Kaiser to Blue Shield, I had Owen’s medical record produced for us. It was not cheap either, and it came with a password set on the PDF. I had saved the PDF to my Google Drive, but I apparently never unset the password.
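For reference, a brute-force run over every 6-character password looks something like this (the filename is made up; `-n` and `-m` are pdfcrack's minimum and maximum password length options):

```shell
# Hypothetical invocation: exhaustively try 6-character passwords
# against the password on record.pdf (filename is an example).
pdfcrack -f record.pdf -n 6 -m 6
```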

fedora and broadcom nics


I think I have come to terms with the fact that Dell servers ship with exceptionally poor onboard devices. The cheaper Broadcom ethernet controllers have had a less than stellar track record across both BSD and Linux systems here. Before I Google it yet again, this is the boot option required to install Fedora 20 onto a Dell server with a Broadcom controller: modprobe.blacklist=cnic,bnx2i. This keeps the installer from loading the kernel modules that crash Anaconda.
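Once the install finishes, the same modules can be kept from loading on the installed system with a modprobe.d entry (a sketch; the filename is arbitrary):

```shell
# Persist the blacklist on the installed system so the
# troublesome modules stay out of the way after first boot.
cat >> /etc/modprobe.d/broadcom-blacklist.conf <<'EOF'
blacklist cnic
blacklist bnx2i
EOF
```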

Flipping to Amazon


A good friend of mine has been nice enough to host this site for me. When I moved to the beautiful redwoods, it became apparent that with the fluctuating power situation, I would no longer have the same kind of uptime. With the transition, Christopher was also nice enough to let me host my site over HTTPS. I am one of those people who sort of believes in encryption everywhere, even though my site does not transmit any user data.

thermal grease


Before and After

After rebuilding my home server, my primary fear was death by overheating. The first utility I set up was Munin, along with loading the thermal sensor KLDs:

root@server:~ # kldload coretemp
root@server:~ # kldstat
Id Refs Address            Size    Name
 1   12 0xffffffff80200000 179ddd8 kernel
 2    1 0xffffffff8199e000 4a50    coretemp.ko
 3    1 0xffffffff81a11000 1f1698  zfs.ko
 4    1 0xffffffff81c03000 3331    opensolaris.ko
root@server:~ # dmesg
Jan 24 16:35:20 server kernel: coretemp0: <CPU On-Die Thermal Sensors> on cpu0
Jan 24 16:35:20 server kernel: coretemp1: <CPU On-Die Thermal Sensors> on cpu1
Jan 24 16:36:21 server kernel: ichsmb0: <Intel 82801GB (ICH7) SMBus controller> port 0x4000-0x401f irq 19 at device 31.
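With coretemp loaded, the readings Munin graphs are also available directly from sysctl on FreeBSD:

```shell
# Per-core die temperature exposed by coretemp(4).
sysctl dev.cpu.0.temperature
sysctl dev.cpu.1.temperature
```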

ceph-ocalypse


This story began with upgrading Ceph from Hammer to Infernalis, thinking it was the next stable release. The upgrade was a little rough; there were some major changes to Ceph, a big one being that it switched file ownership from root to ceph, an unprivileged account. With some downtime, we ran a recursive chown across all OSD volumes (we spun up a tmux session for each drive, 34 drives in each server).
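The per-drive chown step looked roughly like this (a sketch assuming the default /var/lib/ceph/osd layout; the session names are made up):

```shell
# One detached tmux session per OSD so the 34 chowns run in parallel.
for osd in /var/lib/ceph/osd/ceph-*; do
  id="${osd##*-}"
  tmux new-session -d -s "chown-$id" "chown -R ceph:ceph '$osd'"
done
```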

new workshop


When we first moved to our place, there was an existing shed on the property: Old Shed... It was in pretty bad shape: a tree branch had crushed part of the roof, the plywood foundation had rotted away, and most of the organic material inside was moldy. I suited up in the nice and humid mid-June weather and took it apart (which is also when I found out how bad the mosquitoes are in the area…), leaving a giant mess of sheet metal, tiny nuts and bolts, and decomposed wood.

sump pump


There is something strange about moving to a new area. Everything is different; even the bugs here are drastically different than what I grew up with. It can be unsettling even though it is mostly harmless (except termites, they are not harmless). Other differences border on natural catastrophe. In the East Bay, Antioch specifically, the weather did not seem to destroy anything. Pipes froze one cold winter, and the hot summers were tolerated because we all had AC.

zfs failed array recovery


I have had a nice ZFS RAIDZ in operation for the last 7 years now: four 1TB drives. The ~2.6TB of overall storage was more than sufficient, and while the drives were starting to show their age and perform worse and worse, I had no reason to replace them yet. After all, I don’t need MORE storage. Then one evening, my desktop failed to mount a mapped drive, specifically the one pointing at the storage path.
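When a share vanishes like that, the pool's health is the first thing worth checking (the pool name here is a placeholder):

```shell
# Quick health summary, then per-device detail for the suspect pool.
zpool status -x
zpool status -v tank   # "tank" is an assumed pool name
```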

ceph cluster

2015-05-22 | #Ceph #Storage

One thing I have learned over the past few years: if you do not have a solid data management policy, at some point there will be disasters. At work, we currently have two different types of storage. One, which I call “Tier 1”, is a beefy ZFS file server with very fast 900GB SAS drives. While the amount of storage is relatively small (~9TB), it is not meant for long-term storage.

the chernobyl of boulder creek


Okay, so it’s not exactly a nuclear reactor encased in a sarcophagus in our beautiful neighborhood. It is, however, a closed-down public school. The Redwood Elementary School of Boulder Creek, CA was shut down around 2003/2004 when it merged with Boulder Creek Elementary, according to the SLV school district notes. It is in very good condition for being effectively boarded up and shut down for over a decade.