Despite being aware of the open source movement since the mid-'80s, when RMS spoke at my university, I’ve never paid more than a couple of minutes of attention to the licenses. I guess I’ve just never been in the position of releasing any of my code to the public before, oddly enough. Anyway, now that I am releasing some small programs, I thought I should take a more detailed look. The first thing I noticed is that much of the code I was using specifies GPLv2, so I started there and was immediately digging into the differences between GPLv2 and GPLv3. It stands out that many people explicitly specify GPLv2, so I was curious why that might be. When I asked the question on Twitter, though, all I got was: “Why not BSD?” Sigh…
So now I’m reading the arguments behind GPL vs. BSD. The articles listed below are all helpful and instructive reads, high enough quality to have swayed my opinion back and forth.
I’m still not sure where this leaves me. I may end up going with GPL simply because a large percentage of code out there is under GPL. In particular the code I’m releasing is dependent on GPL code. Please comment if you have constructive advice.
I sent Paul Sobczak a VidiSynth a couple of weeks ago and he’s been running with it. He built a custom enclosure, figured out a good way to wire it all up and also has been experimenting with different ways to build the sensors using various suction cups for attachment to the video source (much cleaner than using tape which is my standard method).
He’s got a flickr set devoted to his work on the VidiSynth; check it out.
The VidiSynth is a circuit with multiple oscillators that are controlled with light sensors attached to a video screen.
Video + Synthesizer = Vidi Synth
The light sensors create interesting and complex sounds based on the intensity of different areas of the screen. I also learned from experimentation that if the sensors are attached to an LCD screen you get relatively normal square wave tones but if you use a CRT screen (TV or monitor) you get extra noisy and buzzy goodness because of the refresh.
I’ve been tinkering with electronics for just over 2 years and VidiSynth has been a huge part of my learning experience. It all started with early tinkering with 555 oscillators and my first optical theremin inspired by a video from Michael Una. From there I came up with the idea for VidiSynth and prototyped it using individual 555s.
That prototype actually required an audio mixer to combine the different channels because I hadn’t yet learned how to mix the signals. I lived with that prototype for quite a while, even building an interactive project around it for an exhibit at Twin Cities Maker during the annual Minne-Faire. Eventually an electrical engineer friend from the hackerspace urged me to evolve it into a PCB for potential sale as a kit. This opened up whole new areas for me to learn. Fortunately I have a couple of very patient EE friends who were a huge help along the way.
The design was done entirely in Kicad, which is a nice tool once you climb a bit of a learning curve. I eventually worked my way through all the steps: laying out the schematic (which involved creating some custom components), mapping the components to footprints (more custom work), and finally laying out the PCB. Kicad’s autorouting features worked great for my simple design, and along the way I learned details about vias, ground planes and trace parameters for power versus signal. The final step was to learn about all of the various layers that must be sent to the PCB manufacturer and make them all look the way I wanted. Finally I sent my design off to BatchPCB, ordered the components and waited.
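For the curious, the power-versus-signal trace sizing I mentioned boils down to a standard rule of thumb. Here’s a rough Python sketch of the commonly cited IPC-2221 approximation for external-layer traces; the current and temperature-rise numbers are just illustrative assumptions, not values from my board.

```python
# Sketch of the commonly cited IPC-2221 conductor-sizing approximation.
# Constants are the usual published ones for external layers; the example
# currents and temperature rise are illustrative assumptions.

def trace_width_mils(current_a, temp_rise_c=10.0, copper_oz=1.0, external=True):
    """IPC-2221 approximation: I = k * dT^0.44 * A^0.725, A in square mils."""
    k = 0.048 if external else 0.024
    area_sq_mils = (current_a / (k * temp_rise_c ** 0.44)) ** (1 / 0.725)
    thickness_mils = copper_oz * 1.378  # 1 oz copper is ~1.378 mil thick
    return area_sq_mils / thickness_mils

print(f"1 A power trace : {trace_width_mils(1.0):5.1f} mil")
print(f"0.1 A signal    : {trace_width_mils(0.1):5.2f} mil")
```

The takeaway for a small board like this: power traces may need widening for current, while signal traces come out so thin that the fab house’s minimum width is the real constraint.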
When my PCBs arrived I eagerly populated the first one, and IT WORKED!
The PCB version of the VidiSynth was born.
VidiSynth went through various prototypes before its current incarnation pictured above. It uses two 556 chips to implement four oscillators; the output of the four oscillators is mixed through a set of resistors into a 1/8” mono jack. On each oscillator, the resistive element that normally controls frequency is terminated on a terminal block to allow different options for controlling the frequency: photo-resistors as originally conceived, potentiometers for more direct control, or anything else that allows control of resistance. You could also add complexity by switching the channels with transistors.
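To give a feel for how light translates to pitch, here’s a quick Python sketch of the standard 555/556 astable frequency relation with a photoresistor standing in for one timing resistor. The component values and photoresistor range are illustrative assumptions, not the actual VidiSynth parts list.

```python
# The classic 555/556 astable approximation: f = 1.44 / ((R1 + 2*R2) * C).
# Here R2 plays the role of the photoresistor on the terminal block.
# All component values below are assumed for illustration.

def astable_freq_hz(r1_ohms, r2_ohms, c_farads):
    """Approximate oscillation frequency of a 555/556 astable circuit."""
    return 1.44 / ((r1_ohms + 2 * r2_ohms) * c_farads)

R1 = 1_000   # fixed resistor (assumed)
C = 100e-9   # timing capacitor, 100 nF (assumed)

# A typical CdS photoresistor might swing from ~1 kohm in bright light
# to ~100 kohm in darkness as the on-screen brightness changes.
for r_photo in (1_000, 10_000, 100_000):
    print(f"R2 = {r_photo:>7} ohm -> {astable_freq_hz(R1, r_photo, C):8.1f} Hz")
```

Brighter areas of the screen mean lower resistance and therefore higher pitch, which is the whole trick behind the instrument.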
Here is a list of a few interesting ways to use the VidiSynth that I’ve discovered:
1. As originally conceived you can connect photo-resistors randomly to a video screen and play your favorite movie or any old thing you have lying around to get interesting sounds. Film Noir is particularly dramatic.
2. Pipe the feed from a video camera to a display on a TV or monitor and you have an interactive instrument. I recently had a conversation about using Skype video conferencing along with this to facilitate a remote performance using VidiSynth.
3. I have written a MIDI-driven Processing script that displays grayscale blocks on the screen based on the MIDI commands. This allows sequenced control of all 4 channels. I plan on releasing this in the future once I finalize it.
4. Another method of sequencing I have used is to run a couple of channels through Mikey Delp’s Bender Sequencer, which was created for sequencing circuit-bent toys.
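To make the MIDI-to-screen idea concrete, here’s a rough Python analogue of what a script like my Processing one does: each MIDI channel owns one on-screen block, and note velocity sets that block’s gray level, which in turn sets the light hitting that channel’s photoresistor. The function names and layout here are my assumptions for illustration, not the actual script.

```python
# A rough Python analogue of the MIDI-driven grayscale-block idea.
# Velocity (0-127) is scaled to an 8-bit gray level (0-255); one gray
# level is tracked per VidiSynth channel. Names are illustrative.

def velocity_to_gray(velocity):
    """Map MIDI velocity 0-127 onto an 8-bit gray level 0-255."""
    if not 0 <= velocity <= 127:
        raise ValueError("MIDI velocity must be 0-127")
    return round(velocity * 255 / 127)

# One gray level per VidiSynth channel, updated as note-on events arrive.
channels = [0, 0, 0, 0]

def note_on(channel, velocity):
    channels[channel] = velocity_to_gray(velocity)

note_on(0, 127)  # full brightness -> highest pitch on channel 0
note_on(1, 64)   # mid gray on channel 1
print(channels)
```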
The possibilities are endless once you start combining different input methods. You could even mix multiple VidiSynths for more fun.
Below are a couple of video demonstrations of how the VidiSynth can be used. It’s a fairly simple circuit, but with that simplicity comes a flexibility that allows for some fun experimentation.
Before I move on to the demonstrations, here is a link to the schematic for the project:
Finally I would like to recognize Paul Sobczak who encouraged me to enter the 555 contest. He’s a smart and humble guy who is infinitely generous when it comes to inspiring people to do cool things and is always there to lend a helping hand. Also thanks to my two tremendously smart and helpful friends Mike Hord and Adam Wolf.
I’ve been trying to gather the materials to build a windscreen for the Zoom H2 digital recorder that I have. I found this video that gives great instructions, but it’s been a challenge to find the right fabric. My helpful mother-in-law found something she thought might work and wrapped it up for me for Christmas. I also found some high-density foam at the local craft store. So the time has come. The video referenced above gives instructions for a more durable version than what I’m going for here. I have a surplus of materials, so I may try that at some point in the future.
The first step is to cut out a reasonable chunk of foam.
Next I joined the seam with hot glue to make the foam into a tube.
I sealed one opening and rounded the corners.
I bought a rotary cutter for cutting fiberglass; it worked pretty well for this, much better than scissors.
Finally I wrapped the fabric around the foam, securing it with hot glue as I went. Here’s the recorder nestled in the finished screen.
And here’s the finished product in all its glory.
I’ll work on recording some tests once the temperature outside becomes more habitable.
My primary Arduino (I really need a few more of those) is currently allocated to another application, so I dug out one of my breadboarded Arduinos to start a new project. It’s been a while since I used it, so I had to scrape off some rust (from my brain, not the Arduino).
I’m not convinced I have it right, as there is a major drop in volume with the buffer versus without it that wasn’t present on the breadboard. I can’t decide if it matters how the input and output leads are wired.
Also bask in the wonder that are my Sugru embellished plugs from a previous project.
I breadboarded the FET buffer circuit that I mentioned a couple of days ago. I also rigged a quick and dirty A/B switch so I could do a simple demonstration of the results, which you can listen to below. Even this crude demonstration shows that there is a definite difference: the FET buffer clearly removes the tinny quality inherent in piezo pickups. I’ll build a more official contact mic including the buffer so I can play with it some more.
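For a rough sense of why the unbuffered piezo sounds tinny, here’s a quick Python sketch. A piezo disc behaves approximately like a capacitive source, so together with the load resistance it forms a first-order high-pass filter that rolls off the lows; the FET buffer’s megohm-range input impedance pushes that corner below the audio band. The ~15 nF piezo capacitance is an assumed ballpark, not a measured value.

```python
import math

# First-order RC high-pass corner frequency: f_c = 1 / (2*pi*R*C).
# A piezo disc is modeled here as a capacitive source (~15 nF assumed);
# R is the input impedance of whatever it is plugged into.

def highpass_corner_hz(r_ohms, c_farads):
    """Corner frequency of a first-order RC high-pass filter."""
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

C_PIEZO = 15e-9  # assumed ballpark piezo capacitance

print(f"10 kohm line input : {highpass_corner_hz(10e3, C_PIEZO):7.1f} Hz")
print(f"1 Mohm FET buffer  : {highpass_corner_hz(1e6, C_PIEZO):7.1f} Hz")
```

With a low-impedance input, everything below roughly 1 kHz gets rolled off, which is exactly the thin, tinny sound the buffer fixes.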
See my previous post FET Buffer Links for a schematic and other details about the circuit.
As I expect is generally the case with RGB LEDs, the BlinkM needs a diffuser to properly mix colors. I’ve used a variety of things for diffusers in the past, usually whatever was within arm’s reach. They tended to be relatively large and fragile.
This time I wanted to come up with something more compact but that still used whatever materials I have on hand. One of the first things I came across was a bag full of jewel cases from software long obsolete. Primary material search complete.
My idea was to stack some small chunks of jewel-case plastic, glue them with super glue, and drill a hole in the bottom so that the stack would sit on the rounded top of the BlinkM LED. I thought it might be necessary to sand the layers to add diffusion.
So that’s what I did. I used a dremel cutoff wheel to slice up 4 similar-sized chunks around 3/8”x1/2”. I used some gel-type super glue to stick them together. Then I clamped the stack in my PCB vise and rounded the edges with a dremel round sander. Finally I found a drill bit close to the diameter of the BlinkM dome and carefully drilled a hole. An added bonus was that the super glue fogged the plastic, so no extra sanding was required.
The pictures above show the BlinkM set to yellow with and without the diffuser. It’s a bit tough to capture the intensity and color in an image, but you get the idea. I think it could use another couple layers of plastic and perhaps some more creative shaping, but isn’t too bad for a first attempt.