I recently bought a Bambu Lab X1C 3D printer. This was inspired by how long it takes to machine the many small hobby parts that could be made adequately with a 3D printer for much less effort. Design is much easier when you can iterate with 5 minutes of work and a few hours of waiting per iteration.
Work has been really busy recently, so I decided to take a break from hobby engineering and do less ambitious fun things in my nights. I’ve been interested in learning to post-process 3D prints the way people do for props, and also for better 3D printed molds. This was kind of an experiment in that: I used a well-recommended filler primer from Rust-oleum to try to hide layer lines and turn the 3D print surface into a good surface for painting. It seemed to work fine, but when I tried to make a nice layer-less surface for casting, it became clear how badly PLA sands. There’s a Bondo product I’m going to try next.
Anyway, I like mushrooms and I found a guy on cults3d selling very nice mushroom models (https://cults3d.com/en/users/gazzaladra/3d-models), so I bought a bunch, printed them and spent a night painting them with my SO. Painting is harder than I remember; I haven’t painted since I was a child, and my mixed colors always turned out differently than I expected. However, these are mushrooms, so it’s realistic if they look a bit gross.
I get the impression that things are much less decorated than they used to be. This has been explained to me as a consequence of the increased cost of labor, which has some issues but to me wins out over other explanations. Some other explanations I’ve heard are that people are overstimulated in life and don’t buy decorative stuff so they have a place to recover, that plainness and lack of decoration is the new class signalling, and that a different civilization is responsible for all the nice stuff we see historically but don’t see built any more. These later explanations may be more correct, but they are unverifiable, while the first points to a specific effect that is definitely true. An issue with the first explanation is that technology has also lowered the cost of decoration itself, e.g. nice decals or impressions onto stamped metal, though maybe by less than it has lowered the cost of the underlying object. Regardless, I appreciate nice looking things and soulful things and want to live in a world with more of them.
Costs can be split into manufacturing costs and design costs. Something neat to me about additive manufacturing is that for a lot of methods, decorations are free, or at least very low cost. The decorations can also be different for each object produced. For example, Desktop Metal has a binderjet printer that prints wood (sawdust+binder) and dyes the material to resemble different types of wood (Forust). Because it’s a binderjet printer, small model changes for e.g. fake engravings don’t affect print time or cost, and because the material is already volumetrically dyed, custom colorations, patterns and images should also come free. This is the best example, but all binderjet printers and resin LCD printers should have basically free 3D decorations, and many 3D printing methods that allow color have the same print time regardless of the color used. Of course, all these methods rely on 3D printing actually being a competitive way to make a mass-produced product, but in my impression that’s already what a lot of 3D printing companies are betting on anyway. There are also many cases where literally the same object is sold from different places with several-fold differences in cost, so I think it’s very possible that most people would be willing to pay a small premium for custom items.
This leaves design costs. While I think there’s some human intervention required, there’s a lot of neat generative deep learning work that could be used to take the load off of human artists. For example, given a model of an object to print, a human probably needs to label visible surfaces vs. critical surfaces (surfaces which must remain unchanged for function). At the lowest end of engineering difficulty, aesthetic patterns or simple random textures/tilings can be applied. There are manufacturing methods which produce results like this already, where every customer gets a slightly different object in a nice way. At higher levels of engineering difficulty, generative deep learning models could produce stylized and themed decorations according to the desires of the consumer. I’m still thinking this side of everything through, so I’m collecting examples of technologies and art. While I have some experience making and training generative deep learning models, it’s not something where I’ve ever been happy with the result.
I was a bit interested in making a personal stamp (like for sealing letters), so I threw together a design in Blender, printed it, and cast it in silicone. Just a note: I was incredibly impressed with Blender’s sculpt mode. It felt like I was playing with actual clay, and no matter what I did the result was always a manifold mesh. I have plenty of experience with programs that can’t even maintain manifoldness after boolean operations on relatively simple shapes.
For the stamp itself, I think I used Smooth-On’s Ecoflex gel (don’t remember now). Silicones are nice because they are generally non-stick, so no extra mold release is needed. When cast, they can often transfer details finer than the eye can see, and this level of detail is used for some types of lithography.
As 405 nm floodlamps and SLA resin were more common where I did this than waxes, I used resin first. I was pretty happy with the results.
Clear resin.
Resin pigmented with lapis.
I also tried beeswax (not shown), shellac and polycaprolactone, but wasn’t as happy with any of those.
PCL stamp.
Shellac.
Traditionally, sealing wax was made from shellac and beeswax with pigments. Modern sealing wax is literally hot glue. Since I couldn’t get anything melted to work well, I wonder if this was because I used a silicone stamp instead of a metal one.
I was thinking this would be neat to turn into a small self-contained device, with stamp material (resin or wax), stamp and activator (405nm light or heater). Maybe wedding planners would be interested in it.
First time seeing leafcutter ants (Austin).
Leafcutter ant nests (Austin).
Stickbugs seen on a drive from Boston to Austin.
Large number of daddy-long-legs in a clump (Austin).
Biggest slug I’ve seen (Boston).
Biggest centipede I’ve seen (Austin).
Stiletto fly and prey (Austin).
Particularly large dragonfly (Austin).
Several more photos (Bloomington).
After a bit of fuddling I’ve gotten some inductively coupled plasmas started. Each of these pictures is at around 1e-1 torr. The two coils are 0.5uH and 5uH respectively. They are driven with a signal generator (at 13.56 or 27.12MHz) -> 100W shortwave amplifier -> MFJ-941B tuner -> BNC alligator clips. The plasmas were started with ~500V 20kHz AC on different feedthroughs.
Tuning was difficult but was made easier by monitoring radiated power with an SDR. It helped to add a 220pF capacitor in series to move the impedance closer to something a tuner would normally be used for. A 275pF capacitor would have yielded a circuit resonant at 13.56MHz with the 0.5uH coil, but the real part of the impedance would still have been mismatched. Using the balun tuner output failed, presumably because the 1:9 turns ratio moved the impedance further out of range. I also tried a home-wound ~10:1 transformer on a ferrite core (because the real impedance here is tiny compared to 50 ohms) and got some less impressive plasmas.
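For reference, the resonance arithmetic is quick to check (a sketch; treating the smaller coil as 0.5uH is my assumption, since that’s the value that makes 275pF resonate at 13.56MHz):

```python
import math

def series_c_for_resonance(f_hz, l_henry):
    """Series capacitance that resonates with inductance L at frequency f."""
    return 1.0 / ((2 * math.pi * f_hz) ** 2 * l_henry)

f = 13.56e6   # ISM drive frequency used above
L = 0.5e-6    # assumed inductance of the smaller coil
C = series_c_for_resonance(f, L)
print(f"{C * 1e12:.0f} pF")  # comes out near the 275pF discussed above
```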
At higher pressures (not shown) the plasma is much more concentrated around the coil and dimmer.
Antenna analyzer fed through the tuner when tuned to achieve a glow.
I’m basically using the setup from here, with a used 700mm focal length telescope mirror. It’s a very clumsy setup and I’m very happy I got any results. Seeing it with your own eyes is kind of magical.
In this setup, light from a point source (an LED covered in tin foil with a pinhole) is brought to an image that falls half on a razor blade, and the pictures are taken from behind the razor blade. Something that changes the air’s index of refraction (here, heat from candles) sits in the light path. If there were no index of refraction changes in the air, one would see a uniform field of half intensity; but refraction by the subject deflects rays so that more or less of the light clears the blade, and you see its effects as brighter and darker regions.
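As a rough sensitivity estimate (entirely my own sketch with assumed plume numbers; only the 700mm focal length comes from the setup above): an index gradient dn/dy over a path length z deflects a ray by roughly z*dn/dy, and the source image at the blade shifts by the focal length times that angle.

```python
# Order-of-magnitude schlieren visibility estimate; all plume numbers assumed.
n_air = 1.000277                 # refractive index of ~293 K air
T_room, T_plume = 293.0, 500.0   # assumed candle-plume temperature, K
n_plume = 1 + (n_air - 1) * T_room / T_plume  # (n-1) scales with density ~ 1/T

w = 5e-3   # assumed width of the plume edge, m
z = 2e-2   # assumed path length through the plume, m
f = 0.7    # mirror focal length from the setup, m

theta = z * (n_air - n_plume) / w  # ray deflection, radians
shift = f * theta                  # displacement of the image at the blade, m
print(f"deflection {theta * 1e3:.2f} mrad, image shift {shift * 1e3:.2f} mm")
```

A shift of a few tenths of a millimeter is comparable to the pinhole image itself, which is consistent with the effect being easily visible.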
Setup with LED, razor blade and mirror.
Phone camera; this is close to the appearance just looking with your eyes.
DSLR camera.
<Informal because I have no energy, also partially for my own reference later. I’ve been busy when not blogging.>
The PCBs I designed for the Bluetooth biofeedback thing arrived a few days ago. From OSH Park, these were ~10usd total with free shipping and took ~2 weeks. I’ve never done any QFN soldering (the leads are entirely under the IC) and I didn’t get a solder paste stencil. Big mistake. I spent a lot of painstaking time with an X-Acto knife, sewing needle and tweezers. I also tried using a hot air station for the first time, but a pan on the stove ended up working better.
The small one is meant to be worn on a strap on the finger; the large one is meant to be worn on a strap on the wrist.
SWD pins connected, and it can flash!!!!!!!
Because it will take time anyway for new inductors to arrive, I did a redesign and ordered more boards. I hope to actually be able to wear the next generation around. I’ll put in some filler posts on computing before then.
I don’t use .bib files and just populate bibliographies like:
\begin{thebibliography}{99}
\bibitem{Romanelli1989}
F.~Romanelli, Phys. Fluids B \textbf{1}, 1018 (1989).
%.... (other citations)
One issue is that this bibliography can end up out of order compared to the order of \cite{} commands in the text. The following Python script goes through the citations in the text and prints the bibliography in the appropriate order. Replacing the bibliography must be done by hand, but it’s safer that way.
# Collect citation keys in order of first use in the text.
with open("article.tex") as f:
    text = f.read()

cites = []
for line in text.splitlines():
    while True:
        a = line.find(r"\cite{")
        if a == -1:
            break
        line = line[a + len(r"\cite{"):]
        b = line.find("}")
        for key in line[:b].split(","):
            key = key.strip()
            if key and key not in cites:
                cites.append(key)
        line = line[b:]

# Pair each \bibitem line with the reference text on the following line.
refs = []
last_line = ""
for line in text.splitlines():
    if last_line.startswith(r"\bibitem{"):
        refs.append((last_line, line))
    last_line = line

# Print the bibliography entries in citation order.
for key in cites:
    for bib_line, ref_line in refs:
        end = bib_line.find("}")
        if key == bib_line[len(r"\bibitem{"):end]:
            print(bib_line)
            print(ref_line)
            break
Very much in the spirit of the 90-90 rule, this took much longer than I assumed.
So my goal is to put the galvanic skin response, hand temperature and SpO2 sensors onto a ring connected to a Bluetooth ‘watch’, and have some phone app which can stream the data. To play around with a microcontroller that has Bluetooth, I got the STEVAL-IDB008V2 dev board, which has a BlueNRG-2 chip. Because I’ve only played with the Nucleos so far, I was overoptimistic about Linux compatibility. On the plus side, my setup for development has gotten much more comfortable: I’m back to using emacs and makefiles, I’ve learned how to use st-util to flash and debug, and I’ve also gotten my first experience using logic analyzers with sigrok and PulseView.
The issue which held me up was actually getting programs onto the chip. While the dev board has a JTAG header, the default application disables those pins, so it cannot be programmed that way. The alternative is to flash it with the UART bootloader, which requires the chip be booted into a specific mode and uses a different pair of pins. The board has a USB port, but unlike the Nucleos, it is not connected to an ST-Link. All of the software for flashing the chip was Windows-only, and I had a lot of trouble getting that working.
Anyway, after much toil I managed to work out a way to reliably flash the chip using the Windows software in a VM, but at this point I had already mostly written a flashing tool using a Nucleo as an intermediary. A warning, on the very slim odds a reader is stuck on the bootloader like I was: as far as I can tell, the bootloader will respond with a positive acknowledge even to invalid addresses, only to fail when receiving the data to flash. To work the last bugs out of my tool, I compared the logic analyzer output of both methods flashing the chip with the same data. This revealed that the address bytes I was using were off, allowing me to fix the program.
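For reference, the framing I assumed: ST’s UART bootloaders (per application note AN3155; that the BlueNRG-2 bootloader behaves the same is my assumption) send the 32-bit address big-endian followed by an XOR checksum of those four bytes. A sketch of building that frame:

```python
def xor_checksum(data: bytes) -> int:
    """XOR of all bytes; ST's UART bootloaders append this to each field."""
    c = 0
    for byte in data:
        c ^= byte
    return c

def address_frame(addr: int) -> bytes:
    """Big-endian 32-bit address followed by its XOR checksum."""
    a = addr.to_bytes(4, "big")
    return a + bytes([xor_checksum(a)])

# Example frame for 0x10040000 (an assumed flash base address)
print(address_frame(0x10040000).hex())
```

Getting byte order or checksum wrong here produces exactly the silent-looking failure described above, which is why the logic analyzer comparison was so useful.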
~10usd logic analyzer that’s already worth it.
Address bytes and acknowledge in PulseView.
Full write in PulseView. First is the erase command, followed by blocks of 256 bytes.
The MAX30102 arrived and I’ve spent some time with it. I’ve also gotten metal disks and will try to make rings with polymer clay to hold the GSR electrodes on better. As such I’m busy and this will be brief.
The device itself is tiny.
After reading through the datasheets, I wrote something to set registers for SpO2 measurements (IR and red LEDs active; there’s a heart rate mode with only one), and after having bad luck guessing appropriate settings, I eventually found all the recommended settings in the application notes for the demonstration board.
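For my own reference, the setup boils down to a handful of register writes, something like the following (the register addresses are from my reading of the datasheet, the values are illustrative placeholders rather than the app note’s recommended settings, and i2c_write is a stand-in for the real bus driver):

```python
# (register, value) pairs for a minimal MAX30102 SpO2 setup.
# Addresses from my reading of the datasheet; values are placeholders.
MAX30102_ADDR = 0x57   # 7-bit I2C address

SPO2_SETUP = [
    (0x09, 0x03),  # MODE_CONFIG: SpO2 mode (red + IR LEDs active)
    (0x0A, 0x27),  # SPO2_CONFIG: ADC range / sample rate / pulse width
    (0x0C, 0x24),  # LED1_PA: red LED current
    (0x0D, 0x24),  # LED2_PA: IR LED current
    (0x08, 0x4F),  # FIFO_CONFIG: averaging, rollover, almost-full level
]

def configure(i2c_write):
    """i2c_write(addr, reg, value) is a placeholder for the real bus driver."""
    for reg, value in SPO2_SETUP:
        i2c_write(MAX30102_ADDR, reg, value)
```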
I had a bit of difficulty getting data from it, if just because it’s smart. There’s a 32-sample FIFO circular buffer for the data, which is accessed by repeatedly reading a single byte register. The read and write pointers are accessible but unnecessary for this, as they both automatically increment. Each sample is two 24-bit numbers (IR and red channels). As is, the program I’m using to read data sets an interrupt enable flag on the MAX30102 and then polls it for available data. There is also an almost-full interrupt, and I spent some time getting the MAX30102 to trigger the Nucleo, but I’m still having trouble reading multiple samples from the ADC.
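Unpacking a sample is just byte shuffling. A sketch (my assumptions from the datasheet: the red channel comes first, each channel is three bytes MSB-first, and only the low 18 bits are significant):

```python
def parse_fifo_sample(raw: bytes):
    """Split a 6-byte FIFO read into (red, ir) ADC counts."""
    assert len(raw) == 6
    red = int.from_bytes(raw[0:3], "big") & 0x3FFFF  # keep 18 significant bits
    ir = int.from_bytes(raw[3:6], "big") & 0x3FFFF
    return red, ir

print(parse_fifo_sample(bytes([0x01, 0x02, 0x03, 0x00, 0x00, 0x04])))
```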