Light Propagation through Flesh with Blender Scripting

This blog post documents my attempts to set up a Blender pipeline to simulate the path of light through flesh, for studying PPG (photoplethysmography). I ultimately found that Blender’s Cycles rendering engine makes some unacceptable approximations with lighting.

Why

I’ve been back to working on biomonitoring projects (see https://akmemorph.us/biofeedback-smartwatch-update/, from 2019). Wearable SpO2/PPG sensors have become ubiquitous since then, and the field has seen some improvements. There is research on where to put the photodiode relative to the light source (one advance has been to use multiple photodiodes), and on different wavelengths of light, but there is a conspicuous absence of research on optics.

It is conceivable to me that controlling the distribution of radiance and the angular sensitivity of the photodiode would give better results. And given that phone cameras work at the size they do, simple optics are probably possible in a watch form factor. I’m also working on a wearable multi-sensor chest strap, which will be less space-constrained than a watch.

From looking at a variety of wearable PPG sensors, optics seem to be very rare, though my Samsung Fit 3 does have what looks like a Fresnel lens on it.

Plenty of research uses ray tracing to investigate PPG without considering illumination or sensor optics (e.g. “Optical Ray Tracing Simulation by Using Monte Carlo Method for Reflectance-based Photoplethysmography Sensor in Human Skin and Fingertip Model”). For this reason I wanted to get a ray tracing simulation set up.

I started using Blender for this project because it seemed to have all the relevant physics (volumetrics, anisotropic scattering and absorption, refraction, and Fresnel reflections). People have even modeled cameras in Blender for photorealistic lens aberrations. Blender scripting is powerful and reasonably friendly, and I’ve used it professionally for digital twins and synthetic datasets for machine learning.

How

The following Python script has a table of flesh optical properties from a paper and adds a series of rectangular layers to an empty Blender scene. It also sets up a camera and a stand-in light source. For multi-spectral work, the script could loop through the optical properties at each wavelength of interest.


# Invoke me with
# ~> blender -P script.py
# Add -b to run in the background

import bpy
import math

class OpticalProperties:

    def __init__(self, IoR, scatter_pmm, absorption_pmm, anisotropy):
        self.IoR = IoR
        self.scatter_pmm = scatter_pmm
        self.absorption_pmm = absorption_pmm
        self.anisotropy = anisotropy

class Region:

    def __init__(self, name, thickness_mm, properties):
        self.name = name
        self.thickness_mm = thickness_mm
        self.properties = properties

light_distance_mm = 1.0

# From "Optical Ray Tracing Simulation by Using Monte Carlo Method for ... computational and experimental research ...", 2022
# Uses Henyey-Greenstein for anisotropy
regions = [Region('Air', light_distance_mm,
                  OpticalProperties(1., 0., 0., 0.)),
           Region('Epidermis', 0.2,
                  OpticalProperties(1.433, 10.7, 0.27, 0.79)),
           Region('Dermis_First_Layer', 0.2,
                  OpticalProperties(1.396, 18.7, 0.27, 0.81)),
           Region('Dermis_Plexus_Superficial', 0.2,
                  OpticalProperties(1.396, 19.2, 0.27, 0.82)),
           Region('Dermis_Second_Layer', 0.9,
                  OpticalProperties(1.396, 18.7, 0.27, 0.81)),
           Region('Plexus_Profundus', 0.6,
                  OpticalProperties(1.4, 22.5, 0.27, 0.78)),
           Region('Subcutaneous', 8.0,
                  OpticalProperties(1.37, 5.0, 0.003, 0.75))]

# Reset scene
bpy.ops.wm.read_factory_settings(use_empty=True)

# Add flesh layers
starting_z_mm = 0
for i, region in enumerate(regions):
    bpy.ops.mesh.primitive_cube_add(
        size=1,
        location=(0, 0, starting_z_mm - 0.5 * region.thickness_mm),
        scale=(16, 16, region.thickness_mm))
    layer = bpy.context.view_layer.objects.active
    layer.name = "{:03d}_".format(i) + region.name
    mat = bpy.data.materials.new(name=region.name + "_mat")

    nodes = mat.node_tree.nodes
    links = mat.node_tree.links

    nodes.clear()

    volume_coefficients = nodes.new(type="ShaderNodeVolumeCoefficients")
    glass_bsdf = nodes.new(type="ShaderNodeBsdfGlass")
    output = nodes.new(type='ShaderNodeOutputMaterial')

    volume_coefficients.location = (0, -250)
    output.location = (300, 0)

    absorption = region.properties.absorption_pmm
    scatter = region.properties.scatter_pmm
    volume_coefficients.inputs['Absorption Coefficients'].default_value = (absorption, absorption, absorption)
    volume_coefficients.inputs['Scatter Coefficients'].default_value = (scatter, scatter, scatter)
    volume_coefficients.inputs['Anisotropy'].default_value = region.properties.anisotropy

    links.new(volume_coefficients.outputs['Volume'], output.inputs['Volume'])

    glass_bsdf.inputs['IOR'].default_value = region.properties.IoR

    links.new(glass_bsdf.outputs['BSDF'], output.inputs['Surface'])

    layer.data.materials.append(mat)

    starting_z_mm -= region.thickness_mm

# Add camera
bpy.ops.object.camera_add(location=(0, 0, 15))
camera = bpy.context.view_layer.objects.active
bpy.context.scene.camera = camera

# Add light
bpy.ops.object.light_add(type='AREA', location=(3, 0, 0))
light = bpy.context.view_layer.objects.active
light.data.shape = 'RECTANGLE'
light.data.energy = 1000
light.data.size = 2.0
light.data.size_y = 1.0
light.data.spread = math.radians(60)

# Render settings
scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.render.filepath = "/tmp/output.png"
scene.render.resolution_x = 512
scene.render.resolution_y = 512

# Render
#bpy.ops.render.render(write_still=True)
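As a sanity check on the coefficients in the table, it’s worth computing the transport mean free path 1/(μa + μs(1 − g)): the distance over which scattering randomizes a photon’s direction. This is a standalone sketch (separate from the Blender script) using the Dermis_First_Layer row:

```python
def transport_mfp_mm(scatter_pmm, absorption_pmm, anisotropy):
    """Transport mean free path in mm: roughly the distance over which
    light loses memory of its original direction."""
    reduced_scatter = scatter_pmm * (1.0 - anisotropy)  # mu_s' = mu_s * (1 - g)
    return 1.0 / (absorption_pmm + reduced_scatter)

# Dermis_First_Layer from the table: mu_s = 18.7/mm, mu_a = 0.27/mm, g = 0.81
mfp = transport_mfp_mm(18.7, 0.27, 0.81)
print(f"{mfp:.2f} mm")  # roughly a quarter of a millimetre
```

For the dermis values this comes out to about 0.26 mm, consistent with the intuition that light entering flesh is fully diffused within a few tenths of a millimetre.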

The script sets up a shader block for each layer that looks like:

However, I noticed something suspicious: the image differs when the glass shader is missing versus when it is present with an IoR (index of refraction) of 1. This raised the question: how does Blender handle IoR anyway? A 2D plane with a refractive shader was visible and changed the image viewed through it, which is wrong. It is conceivable to me that rays could remember the IoR of the medium they are in and update it when crossing surfaces, which would be physically correct but wouldn’t work for non-manifold objects.
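An index-matched interface should be optically invisible: with an IoR of 1 on both sides, the Fresnel equations give zero reflectance and rays pass through unbent, so the two renders should match. A quick pure-Python check of the exact unpolarized Fresnel reflectance (standalone, not part of the Blender script):

```python
import math

def fresnel_unpolarized(n1, n2, theta_i):
    """Reflectance of unpolarized light at a dielectric interface,
    averaging the s- and p-polarized Fresnel coefficients."""
    sin_t = n1 / n2 * math.sin(theta_i)
    if sin_t >= 1.0:
        return 1.0  # total internal reflection
    cos_i = math.cos(theta_i)
    cos_t = math.sqrt(1.0 - sin_t * sin_t)
    r_s = ((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)) ** 2
    r_p = ((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)) ** 2
    return 0.5 * (r_s + r_p)

# Matched indices: nothing reflects, at any angle
print(fresnel_unpolarized(1.0, 1.0, 0.7))   # ~0 (no reflection)
# Air to epidermis (IoR 1.433 from the table), normal incidence
print(fresnel_unpolarized(1.0, 1.433, 0.0)) # ~0.032
```

At IoR 1 the reflectance is zero at every angle, so any difference between the two renders has to come from the renderer rather than the physics.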

A purely refractive cylinder has a plausible shadow and a plausible lensing effect. However, with an IoR of 1, the cylinder disappears (as it should) but still casts a shadow (as it shouldn’t).

I did more research into this, and it appears to be a consequence of the approximations Cycles’ path tracing makes regarding lighting (see https://docs.blender.org/manual/nb/4.0/render/cycles/optimizations/reducing_noise.html#render-cycles-reducing-noise-glass-and-transp-shadows). There are good hacks to improve the appearance in rendering edge cases like this, but none that give the physically realistic behavior I want.

I’m moving to a real ray tracer (http://mcx.space/ or https://mitsuba.readthedocs.io/en/stable/index.html), but that’s a future post.
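In the meantime, the heart of such a Monte Carlo photon simulator is drawing scattering angles from the Henyey-Greenstein phase function, which is what the anisotropy column in the table above parameterizes. A minimal pure-Python sketch of the standard inverse-CDF sampler (illustrative only, not code from MCX or Mitsuba):

```python
import math
import random

def sample_hg_cos_theta(g, rng=random):
    """Draw cos(theta) of a scattering deflection from the
    Henyey-Greenstein phase function with anisotropy g."""
    u = rng.random()
    if abs(g) < 1e-6:
        return 1.0 - 2.0 * u  # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - s * s) / (2.0 * g)

# The mean cosine of the sampled deflections should converge to g
random.seed(0)
g = 0.79  # epidermis anisotropy from the table
mean = sum(sample_hg_cos_theta(g) for _ in range(200_000)) / 200_000
print(round(mean, 2))  # close to g = 0.79
```

The mean sampled cosine converging to g is a handy property for unit-testing a scattering implementation.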