20 March 2023

Omni-Pi-tent and Dynamic Self-repair Project Summary

The Omni-Pi-tent and Dynamic Self-repair projects were finished in autumn of 2021. 

 

Most of the project's goals were achieved, including:

  • Docking between moving modular robots without need of external guidance infrastructure


  • Dynamic self-assembly with real hardware 


  • Simulation of Dynamic Self-repair



  • Development of a convenient new data structure ("Quadruplets") for defining modular robotic structures

The one thing for which there was not quite time was completing the full Dynamic Self-repair procedure with the hardware; a few more months could perhaps have been enough.


The following publications provide further details, particularly the thesis:

R.H. Peck, Self-repair during continuous motion with modular robots, PhD thesis, 2021

R.H. Peck, J. Timmis, A.M. Tyrrell, Self-Assembly and Self-Repair during Motion with Modular Robots, MDPI Electronics, 2022


Please don't hesitate to contact me using the commenting feature on this blog if you are interested.



13 February 2023

Project summary posts coming soon

I'll post some project summaries soon, with links to publications and various images and videos.

In the meantime, if you were advised by that autoreply email to comment on here to contact me, please comment here with your questions. I don't yet know if Blogger will let you leave an email address in the comment field, so I suggest commenting with something like:

Question: description....

Email: you can contact me back at name(at)example(dot)com

I'll review such comments, so the email address you leave won't end up visible to the wider internet.

28 May 2021

Compass Calibration for Mobile Robots

A compass would initially seem a good way, perhaps the obvious way, to give a robot a global rotational co-ordinate frame. Relying on the Earth's magnetic field would appear to give a robot a navigational aid without depending on external infrastructure; if you do want to class the Earth's magnetic field as external infrastructure, then just be glad it is a lot less likely to go down than the remote server farms which some robots rely on.

In practice though, the magnetic field as read by a robot's magnetometer is strongly distorted by magnetic and metallic parts within the robot. A normal hand compass placed on a robot will not reliably track north as the robot rotates; it will swing wildly back and forth as different items within the robot contribute to the measured magnetic field and outweigh the Earth's weak field, 49.5μT where we are, with only 18.3μT of this being horizontal.

These distortions come in two types, hard iron and soft iron. Hard iron distortions come from magnetised objects within the robot; soft iron distortions arise where metal objects provide a preferential path for magnetic fields.

By considering all three components of the measured magnetic field, X, Y and Z, rather than simply the overall direction, we can see that a magnetometer in ideal* circumstances gives X, Y, Z readings which form a sphere when plotted as the magnetometer is rotated about all axes. If one imagines a compass sited at the centre, then its needle points to the edge of the circle as rotations happen and north is correctly followed.

*in practice the magnetometer by itself is not ideal either; every device will need somewhat different calibration constants to account for effects such as the stresses on a MEMS device caused by solder joints

Idealised magnetometer readings, an X-Y plot of all readings taken while rotating in a sphere

Hard iron distortions shift this sphere away from the centre. A normal compass, or performing basic trigonometry on raw magnetometer readings to get directions, would in the example below point within just a ±45° region for most of a rotation. If the hard iron distortion were more severe it could give identical readings when facing in directions 180° apart.

Hard iron distortions
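As a tiny illustration (not code from the project), the naive trigonometry that such offsets defeat looks something like the following; the function name is purely illustrative.

#include <math.h>

// Naive heading from raw readings, assuming a perfectly level magnetometer
// and no distortion; rawX and rawY are the horizontal field components.
double naive_heading_deg(double rawX, double rawY) {
    return atan2(rawY, rawX) * 180.0 / M_PI;
}
// A hard iron offset of a few tens of microtesla, comparable to or larger than
// the roughly 18 uT horizontal component of the Earth's field here, shifts rawX
// and rawY so that this angle barely changes as the robot rotates.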

 

Soft iron distortions turn the sphere into an ellipsoid, causing the measured strength to reduce in some directions and increase in others. They are less common in largely plastic robots such as Omni-Pi-tent but do occur.

Soft iron distortions
 

With both effects in play the ideal sphere can become something like this; note how a compass needle in this example could give the same reading for orientations 180° apart, given that the direction from the centre to the edge of the ellipsoid is the same on opposite sides of the point cloud. 

 

Combined hard and soft iron magnetometer distortions, note the arrow showing how the same direction is read even for rotations on opposite sides of the point cloud

Calibration lets us convert the off-centre distorted ellipsoid back to a centred sphere. Each reading can be passed through a series of linear equations to remove the hard and soft iron effects and give an idealised reading. The calibration constants are found by rotating the robot about all axes while taking a constant stream of magnetometer readings, producing a table of X, Y, Z values.

At this point we can turn to a very useful C implementation of Li's least squares ellipsoid specific fitting algorithm, written in 2013 for maritime purposes by M. Boulanger (Bermerlin), which measures the ellipsoid shape of the measured point cloud and calculates a series of transforms by which it can be converted to a centred sphere. The C code compiles under gcc on Linux, although it benefits from minor modifications to the filepath and the data-reading format: sscanf(buf, "%lf,%lf,%lf", &x, &y, &z); is more applicable to most data files than the \t separators in the original code.

The C code outputs three numbers for the hard iron compensation and a 3×3 matrix (nine elements) for the soft iron compensation. These are then used as follows to convert later magnetometer readings:

 X1 = RawX - (8.032211);   // X-axis removal of hard iron distortions; values as printed by the C code are always subtracted
 Y1 = RawY - (-16.737712); // Y-axis
 Z1 = RawZ - (-23.744605); // Z-axis
 
 Xout = 1.176539*X1 + 0.002427*Y1 + (-0.011834)*Z1; // X-axis removal of soft iron distortions; values are used with the sign they are printed with
 Yout = 0.002427*X1 + 1.222201*Y1 + 0.006130*Z1;    // Y-axis
 Zout = (-0.011834)*X1 + 0.006130*Y1 + 1.275629*Z1; // Z-axis

The next step is to make appropriate use of those Xout, Yout and Zout calibrated magnetometer readings to calculate a heading. At latitudes any distance away from the equator the magnetic field vector has a strong downward component, which means a magnetometer placed at any angle other than exactly upright can give inaccurate headings if you simply attempt trigonometry with the X and Y components. Some solutions involve masochistic attempts to carry out trigonometry in multiple reference frames with angular separations between them; these methods also tend to have singularities and gimbal lock for certain angular combinations. Instead, as inspired by Pololu's compass compensation library, we can use some beautifully straightforward vector products to get first a West vector from the partly downward (here in the Northern hemisphere) magnetic field vector and the downward gravity vector**, then a North vector from the gravity vector and the West vector, and then a heading from projections of the West and North vectors.

**if you define gravity as upward, as Pololu's library does, then you get an East not West vector.
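A minimal sketch of those vector products is given below, assuming calibrated magnetometer readings and a gravity vector defined as pointing downward (i.e. the negated static accelerometer reading), both in the robot's body frame; the function and variable names are illustrative rather than taken from the Omni-Pi-tent code.

#include <math.h>

// Cross product: out = a x b
static void cross(const double a[3], const double b[3], double out[3]) {
    out[0] = a[1]*b[2] - a[2]*b[1];
    out[1] = a[2]*b[0] - a[0]*b[2];
    out[2] = a[0]*b[1] - a[1]*b[0];
}

static void normalise(double v[3]) {
    double n = sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    if (n > 0.0) { v[0] /= n; v[1] /= n; v[2] /= n; }
}

// Tilt-compensated heading in degrees. m is the calibrated magnetometer reading,
// g is the gravity vector defined as pointing DOWN, both in the body frame.
double tilt_compensated_heading(const double m[3], const double g[3]) {
    double west[3], north[3];
    cross(m, g, west);     // field x down-gravity points West (field partly downward, Northern hemisphere)
    cross(g, west, north); // down-gravity x West points North
    normalise(west);
    normalise(north);
    // Project the robot's forward axis (taken here as body X, i.e. (1,0,0)) onto West
    // and North; negating the West term gives a heading measured clockwise from North.
    double heading = atan2(-west[0], north[0]) * 180.0 / M_PI;
    if (heading < 0.0) heading += 360.0;  // wrap into 0-360 degrees
    return heading;
}

Because both vectors are built from cross products with gravity, any tilt of the magnetometer is removed before the final projection, avoiding the multi-frame trigonometry described above.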

The Omni-Pi-tent modules were designed to make use of BNO 055 IMU chips, which combine a magnetometer, accelerometer and gyroscope; these looked a good choice early in the design process due to their internal software, which performed self-calibration procedures to remove distortion effects. Unfortunately this internal software on the BNO 055 was constantly attempting to optimise its internal calibrations, and therefore lost the true calibration in favour of values calculated over brief time periods in which the robot had performed only a fraction of a rotation. This meant the robots had to be constantly subjected to manual handling, lifting and rotation about multiple axes, to coax the BNO 055 into recalibrating again with a correct set of constants. To make use of the more manual method described above, the BNO 055 had to be put into a non-smart mode from which plain magnetometer and accelerometer readings could be read. This was done with some modifications to Adafruit's BNO 055 Arduino library, with the changes mainly focused on accessing Page 1 of the BNO's address space while in CONFIG mode, setting the MAG_CONFIG register to 00010101b and the ACC_CONFIG register to 00001001b, before switching the device back to Page 0 and entering the ACCMAG running mode. This turns the fancy and expensive BNO 055 into a simple combined magnetometer and accelerometer chip, many other options for which could have been available had the pinout of the BNO 055 not been fixed into the robot's design at the point when the flaws in the BNO's self-calibration were discovered.
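A rough, standalone sketch of the register sequence described above is shown below, using plain Wire transactions rather than the modified Adafruit library; the register addresses and mode value are as listed in the BNO055 datasheet (check against your copy), and this should be treated as an illustration rather than the exact code used on the modules.

#include <Wire.h>

#define BNO055_ADDR    0x28  // default BNO055 I2C address
#define REG_PAGE_ID    0x07
#define REG_OPR_MODE   0x3D
#define REG_ACC_CONFIG 0x08  // Page 1
#define REG_MAG_CONFIG 0x09  // Page 1

static void bnoWrite(uint8_t reg, uint8_t value) {
  Wire.beginTransmission(BNO055_ADDR);
  Wire.write(reg);
  Wire.write(value);
  Wire.endTransmission();
}

void configureBnoAsPlainAccMag() {
  Wire.begin();
  bnoWrite(REG_OPR_MODE, 0x00);          // CONFIG mode, so configuration registers can be written
  delay(25);
  bnoWrite(REG_PAGE_ID, 0x01);           // switch to Page 1 of the address space
  bnoWrite(REG_MAG_CONFIG, 0b00010101);  // magnetometer configuration value as described above
  bnoWrite(REG_ACC_CONFIG, 0b00001001);  // accelerometer configuration value as described above
  bnoWrite(REG_PAGE_ID, 0x00);           // back to Page 0
  bnoWrite(REG_OPR_MODE, 0x04);          // ACCMAG mode: raw accelerometer and magnetometer only
  delay(20);
}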

This method worked ideally with the Omni-Pi-tent modules, giving far better accuracy than trusting the BNO 055 IMU's internal self-calibration, and should work well provided three conditions are met:

  • Robots should be designed to keep motors and other magnetised parts as far away as feasible from magnetometer chips; this should ensure that magnetic fields from the motors cannot saturate the magnetometer's readings. So long as hard iron distortions do not take the magnetometer to its limits it should be possible to remove them and centre the point cloud.
  • Robots should keep wires and traces which carry varying amounts of high current well away from the magnetometer. Whilst a large, but completely constant, current could be removed by running the calibration procedure while the robot was drawing full current, any large current which varies is not so easy to account for.
  • Robots should avoid having moving magnetised parts within them. Brushless motors, where the rotor is the permanently magnetised element, would give different distortions depending on their position; Omni-Pi-tent's motors are all brushed DC with magnetised stators so do not cause this problem. Where magnetised parts of a robot are on movable joints it may be necessary to perform a full calibration procedure for a whole variety of joint angles and have the calibration constants in the magnetometer compensation routine switched between different values depending on actuator positions. Omni-Pi-tent does have this situation, given the presence of wheel and hook motors within Port 4, the port raised on the 2DoF hinge, but it has not caused difficulties at present because all use of accurate compass navigation presently takes place with the hinge in the centre position.

Naturally, if your robot is operating in an indoor environment where the magnetic field is distorted by metal girders, sewer pipes or electrical mains then the magnetic field will not point to global north; however, the field readings after distortion compensation and tilt compensation will still provide a measure of orientation which two robots close together can agree upon and use for navigation within a smaller area.

We will try to make available the full toolchain for compass calibration and compensation with the BNO 055 at some later point in the project.

Clusterising V-REP, An Easy Guide

In the Dynamic Self-repair project we've been using large numbers of V-REP simulations to gather statistical data with which to compare the performance of different modular robot group behaviours against each other. The number of simulations which can be run is, if you get things right, well in excess of what you can run with real hardware, and yet, as it is a high-fidelity, physically detailed simulator, V-REP is liable to run slower than the real robots. As in many swarm and modular robotics experiments, each simulated scenario, long and computationally intensive as it is, leads to effectively one data point for the analysis stage. And as V-REP runs slower than the real hardware, at least in our scenarios with large numbers of fairly physically detailed robots in scenes, parallelising becomes very important. Getting clusterised V-REP to run took quite a while for someone like myself with a background in physics rather than computer science, so I thought I'd share here how it is done in case anyone else finds themselves needing to do V-REP bulk runs. The video below shows an example of our self-assembly simulations, captured while running on the cluster.

This guide provides a script for running V-REP on clusters with a SLURM workload manager and explains the key points in its operation. This method was developed for V-REP 3.5, which we have been using, but should still be applicable to later versions, including CoppeliaSim.

We'll start with the .job file for sbatch submission:


#!/bin/bash
#SBATCH --job-name=insert_name       # If this script is used please acknowledge "Robert H. Peck" for providing it
#SBATCH --mail-type=END,FAIL             # Mailing events
#SBATCH --mail-user=YourEmail@example.com     # Where to send emails 

#SBATCH --mem=2gb                        # Job memory request, 2gb is typically enough for V-REP
#SBATCH --time=47:00:00                  # Time limit hrs:min:sec, always set this value somewhat above the maximum wristwatch time a job will require

#SBATCH --nodes=100 #as many tasks as nodes; this will give 100 parallel runs in this example
#SBATCH --ntasks=100
#SBATCH --cpus-per-task=1 #V-REP can only use one CPU at a time

#SBATCH --output=name_%j.log        # Standard output and error log
#SBATCH --account=if_applicable_on_system        # Project account
#SBATCH --ntasks-per-core=1 #only 1 task per core, must not be more
#SBATCH --ntasks-per-node=1 #only 1 task per node, must not be more
#SBATCH --ntasks-per-socket=1 #typically won't want multiple instances trying to use same socket

#SBATCH --no-kill #prevents restart if one of the 100 gets a NODE_FAIL

echo My working directory is `pwd`
echo Running job on host:
echo -e '\t'`hostname` at `date`
echo
 
module load toolchain/foss/2018b #depending on the cluster setup other modules may need to be loaded to support V-REP's dependencies
cd scratch #filepaths will vary
cd V-REP_PRO_EDU_V3_5_0_Linux #V-REP's own folder

chmod +x generic_per_node_script.sh #ensure that the bash script is executable


VariableName1=4 #numerical variable to provide to V-REP
VariableName2="text" #string variable to provide to VREP
VariableName3="[[1,1,4,4],[1,3,4,7]]" #array, or table in lua, to provide to VREP
VariableName4="S5_filename" #filename, as another string, for V-REP to open


srun --no-kill -K0 -N "${SLURM_JOB_NUM_NODES}" -n "${SLURM_NTASKS}" ./generic_per_node_script.sh ${VariableName1} ${VariableName2} ${VariableName3} ${VariableName4} #but where is V-REP I hear you ask?


wait
 
echo
echo Job completed at `date`

The --dependency=afterany: flag should be used if submitting a series of jobs of this kind to a cluster (e.g. sbatch --dependency=afterany:<job ID of the previous batch> next_batch.job); this ensures that subsequent jobs do not share nodes with, and conflict with, earlier batches of simulations.

So where is V-REP running? Well, it isn't quite yet, because whilst there are ways to launch V-REP directly from the job script, much more can be achieved if the job script instead launches a bash script on each node which then handles V-REP's functionality.

This is especially useful for video recording: if you want, on occasion, to get a visualisation of a simulation, this method lets you take a screen-captured video as it runs. V-REP natively has a video recording function, but it can only be launched by pressing a button within the simulator's GUI and cannot be triggered from the command line; this method allows you to record video from clusterised simulations despite that.

#!/bin/bash

#If this script is used please acknowledge "Robert H. Peck" for providing it
ScreenShottingPeriod=10 #defines a period in seconds of wristwatch time at which frames are taken
frame_rate=10 #defines the rate at which the output video will play these frames
TimeNow=$(date +'%d-%m-%Y_%H:%M:%S')
mkdir "vrep_frames_at_${TimeNow}_on_node_${SLURM_NODEID}"
#we have just made a folder within V-REP's folder, and on the scratch part of the cluster filesystem

Arg1=$1
Arg2=$2
Arg3=$3
Arg4=$4
Arg5=$5

if [[ -n "$Arg1" ]]; then #processing of variables from the job script, setting to default values if none are supplied
Var1=$Arg1
else
Var1=5
fi

if [[ -n "$Arg2" ]]; then
Var2=$Arg2
else
Var2=10
fi

if [[ -n "$Arg3" ]]; then
Var3=$Arg3
else
Var3="[[1,4,3,5],[1,3,1,8]]"
fi

if [[ -n "$Arg4" ]]; then
Var4=$Arg4
else
Var4="S1_filename"
fi

xvfb-run -s "-screen 0 1024x768x24" --server-num=92 ./vrep.sh -gvrep_frames_at_${TimeNow}_on_node_${SLURM_NODEID} -g${Var1} -g${Var2} -g${Var3} ${Var4}.ttt -s1200000 -q > /dev/null & #V-REP is launched within an xvfb environment to provide it with a graphical frontend, without which it cannot operate, variables 1,2 and 3 are supplied to the simulation and the fourth variable is used to select which simulation file to open. The server number should be set differently for different users of the cluster to avoid conflict between multiple xvfb users.
#now that we have launched V-REP we shift to the node's local filesystem

cd /tmp #this saves temporary files to the node's own storage rather than stressing interconnects with multiple small writes to the scratch directory
#and make a copy of the images directory on here, with the same name as the scratch copy
mkdir "vrep_frames_at_${TimeNow}_on_node_${SLURM_NODEID}"
cd /tmp/vrep_frames_at_${TimeNow}_on_node_${SLURM_NODEID}
#echo "local dir made on ${SLURM_NODEID}"
sleep 20 #this sleep is crucial, it ensures v-rep actually is running and has a pid by the time the next command runs
VREPpidof=$(pgrep -f "YourUsername.*Linux.*vrep")
#echo "vrep on ${SLURM_NODEID} 's pid is ${VREPpidof}"
PSPanswering=$(ps -p ${VREPpidof})
#echo "ps -p says ${PSPanswering}"
CountingVar=1 #counter to handle file names
while ps -p ${VREPpidof} > /dev/null; do #monitor V-REP's pid, until V-REP ends keep doing this loop
    TimeNow2=$(date +'%d-%m-%Y_%H:%M:%S')
    PicFileNameNow=$(printf "image-%0.5i.png\n" $CountingVar) #$(echo "image-${CountingVar}.png")
    #the PicFile goes in the /tmp local filesystem
    xwd -display :92 -root -silent | convert xwd:- ${PicFileNameNow} #takes the screenshot on screen 92, this 92 should be changed for different individuals using a cluster, as should it when launching xvfb
   
    sleep ${ScreenShottingPeriod} #sleep until we want the next image capture
    CountingVar=$((CountingVar+1)) #iterate the file numbering counter
done

wait #don't finish things off until V-REP is finished

#echo "vrep finished on ${SLURM_NODEID}, processing vid"

outputVidName=$(echo "${Var1}_${Var2}_${Var3}_V-REP_recording_${TimeNow2}_on_${SLURM_NODEID}")

module load vis/FFmpeg/4.1-foss-2018b #adds the ffmpeg module; module names and versions may vary on different systems
ffmpeg -r ${frame_rate} -start_number 1 -i image-%05d.png -c:v libx264 -vf fps=${frame_rate} -pix_fmt yuv420p ${outputVidName}.mp4 2> /dev/null #creates the video file from the numbered frames; ensure this happens on the local filesystem; command line output is redirected to trash

sleep 50 #ensure that the vid has saved properly

cp ${outputVidName}.mp4 ~/scratch/V-REP_PRO_EDU_V3_5_0_Linux/vrep_frames_at_${TimeNow}_on_node_${SLURM_NODEID}/${outputVidName}.mp4
#copy the video in to the scratch directory copy of "vrep_frames_at_${TimeNow}_on_node_${SLURM_NODEID}"

#now remove all the old png files in this directory so as to save some space
rm *.png #ensure this happens on the local file system


#delete the video and folder from the local filesystem
cd /tmp/vrep_frames_at_${TimeNow}_on_node_${SLURM_NODEID}
rm *.mp4
cd /tmp
rm -r vrep_frames_at_${TimeNow}_on_node_${SLURM_NODEID}
#echo "done on ${SLURM_NODEID}"

This script launches V-REP and takes a series of screenshots as it runs; these are saved in temporary folders on each node where a job is running. Once V-REP has completed, either because the simulation ran to the maximum 1200 seconds of simulated time specified or because an earlier ending condition occurred within the V-REP simulation, the images are combined into a video and the video is transferred to the central scratch filesystem, where it is placed in the same folder as the output files from the same node's V-REP simulation.

One should also note a further useful tip, as per [Joaquin Silveira's advice here]: when multiple users need to run V-REP simulations on a cluster, each should edit portIndex1_port in remoteApiConnections.txt in their V-REP folder, on scratch, to an alternative value negotiated so as not to clash with the value used by anyone else. Assigning a unique port number in this way prevents V-REP instances from different users, which may share the same node, from conflicting.
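For reference, the relevant lines of remoteApiConnections.txt look roughly like the following (the default port is typically 19997, though the exact file contents depend on the V-REP version); each user would change the port number to their own negotiated value:

portIndex1_port             = 19997
portIndex1_debug            = false
portIndex1_syncSimTrigger   = true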

Summary of the Dynamic Self-repair and Omni-Pi-tent projects in 2020/2021

It's been a while since this blog was last properly updated; the last post was in January 2020, with many modules on the point of being produced. As the first of these was assembled, a number of bugs were discovered in the mechanical design, exacerbated by the changes in part tolerances caused by switching from Robox printers to Stratasys.

The first assembled of the main batch of robots

 

Screwing together and then testing the first of the production modules found it scarcely able to drive, despite no significant change in module weight compared to Robot 2. And with two full robots, this one and the second prototype, a proper test of the full hinge system was now possible; it failed abysmally, stalling under MUCH lower torque than calculated. And so, just at the point when a redesign and reprint of certain major elements was becoming necessary, the government panicked. R.H. Peck, for one, considers that Spring 2020 saw disproportionate and disastrous damage to mental health, civil liberties and the economy, all while failing horrifically to provide protection to the vulnerable who most desperately needed it. A precedented, that is Sweden-like, strategy focused on preserving normality amid rigorous hygiene and immense support for the vulnerable would, he feels, have worked rather better, even if not perfectly, both while waiting for effective vaccines and now that they are thankfully available.

 

Thanks to the heroic efforts of our Department's technicians we resumed work in mid-summer 2020, with R.H. Peck, for one, requiring physical presence to continue the project. Summer, autumn and winter were spent working very hard to make up for those 3 months of time lost to the very worst excesses of authoritarian intrusion. Hence, this update comes over a year after the previous one. As R.H. Peck's PhD project, the Dynamic Self-repair and Omni-Pi-tent work also involves the production of a thesis, which has taken further time and exacerbated the delay of this post.

 

A series of posts will examine the various different aspects of this work, both in hardware and in simulation, but to summarise:

  • The wheel mechanisms have been optimised for improved performance
  • The hinge mechanism has been completely redesigned to use involute rather than worm gears, more powerful 12V motors have been fitted and all the electrical alterations this required will be posted about later on
  • The hook mechanism has been optimised for better fit
  • We have a journal paper in progress on Self-assembly strategies
  • Self-assembly code has been converted from Lua in the V-REP sims to C for the hardware, and small-scale demonstrations of multi-robot in-motion docking and self-assembly have been performed with hardware (more to come)
  • The initial implementation of Dynamic Self-repair has been programmed, and is currently being compared against other self-repair methods in simulation
  • Work is under way on further improvements to Dynamic Self-repair, ensuring it will soon be able to cope with a wider range of failures
  • The compass navigation system has been redesigned to bypass the internal calibration routines of the BNO 055 chip, which were causing problems
  • We have found a way to easily run bulk V-REP simulations on a cluster, this too will get its own post

 

The first production robot, upgraded with new hinge and wheel mechanisms, with the second prototype in the background


 A video, below, shows the new robot running; note the improved manoeuvrability compared to the prototypes.

 

 


04 November 2020

The Project Continues

 Just a quick update to say there will be some significant announcements on here soon. Lots of new hardware and simulation work to discuss.


19 January 2020

Producing More Modules

I've spent the last several months producing, single-handedly, a further 5 Omni-Pi-tent modules. This has been a time-consuming process, but in the few breaks from soldering and part assembly that I've had I have been able to work on our self-assembly and Dynamic Self-repair simulations in V-REP. I'm using this post to provide a few updates on progress, share a few lessons taught by the cruel hand of repeat production, and make a few notes on the topic of work soon to be published.

Port faces for 5 modules being assembled


The 3D printing itself is a pretty hands-off process; while all parts have been designed such that they can be produced by a hobbyist-grade printer (indeed the prototype module demonstrated at TAROS 2019 was entirely printed by such a machine, a Robox), the Department now has some Stratasys F170 printers which can run whole batches of parts at once and pretty much never produce a warped print which would require re-printing. With the ivory coloured ABS material they make some wonderfully smooth gears, oddly enough all other colours give rougher textured parts, which mate with minimal friction and seem able to stand up to pretty substantial forces with ease.

Worm and bevel gears printed in the slippery ivory coloured ABS material


The trouble with the F170 printers is the GrabCAD software; it's not bad software as such, but it lacks a lot of the controls one would expect in an STL slicer program. Thankfully it recently gained an option, described in the software as "body thickness", to let the user adjust the number of perimeter layers around the infill. Extra perimeters are a weight-efficient way to significantly strengthen a 3D printed part; plenty of online sources show how several extra perimeters tend to give strengths almost as good as solid for only perhaps a quarter of the mass of plastic. Until this feature got added to GrabCAD it couldn't be trusted to produce certain small cogs, such as those to drive the omniwheels, and a few other parts designed to have minimal mass but subject to potentially a lot of wearing over time. GrabCAD still goes over the top on the amount of support material used, which slows down printing and costs quite a bit of money in support material. With some careful trial and error it became clear that the weight and perimetering profiles we had used on the Robox systems could be reproduced within GrabCAD, once GrabCAD added the "body thickness" options, albeit slightly heavier for most parts.

A comparison of gears printed with and without extra perimeter layers, note the difference between the nozzle path used for the perimeters on the white gear and the infill pattern on the yellow gear.


When a print job is set up with the GrabCAD slicer to use either an ABS or ASA material profile, then should you decide to switch materials before production the slicer reverts the files to default settings for things such as density of infill and number of perimeters (specifically high-density but not solid infill and only two perimeters); this can be a pretty dreadful default for the applications some parts are designed for. I would hope that future software versions would keep modified print settings the same when switching between materials. As a final note about GrabCAD, in one or two places, particularly where 3D printed parts are designed such that there is a hole within a sloping surface into which other items slide, such as behind the spikes on the outer rim of the docking ports, it tries to thicken thin-walled regions. Unfortunately it does this to some sharp edges which would be better cut off early than made thicker near their tips; thickening them like this prevents other parts slotting into place, hence some hand filing is needed afterwards in a small number of places.

Diagrams showing, on the CAD model of one of the printed parts, the approximate effect of GrabCAD's automatic thickening of thin walls. As another part slots into the hole with the sharp edge, filing was necessary after printing.


PCB soldering has gone well: a mixture of hand-soldered through-hole parts and hand-placed SMD parts with pump-fed, needle-applied solder paste. Over time the difference in speed between the two ways of attaching components becomes astonishing: from having never SMD soldered before, one finds within a few tens of sessions' practice that the full process, from peeling the plastic backing off the paper reel, turning the component the right way up in tweezer tips if it fell out at an unusual angle, and then putting an SMD component into place atop the pasted pads, can be reduced to around 20 seconds per component. Soldering with an iron for through-hole parts always seems to take a large fraction of a minute per pin, not counting the time needed to first insert the part into the through holes and sometimes blue-tack it from "above" to keep the part stable while soldering.

The Central and Port PCBs for a single module


Unlike the first prototype, the five modules now in production, and the second prototype (mostly white) built in the summer, have no free-hanging parts or wires soldered directly to boards. Instead the boards and components each have headers soldered on which connect to cable assemblies; for the small parts, such as microswitches, which have to be scattered in very physically specific locations, tiny shim PCBs join the switches to header connectors. This has made assembly much faster and much less prone to solder joint breakages.

Charging of the NiMH batteries has also been made quicker; the whole lot can now be charged in situ in a matter of hours, avoiding a laborious battery removal and reinsertion process.

The temperature over time of a module's 8 cell NiMH pack being charged in-situ within the confines of the thermally rather insulating core of the robot; note the only minor rise in temperature. The sharp spike is unexplained: it affected both sensors, shown in blue and red, located at different positions around the battery pack, but seems surprisingly brief for a temperature change.


In the project overall we've been running simulations of self-assembly under various circumstances, particularly concentrating on scenarios where the Omni-Pi-tent platform's unique set of capabilities is essential for effective performance. This has involved getting V-REP to run on our university supercomputer (the University of York's Viking system), so we can gather statistical data from repeated runs with randomised starting conditions. Clusterising V-REP is in some ways straightforward and in others extremely bothersome. V-REP 3.6.2, the latest version during the autumn when the cluster work was begun, doesn't seem compatible with any non-Debian-based Linux distro, and the supercomputer is indeed not Debian based. V-REP 3.5, on the other hand, runs, mostly. I am not certain of exactly how it happens, but it appears that multiple V-REP instances running on the same machine have some form of conflict which prevents the Lua scripts in more than one of them from writing out to a text file; it also prevents some of the instances running in parallel from terminating their simulation when sim.stopSimulation() is called. Fortunately this conflict appears not to occur if the instances are run on separate cluster nodes, note that is nodes, not cores: running sims on the same node but on separate cores triggers the conflict again. With the number of nodes available we can run about 40 simulations at once in parallel, the whole batch usually completing within under 12 hours. By that method we've gathered data on self-assembly performance for a variety of shapes in arenas of varied sizes, and a paper on a new method we have developed should be coming soon.

01 September 2019

Robot 2 Docking and Simulated Self-assembly

With the second robot working we've completed docking tests with it, focusing on repeating the docking to a spare port which the first robot achieved. We expect a slight improvement in docking reliability, being able to ensure a successful docking action, without needing to retreat and retry, from a wider range of initial points of entry to the illuminated 5KHz cone, due to some changes made to the phototransistor orientation angles on robot 2 relative to those on the first module produced.

Robot 2 also has better hinge positioning precision than the first prototype, made possible by a new FDM printer with accuracies closer to 0.1mm than the 0.2mm we had mostly used earlier; the off-white and dark grey parts of robot 2 were produced on the new printer in ABS rather than the PLA used elsewhere in the modules. This ABS, being smoother and more slippery to the touch, is especially useful for the worm gears and bevels.

Videos of robot 2 docking to the port are shown below.


We will soon build a third robot and perform docking actions between an approaching robot and a moving seed, as the omniwheel drive was designed to enable us to do.

Most of our work right now is focused on simulations related to self-assembly with swarms of Omni-Pi-tent modules. This is to be the subject of a journal paper soon to be published. Below we show a clip from one of those simulations.

17 July 2019

Docking Demonstration Video




TAROS 2019 went well, the demonstration managed to run in an unstructured environment remarkably well, and the conference centre definitely had an unusual magnetic environment for the BNO 055 compass.

Communication in this demonstration works entirely using infrared signals: digital signals at a 38KHz modulation frequency are used to communicate information such as compass bearings between the port and the robot, while the analogue intensity level of a 5KHz beacon signal allows the robot to navigate when lining up within the port's cone of emitted light.

The second robot is also up and running now, and has performed a very similar docking demonstration during a modular robotics workshop here at York.
The final stages of assembling robot 2, just before attaching the faces for ports 1,2 and 3.

The next step will be to perform docking between two robots, rather than using the spare port as a docking target. This will also involve docking to a moving robot, as had been demonstrated in simulation, which will be very important to Dynamic Self-Repair operations. This should be performed in the next few months, however there are some other aspects of simulation work to complete before docking to a moving module can be started in hardware.

04 June 2019

First prototype and TAROS 2019


The first robot driving freely, the motion from the omniwheels bears a remarkable resemblance to the robots in simulation

A timelapse view of hinge motion on the first module, note that it is both rolling and pitching at once

We've had the first robot fully functioning for a while, but found a number of design flaws which prevent it from acting as quickly or reliably as intended. These are mainly problems in the low-level I2C communication protocol between the (Linux operating system) Pi Zero W master and the (realtime microcontroller) ATmega slaves; a fix has been worked out to mitigate the ways in which both the Pi's BCM2835 CPU and the ATmega328P chips have problems in their I2C implementations, problems which become especially prominent when using a Pi as an ATmega328P's master. One of the key reasons for this is a well-documented bug in the BCM2835's I2C clock stretching feature: except under special circumstances the Pi does not respond properly, as per the specifications of the I2C protocol, to a clock stretch attempt by a slave device. Full implementation of our fix, however, required a redesign of the PCB, with some extra traces to enable negotiation between the Pi and each ATmega in advance of I2C communication events, so wasn't possible on the first prototype. A rough sketch of the general negotiation idea is given after the figure below.

One of the successful I2C data transfers between the Pi and an ATmega328P, this consists of firstly the Pi's write attempt to pass data down to the ATmega, then a request to have the Atmega send data back to the Pi
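Purely as an illustration of the general idea of negotiating before a transfer, and not the scheme actually used on the Omni-Pi-tent boards, an ATmega-side "ready line" handshake could look something like the following sketch; the pin number, I2C address and behaviour are placeholders.

#include <Wire.h>

const uint8_t READY_PIN   = 7;    // hypothetical extra trace back to a Pi GPIO
const uint8_t I2C_ADDRESS = 0x10; // placeholder slave address

volatile uint8_t latestCommand = 0;

void receiveEvent(int howMany) {
  // Data arriving from the Pi; keep this handler short.
  while (Wire.available()) {
    latestCommand = Wire.read();
  }
}

void requestEvent() {
  Wire.write(latestCommand);  // echo back, standing in for real telemetry
}

void setup() {
  pinMode(READY_PIN, OUTPUT);
  digitalWrite(READY_PIN, LOW);   // not ready until I2C is configured
  Wire.begin(I2C_ADDRESS);        // join the bus as a slave
  Wire.onReceive(receiveEvent);
  Wire.onRequest(requestEvent);
  digitalWrite(READY_PIN, HIGH);  // tell the Pi that transfers may begin
}

void loop() {
  // Before any long, interrupt-heavy work, drop the ready line so the Pi
  // holds off rather than starting a transfer the ATmega would have to
  // clock-stretch its way through.
  digitalWrite(READY_PIN, LOW);
  // ... timing-critical work would go here ...
  digitalWrite(READY_PIN, HIGH);
  delay(10);
}

The Pi-side code would simply poll the corresponding GPIO and only start an I2C transaction while the line is high, keeping the problematic clock stretching path from ever being exercised.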


The effect of the I2C crashes is especially visible when controlling the hinge microcontroller, which is programmed to hold the hinge's position when I2C fails and is forbidden from resuming motion until the bus recovers. Although the hinge is fully self-contained in terms of position tracking and can accurately accomplish a move from any given pitch-roll combination to any other without requiring input from the Pi, we decided that, even though it could keep moving in the absence of I2C communication from the Pi, the hinge should stop if it detects that the I2C bus has failed; otherwise it might keep moving to a previously desired position while the Pi is attempting to send it to another location but unable to get I2C messages down to it.

Below are some more still images taken from video of the first prototype:








The first prototype demonstrates raising the 2DoF hinge and rolling it

The first prototype shows the torque delivered by the hinge by lifting itself up on its front and rear docking ports




The second prototype is now nearing completion, and also contains some modifications to ease the manufacturing process for modules and avoid vulnerable wire-to-motor solder joints which have dogged the development of the first unit. Videos of that second module, including some where it docks with the first prototype, should be posted here during the weeks after TAROS.

We will be bringing the first prototype to TAROS 2019 to provide a live demonstration of a single module as part of our poster presentation on the Friday morning. We have produced an autonomous docking demonstration in which the prototype wanders randomly until it detects the 5KHz light of a docking beacon (made from a spare docking port) then aligns compasses, based on IR 38KHz messages carrying the recruiting port's compass reading, and moves in to connect hooks and dock. Videos of this docking demonstration sequence will be posted on this blog shortly after TAROS.


In the meantime here is a brief video of the first prototype's early testing, and some images of the first prototype's PCBs wired up outside the structural body during the development and debugging of the electrical design and low level software.




Testing of the 4 port PCBs and port faces together with the central PCB to which the Pi is attached (from below)

A closeup of one of the port PCBs and the internal side of the 3D printed face

Testing the second robot with components spread across the desk to enable easy access for oscilloscope measurements