
How The Field Museum Unlocks New Research Possibilities with Vision AI

By Roboflow

Summary

Key takeaways

  • 20 Years → Few Weeks for Imaging: The Field Museum imaged 19,000 specimens in 20 years traditionally; with DrawerDissect, Elizabeth imaged 13,500 in just a few weeks. [00:09], [32:26]
  • AI Measurements Match Calipers: Segmentation masks enable precise length/width measurements with R-squared nearly 1, validated against manual caliper measurements on 70-80 tiger beetles. [20:23], [21:16]
  • Verifies Bergmann's Rule in Beetles: Analysis of thousands of tiger beetles shows strong support for Bergmann's rule (body size increases with latitude) in four genera such as Cicindela. [27:09], [29:08]
  • Classifies Unknown Species Accurately: A custom model trained on masked Cicindela images correctly identified most unknown specimens, matching human identifications with few false negatives. [34:27], [35:13]
  • Zero Coding to AI Pipeline Expert: Elizabeth had no Python or AI experience before this project but built DrawerDissect using Roboflow's intuitive interface as a field ecologist. [38:25], [49:38]

Topics Covered

  • Structural color endures centuries undecayed
  • Bergmann's rule holds selectively in tiger beetles
  • AI imaging accelerates 20 years of work to weeks
  • AI rivals human species identification
  • Ecologists bootstrap AI without prior coding

Full Transcript

In the last 20 years, since, you know, imaging has become a big interest of the Field Museum, we did about 19,000 images in 20 years. I was able to do 13 and a half thousand. So, getting close to that number in much less than 20 years. In fact, in just a few weeks' time.

>> Hello, and let's talk about bugs, computer vision, science, and how those things are coming together to unlock brand new possibilities for researchers. Today we are joined by Elizabeth Postema of the Field Museum of Natural History in Chicago, and we're going to be diving into a project that she, along with many other researchers, created recently. It's super cool and I am happy to have the chance to present it to everybody. To get us started, Elizabeth, would you like to introduce yourself?

>> Absolutely. Thank you both for the many introductions, but just to introduce myself personally: I'm Elizabeth Postema. I'm a postdoctoral researcher here in Chicago at the Field Museum. I've been here about two years. I graduated with my PhD in animal behavior from UC Davis, and I'm really obsessed with two things. One of them is insects, and hopefully by the end of this I can convince you that insects are worth your time and are very cool. And the second thing is the ecology and evolution of color traits. Those two things come together and mean that I need a lot of data, because insects are a very diverse group, and so AI is really helping me kind of get through those data points.

>> All right. Well, thank you so much for joining us. To get us started, I have a picture over here that I think it's safe to say is my favorite picture that I've seen this quarter, at least this past year. Can you tell us what's happening in this photo?

>> Absolutely. So, well, first I just wanted to ask a question for the crowd.

If anyone has actually been to the Field Museum in Chicago before, I don't know.

You can do your little hand raise icon.

We'll put it in chat.

Any hand raises? Have none of you guys been to the Field? Okay, a few. Keith, yes. Okay, no, would love to. Okay, great.

Yeah, come on by. I'm there up on the third floor. But yeah, so the Field Museum and other natural history museums hold really immense collections of natural history objects, and specimens in particular. And so this is actually a photo of the Smithsonian that I totally just took off Google, because it's a really great demonstration of how insects are stored. We have these big collection boxes, and they're usually separated into little trays, typically by taxonomic order. So you might have one species or one genus per tray, and often it's also separated by location, so where in the world these things were collected. And as Patrick and Jackson both alluded to, some of these collections are huge. At the Field Museum, first of all, we actually don't know the exact number of how much stuff we have, just because we have such a huge collection. But we think, at least for pinned insects, it's somewhere around 8 or 9 million, and it's closer to 12 million if you account for all of our wet specimens.

So, things that we store in jars of alcohol. And so this massive amount of data is kind of both a blessing and a curse. It's a blessing because there's a lot you can learn about the natural world from preserved specimens. Some of these are decades or even hundreds of years old. One of the oldest ones we have in the collection was actually collected by Charles Darwin himself; it has his signature on it. So there's a lot of cool stuff in there. And if you're an evolutionary ecologist like me, I'm really interested in exactly what you're seeing on the screen, which is a whole bunch of different shapes, a lot of different sizes, and most importantly for me, all sorts of different colors. And museum specimens are nice, insects in particular, because most of them are structurally colored. What that means is that the color patterns on the outside of their body are not from a pigment, so they don't degrade over time. So some of those blue morphos you see in the bottom of the picture may be decades old, and they look exactly how they would have looked, you know, when they were collected. And so that's really handy for me.

And so historically, the way you would image and digitize these collections would be to pull out a drawer, bring it up to the imaging lab, take each individual specimen off from the unit tray or from the drawer, and then you'd pull the labels that are carefully arranged on the pin. Those labels contain a lot of the metadata, so things like where it was collected, who it was collected by, when it was collected, and some other identifying information. More recently, you might find a QR code on there so we can keep track of individuals. But all of that information is useful. So normally you take those off, transcribe it, and put it in a museum database. We use something called EMu. It's very low tech relative to the kind of stuff you guys are probably used to working with. I think it hasn't changed since it was developed; it looks like Windows in the '90s. It's pretty crazy. I don't have any photos of it, but just believe me, it's pretty funny. And so you do that part, and then you take a photo of it. Usually you'd get a dorsal photograph, maybe the underside if there are some important identifying features. And why do you guys think it might be useful to have images and metadata digitally available? Any ideas?

>> We see Zach mentioned quicker than going to the drawer.

>> Yes, absolutely. And especially quicker if you are a researcher who lives in, say, Germany. We actually just had a visiting researcher from Berlin. It's definitely a lot quicker than him flying all the way here and pulling out a drawer. So one thing is to make collaborations across countries, across institutions, much easier. A huge benefit of these collections is that they allow people to reference the taxonomic identity. So if you're collecting a new species, or what you think might be a new species, you may want to check the existing species that are already here to make sure: oh, these have characteristics that are similar, or not, and maybe I do have a new species. So that's one thing. We get image requests all the time. We just got a new collections manager, and that's one of the main things she's dealing with. So just logistically, for curation purposes, it's really useful to have those things digitally available. And then Charlotte mentioned to quickly search for features, colors. So yeah, once you have images, you start to be able to do morphological analyses. You can start to get things like color, size, shape, all of that kind of stuff. And then I'll fill in one other thing that I think is really pertinent now: if we know what was there when it was collected in a particular location, it's really helpful for things like conservation. We definitely have endangered and, you know, recently extinct insects in our collection, and we can look there and say, how has their population changed over time? Have they moved to different ranges? Have they stayed in the same place? What's going on? So there's a lot of things you can learn from museum collections, and the easier those data are to access, the easier it is to do that kind of research. So yeah, that's kind of the background of collections and why digitizing is important.

>> Yeah. You know, before we move on, could you also perhaps talk a little bit about, if we take a step back, why would we want to quickly search for different features and colors across a broad set of different specimens? Were there any kind of challenges to doing research previously that you could talk about?

>> Yeah, absolutely.

So the thing with studying morphological traits in something like insects is that you have a bunch of different levels of variation. On one hand, a single species may be really polymorphic, and that just means it might look a lot of different ways. You might have redder ones, greener ones, and so on and so forth. The next level up, you know, at the family level, so a little bit higher up that taxonomic chain, you're also going to have variation between one kind of beetle and another closely related beetle. You go one more up to something like order, so all butterflies or all beetles, for example, and suddenly you get these compounding layers of variation, and ultimately evolutionary ecologists often have to choose what level of complexity they want to get at. So if you're looking across an entire order and you want to know, hey, when did this blue color pattern evolve, for example, normally you would basically have to say, well, I'm not going to get all of the data from everyone. You have to choose maybe one image for, you know, a hundred representative families. So you're not really getting a very fine-scaled, detailed analysis; you're just getting the very top layer of that variation. The other way you could do it is just go, okay, I want to know how this trait evolved in this single species, and then you look at all of the specimens for that species to quantify variation that way. So there's kind of a trade-off in how broad you can go versus how much you can characterize variation at that lower level. And I guess, even taking a step back, why do I care about color patterns in the first place? Maybe that's what Patrick is really getting at. Color is really important in ecological terms. It plays a lot of roles. It's one of the main ways that insects communicate with predators. So think of things like camouflage, or if you've ever seen a ladybug, it's bright red and black; it's saying, "I'm toxic. Don't eat me." And there's a lot of other different crazy things, like mimicking broken sticks or bird poop or leaves. So there's a lot of different strategies that insects use to avoid getting eaten. And, you know, color, shape, morphology, all of those things are important. Some of the other things that color does is thermoregulation. For, you know, cold-blooded things like insects, being dark is actually probably pretty good if you're in a colder environment, because it allows you to heat up faster. For us, you know, we can throw on a coat. For them, they really rely on those physiological processes. And so there's all these different axes that color is interfacing with: physiology, communication, sometimes sexual communication, sometimes predator defense. And so being able to know how those traits have evolved over time gives us a really cool insight into whether there are bigger patterns that shape those systems. So hopefully that makes sense.

>> Awesome. Yeah. Well, let's start taking a look at the solution that you and all these other researchers at the Field Museum and other organizations worked on. I have a few slides here with some images that you shared with me ahead of time. From my perspective, it feels like there's a couple of different things you've solved with computer vision: one being, you know, understanding the traits, the attributes of the specimens, and then there's going to be another thing about transcribing data. I think this is maybe part of the first kind of solution. Could you just introduce DrawerDissect, the software that you created, and then what we're seeing here?

>> Yeah, absolutely. So, just to reel it back even a little bit further, I got involved with this project just as a new curator of insects came in. This is Bruno de Medeiros. He's basically my adviser for this project. He joined the Field Museum just a year ahead of me and pretty immediately secured funds to get a really cool imaging rig called the GIGAmacro. And what this helps with in terms of imaging (that's sort of one pipeline that needs to be optimized) is that now we don't have to take every single specimen out to image, which takes a very long time. Instead, we can basically plop our drawer or a set of trays down on a platform, and the camera will move automatically to boundaries that we've set. So for that you set, you know, the corners of where all your specimens are, and it takes photos at each station. It also focal stacks, so you get the Z-axis; everything is nice and in focus. We have this big-as-heck telecentric lens on it, and that just means that there's no warping or parallax. And so you end up with this hilariously large photo. It's about seven or eight gigabytes, the TIFF form of that. So that's like the size of a small Steam indie game, basically. And it's great because you have everything in one place. You have, you know, potentially hundreds of specimens, all really high quality, high resolution. You can zoom in and it's as if you took a photo of just a single specimen. But of course that doesn't actually solve our problem, because ending up with a huge image with a bunch of stuff on it is cool, but it doesn't actually get you to the specimen level. And for that, that's really where I came in and where Roboflow comes in, because computer vision models like detection and segmentation are exactly what you need to break down that whole drawer image into individual trays. You can actually see right here, this is what I mean by a unit tray. And you can see at the top there, we've actually added these special labels that pop up when we're imaging and then fold back down when we put them back into the tray. So for example, these are a six-spotted tiger beetle. This family is really beautiful and iridescent. And these are actually in the Nearctic region, so that does mean they've been collected probably in the United States. So these guys are running around; many of you have probably walked past them before. And so one of the philosophies of DrawerDissect is to break down steps as finely as possible. So I don't just go from drawer straight to specimen. We go down to unit tray first, because it has a lot of information that is going to apply to all of those specimens. And then you can see here this object detection model, "bug finder," aptly named, is doing exactly what it sounds like it will do. It's finding all of the insects in a tray, putting a bounding box around them, and numbering those specimens, which is useful later if we want to associate them with metadata or with a QR code or things like that. And then I didn't want to just stop there. A single dorsal image is great, but pretty much any kind of analysis of features of an insect is going to involve painstakingly outlining the body or different anatomical features on the insect. And so segmentation: at this point, everyone on their phone has this, where you just press on an object and it outlines it for you. Same principle here. We're using a segmentation model, "bug masker" (I know, I'm really creative with these names), to get sort of the main chunk of the body, ignoring the legs and antennae for now, just because of how variably positioned they are. So I'm just getting that main head, thorax, abdomen. And this is trained on, gosh, at this point probably thousands of different insects. We started initially just with tiger beetles, because that's what we started imaging, but at this point we've expanded to a bunch of different species. I think maybe there's a photo showing segmentation on different things coming up.
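For readers who want to try the same tray-to-specimen breakdown, here is a minimal sketch of calling a hosted Roboflow detection model and cropping out each predicted box with Pillow. The workspace, project, and version names are placeholders, not the museum's actual "bug finder" endpoint.

```python
from PIL import Image
from roboflow import Roboflow  # pip install roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
# Placeholder workspace/project/version; substitute your own trained detector.
model = rf.workspace("your-workspace").project("bug-finder").version(1).model

tray_path = "tray_001.jpg"
result = model.predict(tray_path, confidence=40, overlap=30).json()

tray = Image.open(tray_path)
for i, det in enumerate(result["predictions"]):
    # Roboflow returns center-based boxes: x, y are the box center in pixels.
    x, y, w, h = det["x"], det["y"], det["width"], det["height"]
    crop = tray.crop((x - w / 2, y - h / 2, x + w / 2, y + h / 2))
    crop.save(f"specimen_{i:03d}.png")  # numbered crops, one per detected insect
```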

>> Yeah, probably. Yeah, before we look at that, maybe this is a good point. I think this might be related to what's happening after, or perhaps this is answering the question of why segment the bodies. I love charts. I don't know what's happening in this chart, but is this connected? Could you let us know about that?

>> Yeah. So one of the first things that's useful about segmentation is you can create masks of the body and basically say, hey, beetles are roughly oval-shaped. If you want to get the length and the width measurement, you can just do that, because you can essentially say, "Hey, what's the longest line between two points along this contour?" And then for the width, usually it's pretty accurate to just say, "Hey, what's the longest perpendicular line to that initial length measurement?" And I actually verified this. Me and one of my interns took some old-school calipers and went in and measured probably 70 or 80 tiger beetles by hand, which took a very long time. Roboflow combined with DrawerDissect does this very fast by comparison. And so the charts you're looking at here are just that validation. On the x-axis you have the manual length in millimeters, and on the y-axis you have the length output by DrawerDissect. And if you look at that line of correlation, it's very tight. You can see we have some really big ones; those are Manticora. They're really cool. I'll see if I can find an image of them for you guys later. But more importantly, the R-squared values, so how good the fit of those two measurements is, are very high. It's almost one-to-one. And so we're getting very accurate measurements of, you know, thousands of specimens at once, when it would normally take hours to do this kind of work.
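The length and width logic she describes (longest chord across the mask outline, then the widest extent perpendicular to it) can be sketched with OpenCV along these lines. This is an illustrative reconstruction under those assumptions, not DrawerDissect's actual code; the 105 px/mm scale comes up later in the talk.

```python
import itertools
import cv2
import numpy as np

PX_PER_MM = 105.0  # telecentric lens scale mentioned later in the talk

def length_width_mm(mask_path: str) -> tuple[float, float]:
    """Estimate body length and width (mm) from a binary specimen mask."""
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)            # largest blob = the body
    hull = cv2.convexHull(contour).reshape(-1, 2).astype(float)

    # Length: longest line between any two hull points (max Feret diameter).
    p1, p2 = max(itertools.combinations(hull, 2),
                 key=lambda pair: np.linalg.norm(pair[0] - pair[1]))
    length_px = np.linalg.norm(p1 - p2)

    # Width: extent of the full contour perpendicular to that length axis.
    axis = (p2 - p1) / length_px
    normal = np.array([-axis[1], axis[0]])
    proj = contour.reshape(-1, 2).astype(float) @ normal
    width_px = proj.max() - proj.min()

    return length_px / PX_PER_MM, width_px / PX_PER_MM
```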

>> Beautiful. All right. And actually, I see we had a question from Charlotte about some of the insects being pinned over their descriptions and that kind of thing. That's a little bit related to maybe the next slide. I'll say we can of course return to that question more in depth later. But here, what we're looking at, I think, is part of what seems to me to be the second problem, which is related to the pins, the labels. I see there are numbers, there's printed text, and sometimes you're dealing with handwritten text, which at this moment I'm choosing to believe is Charles Darwin's handwriting until somebody proves me wrong. Could you let us know what we're seeing here? What is this solution that you created with these AI models?

>> Absolutely. Yeah. So, as you guys saw in that tray image that I showed you already, there's a lot of information in these images at basically every level. And so as much as possible, I'm trying to squeeze as much data, whether it's text or features of the specimen or other things like that, how many specimens are in a tray, as much information as I possibly can, out of these images. And so here you're seeing the pre-processing that I do to get text information specifically. As I mentioned, a convention of insect drawers is that they have labels that describe the species, or whatever the lowest taxonomic unit it's described to. So in this case, you're seeing Cicindela formosa, and on the right, you're seeing this moth. And then for the Field Museum more specifically, we also have things like the broad biogeographic region it was collected in. So again, here this is NEA, this is Nearctic. We also have things from, you know, NEO, the Neotropics. And then on the left side, in the purple, is a barcode that we've generated to keep track of individual trays. So ultimately, at this point, from the Field Museum perspective, all we actually need is the barcode, because the other information is already associated. But on the right you can see that I want other people to be able to use this too, people who may have much less standardized data. So in this case this is a handwritten label, not by Charles Darwin. I'm sorry, Patrick. This was actually by a really cool collaborator of mine, Lucy Gorani. She's a graduate student at Ohio State who we actually just met at an entomology conference. I did a talk describing this work and she came up to me and was like, I have 17 drawers of moths. Can we do this? Can I do this with these moths? I was like, sure. So she took her 17 drawers in a car and came to the Field Museum from Columbus, and we imaged all of them, and she's actually finishing up her dissertation with those images and that research right now. But that's a bit of a tangent. Sorry, not Charles Darwin, but a very cool researcher. Anyways, at this point, this is a pretty simple detection model saying, "Hey, find these features of text that you might see in a drawer or in a unit tray." And then I end up using large language models with specific prompts to transcribe that information. I find it's just easier to give large language models as little noise as possible, because if I gave them the entire tray, I'm sure it would just pick up all sorts of random stuff; it might think that the insect legs are letters, stuff like that. So as much as you can, say, this is a single image of text, and I also up the contrast, put it in black and white, do some pre-processing to make it really easy. And then that returns a nicely formatted spreadsheet, basically. And that's a nice feature of DrawerDissect: all of the spreadsheet outputs at the final step get merged together into a single, well, actually into three data sheets that are compatible with EMu, our databasing system, and so they can be uploaded and associated with the images and so on.
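A rough sketch of that crop-then-transcribe idea: isolate each detected label, boost contrast, hand only that small image to a vision LLM, and collect the answers into a spreadsheet. The prompt text and the `llm_call` hook are hypothetical placeholders for whichever model the pipeline is configured to use.

```python
import base64
import csv
from io import BytesIO
from PIL import Image, ImageOps

PROMPT = ("Transcribe exactly the text in this cropped museum label. "
          "Return only the text, with no commentary.")  # hypothetical prompt

def preprocess_label(crop: Image.Image) -> Image.Image:
    """High-contrast grayscale crop so the LLM sees text and little else."""
    return ImageOps.autocontrast(ImageOps.grayscale(crop))

def to_base64(img: Image.Image) -> str:
    buf = BytesIO()
    img.save(buf, format="PNG")
    return base64.b64encode(buf.getvalue()).decode()

def transcribe(crops: dict[str, Image.Image], llm_call) -> None:
    """llm_call(prompt, image_b64) stands in for whatever vision-LLM client is used."""
    with open("label_transcriptions.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["label_id", "verbatim_text"])
        for label_id, crop in crops.items():
            text = llm_call(PROMPT, to_base64(preprocess_label(crop)))
            writer.writerow([label_id, text])
```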

>> All right. I couldn't have explained the need to first use one model to isolate text and then pass it to an OCR or language model better. I'm going to be using some of that phrasing in the future.

>> Yeah.

>> There's one more chart here. I love this chart as well, even though I don't know what's going on. Could you let us know what we're seeing here?

>> Sure. Yeah. So right here is basically a demonstration of the fun things you can do with thousands of specimens' worth of data. And so, just to hopefully tap into some intro-bio class knowledge, if anyone remembers a thing called Bergmann's rule: it's a broad biological principle. I know some of us were more recently in, you know, undergrad bio than others, if at all. But Bergmann's rule is essentially this observation that things tend to increase in size as they move away from the equator. So as latitude increases, there's this sort of general observation that you see things increase in size. This has a lot to do with thermodynamics; reducing your surface-area-to-volume ratio helps you keep warm, and then the opposite for warmer climates. But it's really just a principle, an observation that people have tried to validate or disprove over time. And so one cool thing you can do with huge data sets like this is actually look at those basic, 101-type rules or observations that you learn in class and say, "Hey, is this actually true?" And that's something I was able to actually do here. So if you look at the x-axis there, it's going to be latitude. This is absolute latitude, so going from 0 to 60, or 0 to negative 60 if you're in the southern hemisphere. And then on the left is just a logarithmically transformed body size, just because there's a lot of variation in body size, so it accounts for that kind of non-normal data. And then on the top, in the italics, you can see the names of different genera. So Cicindela is clearly one we have a lot of; you can see a ton of data points. And then going down from there, these were some of our most common genera in the tiger beetle collection. And so you can actually see in four of the genera there is quite strong and actually statistically significant support for this idea that body size increases with latitude. So that's really cool. You know, this is pretty quick and dirty, so there's some bias; you're probably going to see stronger correlations in the genera where you have more observations. But if you look at these other ones, it's not across the board. So for the other genera, all of the ones with the black lines are actually pretty weak associations, and in Ellipsoptera you may actually start to see a trend in the opposite direction, and in another genus as well. So I'm not going to go too far into the crazy biology and evolutionary reasons for this. All this is to say is that once you have a lot of data, there's a lot you can do with it.
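The per-genus test she describes boils down to regressing log body size on absolute latitude within each genus. A minimal sketch, assuming a hypothetical CSV export from the measurement pipeline with genus, latitude, and length columns:

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical export from the measurement pipeline: one row per specimen.
df = pd.read_csv("specimen_measurements.csv")  # columns: genus, latitude, length_mm

df["abs_latitude"] = df["latitude"].abs()
df["log_length"] = np.log(df["length_mm"])

# Per-genus ordinary least-squares fit of log body size against absolute latitude.
for genus, grp in df.groupby("genus"):
    fit = stats.linregress(grp["abs_latitude"], grp["log_length"])
    print(f"{genus:>15}: slope={fit.slope:+.4f}  r={fit.rvalue:+.2f}  "
          f"p={fit.pvalue:.3g}  n={len(grp)}")
```

A positive, significant slope for a genus is consistent with Bergmann's rule; a flat or negative slope is not.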

>> I am convinced that I will someday see a Postema rule in some textbook going forward, just based on this short conversation. All right, this is one other... oh, actually, before we keep talking about some results and what we see here, something that perhaps we haven't covered yet is: how many specimens have you analyzed? Like how many drawers, or what's the scope of the results that you've seen from having this DrawerDissect solution? I think that might help us lead into this slide's conversation, I'm hoping.

>> No, I think it totally will. So, just in tiger beetles alone, it's 44 drawers. And as far as insects go, that's a pretty small family. It was kind of a nice, bite-sized group to work with. They're also really beautiful and have a lot of interesting color patterns and, you know, features of their natural history. And so we ended up being able to image 44 drawers in about two or three weeks, which is insane. That is not the usual rate at all. For context, those 44 drawers were about three and a half thousand specimens. Again, three and a half thousand in a couple of weeks, with some really cool interns I had through the Women in Science program at the Field. And then since then, I think we've gotten to imaging, on and off, something like 150 drawers. And so that's getting up to, gosh, whatever, 150 times probably two or three hundred per drawer, let's say. So if anyone's really quick with math, that's getting into the tens of thousands of specimens. And just for reference, in the last 20 years, since imaging has become a big interest of the Field Museum, we did about 19,000 images in 20 years. I was able to do 13 and a half thousand, so getting close to that number in much less than 20 years, in fact in just a few weeks' time. And so you can see how, with the two-pronged approach, both high-throughput imaging and high-throughput image processing, you start to get truly crazy levels of data and images really fast.

>> Awesome. All right, man. I'm just still kind of thinking about this number, 20 years versus two or three weeks, a couple of weeks. All right, with that said, what are we looking at on this slide? I think we had spoken once previously and you had told me a couple of stories about identifying different specimens. Is this related to that story that you had told me once upon a time?

>> Yes, that is directly related. Yeah.

So one thing about DrawerDissect is that in the results you have these nicely masked-out specimens, where the only information on the image is information pertinent to that specimen. And I'm sure some of you guys have worked with classification models before; they tend to get really tripped up by things like shortcut learning. They'll cue into stuff that isn't actually relevant to try and give you the category it thinks it belongs to. And so the less information you can give, or rather the more specific information you can give, the better your classification models are going to be. So for our case, I think museum specimens are going to become a huge repository for training species identification models, or, you know, identification models across entire families or even orders. And so we actually tried that out. We used these masked versions of images, and this is for the entire genus Cicindela. I went in and found this group at the end of the cabinet, this tray of Cicindela that had not been identified to species; it was unknown in our collection. And so I went through and, you know, used traditional methods, a dichotomous key; I looked through and did my own identifications. And you can see in the table those are my results under human identification. And then we trained a model. We actually didn't use Roboflow for this. I forget the exact model architecture we were using, but it's up on a Hugging Face page associated with the preprint, so you can go check that out if you want. And basically we found that our classification model did a really good job. The things with the bold outline are things that it got right, at least in line with my identifications. And there were really only two where it didn't misidentify per se; it just said, I don't know. It just said, I'm not going to answer because I don't have a good idea of what this is. And one of them I actually don't fault it too much for, because it was this kind of obscure Japanese species that even I had trouble knowing if I got it exactly to the right subspecies. So I'm not surprised we didn't have any of it in our collection, so it was not trained on this; obviously it's not going to find the right thing if we don't even have it in our collection. The other one, the silvicola, was just a true kind of false negative. And so not perfect, but pretty good. So you can see how valuable these kinds of images are, especially once you get thousands of them, for training really good ID models.
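She notes the classifier was trained outside Roboflow and is published on Hugging Face alongside the preprint. As a generic illustration of the "say I don't know rather than guess" behavior, a confidence floor on any image-classification checkpoint looks roughly like this; the model name and threshold below are placeholders, not her actual model.

```python
from transformers import pipeline
from PIL import Image

# Hypothetical checkpoint name; the talk only says the model lives on Hugging Face.
classifier = pipeline("image-classification", model="your-org/cicindela-classifier")

CONFIDENCE_FLOOR = 0.80  # below this, report "unknown" rather than guess

def identify(path: str) -> str:
    """Return the top species label, or 'unknown' if the model is not confident."""
    top = classifier(Image.open(path), top_k=1)[0]
    return top["label"] if top["score"] >= CONFIDENCE_FLOOR else "unknown"

print(identify("masked_specimen_0042.png"))
```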

>> Yeah, that reminds me of some other industrial use cases I've seen, where you have people doing these, how should I say it, subjective tasks, where they need to use their eyeballs and minds to look at some data and then produce a judgment. And if you can give that person a little bit of assistance, I think it reduces a lot of the mental fatigue for those people, where they don't have to do all of the identification and all the judgment calls on their own. It's nice to have a friend who's like, "Well, hey, I looked at the images and I kind of got it most of the way there. You might want to still review this and take a look at it, but at least we reduced some of that mental fatigue along the way."

>> Yeah, I think probably a good goal that I can see for species ID models is not to replace taxonomists, because for a lot of things you need to actually physically handle the specimen. But I think it gets through a lot of the common species that it would probably not take that long for a human to identify anyways, but because there's so much of it, it takes a while. And so for things like this ocelada, there's a bunch of these; they're really common in our collection. It's nice to be able to just go through, you know, an unsorted sample and be able to say, "Okay, we know what all of those are," and then get to the rarer things where we're like, ah, we might actually have to go in and look at some more defined features.

>> I think that, you know, we had one more slide about kind of going into some of the behind-the-scenes stuff that's happening. Elizabeth, you had mentioned having a live demo that you might be able to share. Do you still want to do that? If so, I can stop sharing my screen.

>> Sure. I can just do a really quick little demo, kind of just showing what it actually looks like in practice to deploy these. Okay, so let me share my video. Okay, so here's the main structure. Now, disclaimer: I had no coding or programming experience whatsoever before starting this project. So if it looks really messy and clunky and taped together, forgive me. It works, and that's the most that an ecologist needs. And it does work.

So, essentially what you can see here is we've got it set up where there's sort of a main processing script, and then all of my configurations are in this YAML file. That includes things like API keys, model versions, all of that kind of stuff, as well as the prompts for text transcription. And all of those are editable; you can go in there and, you know, mess with some of the memory settings if you want to do parallel processing. There's a lot of features, and you can dig into that more if you look at the GitHub too. And so the main structure here is that if you go into this drawers folder, you can see that there's a place for unsorted, and that's where you would put that initial full drawer image. And then once you set it to run (you can actually see I've already sneakily done this in advance) it does the first step in the process, which is to resize that image, because it's really big. I probably can't even open it on this computer, or my computer will break. I'm just on my crummy laptop right now. And it automatically puts it in this folder and then makes a ton of different places for where all of those outputs of the pipeline are going to live. So if we go here, we have that full-size image, and then hopefully this resized image I can actually open and kind of show you a little bit more. Yeah, so this is kind of what the full-size image looks like. Obviously it's compressed, so the resolution isn't as good, but this is the start of the pipeline here. And so there's a couple of nice features, just quality-of-life stuff that I've built in over time. One is that you can do different steps in different sequences, so you can combine them into unique workflows, or just do one, or you could just run the Python process-images command with "all." And so if you wanted to just go all the way down the line and process everything, that's what you would input. In this case, I'd probably get to something like find trays, and then crop trays. So let's see if that works here. And then if I had multiple different drawers, I could put this flag here and say only do, you know, this FMNH 3457. At the moment, I just have one test image in there, but you can just run things on individual drawers and you can combine workflows. For single drawers, you can rerun stuff. So if you've updated your model and you don't want to go back through and delete everything, you can say, "Hey, rerun this drawer with these steps, with this new model that I've put in the config file." So let's see if this works. Since it's a live demo, it probably won't, but that's fine.

And so it's thinking, and yeah, you can see: aha, error. Error. Excellent. That's fine. I think it's probably because I didn't put a version number in this. I didn't. Okay. So here you can kind of sneakily look behind the scenes and see all of the different model endpoints. For all you Roboflow people, this should look really familiar. You can put in the confidence, the overlap, all of the stuff that you guys naturally output. Okay, so let's go ahead and use, maybe, let's use version... I'll use version eight. Okay, let's try that again.

>> We are very familiar with the demo gods not smiling on us during our weekly sessions like this. Totally normal.

>> And so you can already see it's loading those API keys. It's found 16 trays in this image in about 4 seconds. And then now it's going ahead and resizing those images and cropping them. The resizing step is usually the one that takes the longest, especially on my computer. But if we were to go in here and look in this coordinates tab, you can now see that it's found a bunch of trays and given me the coordinates for all of those trays. And then it scales them back up to that original image to crop them out into individual trays. So let's see if we can actually see it doing that. This step usually takes a while, because again it's taking that full image. It would be faster on my work computer, but here we are. Yeah, you can kind of see the broad logic. And so if you were to run all of these steps, it would basically go through and do each of these, with the output of one thing being the input for another.
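For readers curious what that kind of YAML-configured, step-chained runner can look like, here is a minimal sketch. The step names, config keys, and CLI flags are invented for illustration and are not DrawerDissect's actual interface.

```python
import argparse
import yaml  # pip install pyyaml

# Hypothetical step functions; in a real pipeline each step reads the previous
# step's output folder and writes its own (resize -> find trays -> crop trays...).
def resize(drawer, cfg): print(f"resizing {drawer}")
def find_trays(drawer, cfg): print(f"finding trays in {drawer} (model v{cfg['models']['tray_finder_version']})")
def crop_trays(drawer, cfg): print(f"cropping trays from {drawer}")

STEPS = {"resize": resize, "findtrays": find_trays, "croptrays": crop_trays}

if __name__ == "__main__":
    p = argparse.ArgumentParser()
    p.add_argument("steps", nargs="+", choices=list(STEPS) + ["all"])
    p.add_argument("--drawer", default=None, help="run a single drawer ID only")
    args = p.parse_args()

    cfg = yaml.safe_load(open("config.yaml"))      # API keys, model versions, prompts
    steps = list(STEPS) if "all" in args.steps else args.steps
    drawers = [args.drawer] if args.drawer else cfg["drawers"]

    for drawer in drawers:
        for name in steps:                          # output of one step feeds the next
            STEPS[name](drawer, cfg)
```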

So, I think that's all I really had to show for now.

>> Beautiful.

>> I don't know. It's probably going to take a while to crop these trays out.

>> Yeah, we have a couple of other things we could look at, and maybe, if we want to, we might be able to come back and take a peek later on.

>> Yes.

>> Yeah, let me share my screen again. I actually have a few slides that might be related to some additional questions that we got from the audience as well. Now, for reference, there was actually a question from Louis about: for your body size, length, and width measurements, do you get those by calibrating to a certain ratio each time, or is it preset from the distance of the camera rig, something like that? This is actually related to one of my questions as well, which is: so this is the camera that you mentioned, and the tray, one of them, and I also wanted to ask about these little doggies that we have down here. And I think this might also be related to Louis's question. You know, from a photography background this does look like some kind of control. I see a couple of different colors, to handle things like white balance, making sure you get the colors rendered the same way each time. But yeah, could you tell us a little bit about capturing imagery and making sure that things are calibrated? How do you handle that?

>> Yeah, so in terms of size, for the chat question, it's really easy for us. We have a telecentric lens, and what that means is that, through some kind of unknown physics and light magic that I cannot describe to you, but that others might be able to, no matter how far away the camera is from the subject, or how close, it has the same field of view. So it's a set distance and basically a set pixel-to-millimeter ratio. And that's really nice. For the big telecentric lens we use, it's about 105 pixels per millimeter. So that's something you can just incorporate automatically into your measurements. But if you don't have that in, like, a metadata file or whatever, it just returns the pixel values, and then you can figure out your millimeter-to-pixel ratio later through things like ImageJ. But most places will have some kind of scale bar or some kind of ratio for their whole-drawer imaging. And for the little color standard, that's exactly right. So we take this whole drawer into Lightroom and calibrate it color-wise using this color standard. We've actually switched to a little nano one, which is much nicer because it has a smaller footprint. And the doggies, the doggies and cute stickers, those aren't just for fun. Those are a weird quirk of the GIGAmacro Flow software that we use to stitch those massive images together. One thing about drawers and trays is that they have a lot of blank space, and stitching programs (PTGui is what we use) really hate vast, indistinct white backgrounds; it will infinitely vortex things together in improbable combinations and crash your computer. And so we put these filler objects both into the trays and in any place where there's regular blank space, to help the stitching process. We're trying to figure out a way to get around that, because it's kind of annoying and it makes the picture look kind of funny. But this is our best solution for that right now.

>> No, please keep the dogs that protect your computer from crashing. That sounds very important. You know, I'll take a quick moment here, while we're talking about measurements and calibration for this audience question. Really quickly: in some other situations, where perhaps you don't have a magical camera lens that always gives you the same output, if you're working with some other cameras, there are a couple of things that people tend to worry about. One is going to be lens distortion, and then, yeah, making sure that you have the same measurements each time. I do think there's a lot of industries or a lot of use cases where people can make sure they have the same measurement each time using some kind of reference on screen. And I actually see that there's already a little bit of that here; maybe we don't see it completely because you don't need it in this situation. And then when it comes to lens distortion, you'll see a lot of people use those checkerboards. It looks like a chess set checkerboard; maybe it even is. There are these calibration mats that you get, and then you take an image using that. And then actually, even in Roboflow, we have some blocks in Workflows that are used for creating a rule to always apply the same lens distortion correction afterwards.
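The checkerboard approach mentioned here is the standard OpenCV camera-calibration recipe: photograph the board a few times, solve for the camera matrix and distortion coefficients once, then apply the same correction to every later image. A minimal sketch, with the board size and file paths as placeholders:

```python
import glob
import cv2
import numpy as np

# Inner-corner count of the printed checkerboard (hypothetical 9x6 board).
PATTERN = (9, 6)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calibration/*.jpg"):            # photos of the board
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Solve once for the camera matrix and distortion coefficients...
_, mtx, dist, _, _ = cv2.calibrateCamera(obj_points, img_points,
                                         gray.shape[::-1], None, None)

# ...then apply the same correction to every drawer image afterwards.
corrected = cv2.undistort(cv2.imread("drawer_0001.jpg"), mtx, dist)
cv2.imwrite("drawer_0001_undistorted.jpg", corrected)
```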

With that, I was going to move on. You know, quick time check: we're going a little bit over time today. Elizabeth, are you okay? Do you need to stop at the hour, or is it okay if we go a little bit over? Great. Cool. Nice. I think this might be one of my final slides that we had today. You sort of mentioned it earlier, but my question for you was: are you a machine learning expert? Did you have a lot of experience creating custom computer vision models before this? Tell me about that journey, from, you know, postdoctoral researcher working on bugs, to you're an expert in computer vision now.

>> Well, thank you. Yeah, so no, to answer your question: I had done zero AI, or Python for that matter. The most coding I had done as a graduate student was in R, doing very simple analyses of predation. I was basically a field ecologist. So what I was doing in grad school was basically the opposite of what I do now. It was running around in the woods, you know, measuring plants, looking at predation, all of that stuff. And I found that ultimately I wanted to get some more quantitative skills in my toolbox as I start applying for things like faculty positions. And so this was a really great opportunity. And I felt like, you know, I learn pretty quick; I can just figure this out as I go. And I ran into this idea of AI and I talked to my adviser about it. And so I started pretty small, with things like fast.ai and starting to figure out PyTorch and stuff. The question I had was: how do I even do labels? Like, how do you actually label something? Just the barebones basics of, how do you train a model? I had no idea what was even going on. And so I Googled something like "how to annotate vision model," and clearly your SEO is very good, because Roboflow and maybe a few other things popped up. And this to me was the most intuitive interface. It was very familiar to me as someone who has done digital art and has worked with some image data before. It just had things organized in a way that I could understand as someone who is pretty new to it. And I feel like once you get that foothold, and once you have a very clear idea of what the steps need to be, it makes it a lot easier to ramp up and learn as you go. And so that's kind of what happened in my case. And, you know, five years ago, if you told me I would be writing Python scripts that link AI models together, I'd be like, what are you talking about? But here I am.

>> Cool. All right. And my other question here is about annotation and labeling, and then about some of your data sets. I guess I have a couple of questions that are all getting lumped together. You mentioned having thousands of labeled images. I think some of those are also public, if other people want to use your data sets and train models on their own, right?

>> Absolutely. So one of my main goals as a researcher and as someone in a museum is that we want people to use these tools. We're not trying to hoard this tech and just process our awesome collection. So, you know, I've actually ended up, and this kind of answers some of Zach's earlier question about other institutions, working pretty directly with two people, one at the American Museum of Natural History and one at the Australian National Insect Collection, who have had whole-drawer images for almost a decade, a little more than a decade, because a lot of museums got these drawer imaging rigs, these drawer imaging setups, and then didn't really know what to do with the output, because computer vision was not as accessible in 2012. And also, see ya to folks who are leaving, thanks; I realize I'm going a little over time. But essentially, yeah, these are publicly available tools. I think even if you just have a free Roboflow account, you can do this; you have some inferences. And so I think for other institutions, this is something they can use to start processing those backlogs for sure.

>> Beautiful. All right. Yeah. I think we have a couple more things that we could go through, and then we might be able to wrap it up. There's perhaps this one kind of final behind-the-scenes look. I think this is related to the whole pipeline. Some of this we already covered earlier in the presentation, about taking some detections, cropping them, things like that, post-processing, feeding it into a language model to get back some of the results we saw. So maybe we'll skip over that for today. A sort of final question for you, Elizabeth: for people who are interested in this topic, in DrawerDissect, what are the kinds of actions that you hope they could take, or if they're interested in learning more, where should they go?

>> Yeah, so the first thing is to read the preprint that I have out on this work. It's actually currently in review, so stay tuned for the fully published version at some point, hopefully in the near future, but you never know how these things go. But at least for now, you can go into the nitty-gritty of how the pipeline works, why we think it's impactful, and how people can use it. And then there's also a GitHub associated with it, and I can drop that link into the chat if people are interested. Oh, you already have them already. Thank you, Jessa. Yeah, and we didn't really get to the transcription part of this as much. What I failed to say is that once you get down to the specimen image, and I think maybe Charlotte was asking about this earlier, in a lot of cases you can partially see where it was collected from that top label in the image. And so again, in the spirit of, you know, good but not perfect, just trying to squeeze as much as you can out of all the images you're getting, I came up with this multi-step process. You can kind of see it in that location transcription module in the middle of this image. Basically, what I end up doing is flipping that image on its side. Usually specimens are vertical, and so you want the text reading horizontally. And you can kind of make out, like, it's three miles northwest of somewhere, Garfield something. And as someone who's worked now in collections for two years, I can pretty much immediately tell you where that is, just from the context clues and sort of the common abbreviations. So I figured I could teach a large language model to do that as well. The first step is basically that it takes the verbatim text, just what exact letters it can see. Then I give it some context about museum abbreviation conventions. I'm hoping later to actually give it a reference library of collections that I've already gotten, to give it a better idea. And then I kind of have it go back and validate: given the verbatim text and your initial assessment of where this is from, is this a reasonable guess? So I kind of have it check its homework. And ultimately the last thing I'll say on this is that all of these steps, also including the labor of prepping the drawers for imaging, merging the data, everything, end up at about 41 seconds per specimen on a decent Windows computer, which is pretty amazing for all the stuff you get out of it.
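The verbatim-then-interpret-then-validate chain she describes maps naturally onto three prompts. A sketch, with the prompt wording and the `llm_call` hook as placeholders for whatever model the config points at:

```python
# Rough sketch of a three-step prompt chain: verbatim read, interpretation with
# museum conventions, then a self-check. `llm_call(prompt, image=None)` stands in
# for whichever vision/text LLM client the pipeline is configured to use.

VERBATIM_PROMPT = ("Transcribe exactly the letters you can read on this label. "
                   "Do not expand abbreviations.")
INTERPRET_PROMPT = (
    "You are reading insect collection labels. Common conventions: 'Co.' = County, "
    "US state abbreviations, 'mi NW' = miles northwest. Given this verbatim text, "
    "give your best guess of the full collecting locality: {verbatim}"
)
VALIDATE_PROMPT = (
    "Verbatim label text: {verbatim}\nProposed locality: {guess}\n"
    "Is the proposed locality a reasonable reading of the label? "
    "Answer yes or no, then explain briefly."
)

def transcribe_locality(label_image_b64: str, llm_call) -> dict:
    verbatim = llm_call(VERBATIM_PROMPT, image=label_image_b64)
    guess = llm_call(INTERPRET_PROMPT.format(verbatim=verbatim))
    check = llm_call(VALIDATE_PROMPT.format(verbatim=verbatim, guess=guess))
    return {"verbatim": verbatim, "locality_guess": guess, "self_check": check}
```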

>> Stupendous. Yeah, that's a really impressive result. Yeah, this "three miles northwest of Rifle, Garfield County," I guess. Colorado, USA. That's amazing.

That's super cool. All right, so that I think wraps up all the material that we prepared. Um, my quick little break here. As always, if you're watching a recorded version of this somewhere on YouTube and you want to join a live session in the future, please head on over to roboflow.com/webinar.

So, if you have any other additional questions, feel free to drop those in the chat. Um, Elizabeth, I do want to go back to, I'd say, let me take a quick look here.

>> Um,

>> Oh, yeah, please. Yeah, go for it, if we already found one.

>> It's a great question. Regarding masking from the 2D image, when looking at digital specimen photos in an online database, would this process still work?

And yes, so basically all you would have to do is instead of starting at the whole drawer image, you would start by making your own drawer folder having a

place to put specimens and then doing those masking steps. And so I have um support for specimen only um uh masking.
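As a sketch of what that can look like in practice, the snippet below runs a hosted Roboflow segmentation model over a folder of standalone specimen photos, which is roughly the "make your own drawer folder and start at the masking step" idea. The workspace, project, and version names are placeholders, not the real DrawerDissect models; the actual entry points are documented on the GitHub repo.

```python
# Sketch: specimen-only masking on a folder of existing photos
# (e.g. an export from an online database) instead of a whole drawer image.
import json
from pathlib import Path
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
# Placeholder workspace/project/version names for illustration only.
model = rf.workspace("your-workspace").project("specimen-masking").version(1).model

specimen_dir = Path("my_specimen_photos")   # your own "drawer folder" of images
out_dir = Path("masks_json")
out_dir.mkdir(exist_ok=True)

for img_path in sorted(specimen_dir.glob("*.jpg")):
    # Run the hosted segmentation model and keep the raw prediction
    # (polygon points per specimen) for later color/size measurement steps.
    result = model.predict(str(img_path)).json()
    (out_dir / f"{img_path.stem}.json").write_text(json.dumps(result, indent=2))
```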

And I've used that actually for a few different things.

One of them was for some old um like UV images uh of butterflies that I was working on for a previous project. I

actually ended up masking all of those so I could get, you know, color and size information. Um, and then the other thing I've been using that for, and just a more general cool collaboration I want to shout out, is with Kelton Welch over at ECDIS, which is

an awesome uh nonprofit that works with farms across America to do biodiversity surveys. And so he has essentially a version of what I'm doing, um, but for uh wet specimens. And so they collect these specimens in alcohol vials. He dumps them out, takes an image of all of them together, and he's, you know, figuring out some workflows for how to uh process them. And so I actually just shared the weights from my segmentation model with him so that he could use that to help segment out, uh, you know, insects from the background.
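For readers wondering what reusing shared weights can look like downstream, here is a hypothetical sketch that assumes the segmentation model is in a YOLO-style .pt format loadable with the ultralytics package; the talk doesn't specify the actual format or filenames, so treat the details as illustrative.

```python
# Hypothetical reuse of shared segmentation weights on new images, e.g. wet
# specimens photographed together after being emptied from alcohol vials.
# Assumes YOLO-style weights; filenames are placeholders.
from ultralytics import YOLO

model = YOLO("shared_specimen_seg_weights.pt")

# Run instance segmentation on one group photo of many specimens.
results = model("wet_specimens_photo.jpg")

for r in results:
    if r.masks is None:
        print(f"No specimens found in {r.path}")
        continue
    # Each mask separates one candidate insect from the background.
    print(f"{len(r.masks.data)} specimens segmented in {r.path}")
```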

Um, and so this is ultimately, if you check out the co-authors list, a very collaborative project. I'm working with people who all have basically the same problem. And you know, everyone here who has, you know, worked in industry knows that there's this issue of: we need to process a lot of stuff. Well, what if we take a big picture of it? Okay, well, now we have to, you know, figure out how to parse the picture. And that's the general workflow, you know, I've been working through.

>> Um, I have another... actually, I want to go back to Charlotte's question earlier. Um, this is actually relevant to something I was working on recently as well, where um the question was about: do you also detect body parts other than the main body, or do the legs and the letters underneath look similar for a good detection? I mentioned this because the other day I was working on a model to look at, it's like, drone footage of a parking lot, and there are these parking stall numbers, and then I was detecting the parking stall and then passing that to OCR, cuz I wanted to be able to like read all the different parking stall numbers. And then like the asphalt next to the number is a little bit lighter, and then when I crop, the OCR model kept thinking the asphalt was an extra character. And I was like, "Ah, okay. So, I need to crop in a little bit so I stop getting that extra little asphalt there that it thinks is, like, I always add another one in there." So, maybe this is less about um body parts and more about OCR and improving those results.
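The fix described here, pulling each detection box inward a little before handing the crop to OCR, looks something like this; the box format and the 10% inset are illustrative choices rather than values from the talk.

```python
# Shrink a detection box by a small margin before cropping, so stray
# background at the edges doesn't get read by OCR as an extra character.
from PIL import Image

def inset_crop(image: Image.Image, box: tuple[float, float, float, float],
               inset_frac: float = 0.10) -> Image.Image:
    """Crop `box` = (x_min, y_min, x_max, y_max), pulled inward by `inset_frac` on each side."""
    x0, y0, x1, y1 = box
    dx = (x1 - x0) * inset_frac
    dy = (y1 - y0) * inset_frac
    return image.crop((round(x0 + dx), round(y0 + dy), round(x1 - dx), round(y1 - dy)))

# Usage: crop the detected stall-number region, then pass the tighter crop to OCR.
img = Image.open("parking_lot_frame.jpg")
tight = inset_crop(img, (120, 340, 260, 400))   # example box coordinates
tight.save("stall_number_crop.jpg")             # feed this to your OCR step
```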

>> Totally. So,

the one thing um with specimens is that um they're really variably positioned. And

my goal in the imaging step is to do as little fussing with them as possible. I

want, you know, Drock to be able to take everything kind of as it is, whatever skew it's at, whatever angle, you know, so on and so forth, and just grab everything. Um, and for the masking step, um, at the moment, I don't have something to segment the legs. I think

for more specific anatomical features, like if you want to get like antenna length or I don't know, the length of the tarsus, you're just going to need more specialized models um that I

haven't developed. I'm working on color features. Legs are often not even visible for a lot of these specimens, especially the bigger ones. And so I've kind of focused on like what is the most

detailed chunk I can get without getting into territory with all of these extensions and holes, basically non-continuous portions. Um, how can I avoid that as much as possible? Now I

would love to go back now with some of these and segment out certain features.

I think one place to do that is in our Lepidoptera, so moths and butterflies, to get like specific wings. Um, for

example, like maybe you want to get just the length of the forewing. Um, and

that's something that, you know, other people have certainly been working on. Um, in terms of uh the descriptions, yeah, I think that's part of what mixes things up. The other

thing I'll mention is that um pale or transparent things um are still like tricky and so you know if it's a really

light moth or beetle or whatever on a really light background if it's trying to find edges it's going to potentially have trouble doing that. So the way I solve for that is by overloading it with

things that it's bad at. So I'll find stuff in the collection that is very pale and I will weight the model with those things that it's not that great at. And so you can kind of do targeted improvements, or say I'm introducing a new taxon, for example, that looks very different than the stuff it's already trained on. You just need to add a lot of that until it performs similarly.
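One simple way to "weight the model with what it's bad at" is to oversample the hard cases when assembling the next round of training data. The tagging scheme and the 3x factor below are illustrative assumptions, not the actual workflow; in practice this can also just mean uploading and annotating more of those images in the training project.

```python
# Illustrative oversampling of hard cases (pale specimens, a newly added taxon)
# when building a training list. Tags and the 3x factor are assumptions.
import random

def build_training_list(annotations: list[dict], hard_tags=("pale", "new_taxon"),
                        oversample_factor: int = 3) -> list[dict]:
    training = []
    for ann in annotations:
        # Repeat hard examples so the model sees them more often during training.
        copies = oversample_factor if ann.get("tag") in hard_tags else 1
        training.extend([ann] * copies)
    random.shuffle(training)
    return training

dataset = [
    {"image": "cicindela_001.jpg", "tag": "typical"},
    {"image": "pale_moth_017.jpg", "tag": "pale"},
]
print(len(build_training_list(dataset)))   # pale image counted 3x -> 4 entries total
```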

>> Ah, nice. Yeah. I wonder if there's also some, even just on the image capture, like even before you go to the model, um, some

things you would be able to do there.

Just I'm thinking about like factories where they want to train AI to be able to see like gas leaks. They don't use

you know traditional imagery. They use

imagery that is able to see the gas leak and then it's very specialized imagery.

It doesn't look like what the human eye would usually see and then they train computer vision models specifically on that specialized imagery. So perhaps

that's something to investigate. Or, I'm not sure if you've already tried doing that, changing contrast, stuff like that.

>> Yeah, I mean I do a lot of augmentations in my models, so there's definitely contrast and grayscale and all of that kind of stuff, which I think has improved performance on some of those

lighter edges.
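In Elizabeth's case these augmentations are configured when generating dataset versions in Roboflow, but as an equivalent illustration, here is what contrast and grayscale augmentation look like in torchvision; this is an assumed stand-in, not her actual training code.

```python
# Illustrative contrast/grayscale augmentation for training a detector or
# segmenter on pale specimens against pale backgrounds. Filename is a placeholder.
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.2, contrast=0.4),  # vary brightness/contrast
    transforms.RandomGrayscale(p=0.25),                    # sometimes drop color entirely
    transforms.ToTensor(),
])

img = Image.open("pale_beetle_on_pale_background.jpg").convert("RGB")
augmented = augment(img)   # randomized each call during training
```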

>> We've taken up a lot of your time today already, Elizabeth. This was a fantastic session. Um, I loved hearing about this, the story and the work that you and everybody else at the Field Museum and some other organizations have been doing to create this really awesome solution for understanding this uh collection of insect specimens and then uh speeding up some of your research. Um, if anybody has some remaining questions, I see there was one from uh Luis about uh dealing with some like underwater distortion.

If you have some questions, feel free to send me an email. It's just patrick@roboflow.com.

Um,

>> Yeah. Oh, I just threw it in the chat. So, get in touch if you're curious about bugs or chaining models together or what have you.

>> Wonderful. All right. Thank you so much to our audience today, and thank you so much, Elizabeth, for joining.

Maybe we can wrap things up there today.

>> It was my pleasure. Um I guess I did want to share one more really quick thing.

>> Please, please.

>> That... let's see. It did it. So, just to prove...

>> Oh yeah, sorry. I completely forgot to go back to your, uh, I will stop sharing. Yes.

>> Dropped in all of the trays. Uh I'll try and open one. So there you go. So, um,

it was able to find this tray and crop it all out. Anyways, just wanted to show you that and it outputs all of them in this sort of nicely numbered order.

>> Yay. Cool. And then that's like, sorry, that's like the maybe first step, right?

Because you start off with a giant image and uh, one drawer and then within the drawers there are multiple trays, right?

And then I'm imagining there's like 20 more steps after that.

>> The full list is on the GitHub.
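For a feel of that nesting, here is a bare-bones sketch: drawer image in, tray crops out, specimen crops within each tray, each level saved in numbered order. The `detect_trays` and `detect_specimens` helpers are hypothetical stand-ins for the pipeline's detection models, and the real step list (masking, measurement, label transcription, and so on) is on the GitHub.

```python
# Rough sketch of the drawer -> tray -> specimen cropping hierarchy.
# The detect_* helpers are placeholders for the actual detection models.
from pathlib import Path
from PIL import Image

def detect_trays(drawer_img: Image.Image) -> list[tuple[int, int, int, int]]:
    """Placeholder for the tray-detection model; returns bounding boxes."""
    raise NotImplementedError

def detect_specimens(tray_img: Image.Image) -> list[tuple[int, int, int, int]]:
    """Placeholder for the specimen-detection model; returns bounding boxes."""
    raise NotImplementedError

def process_drawer(drawer_path: str, out_dir: str = "crops") -> None:
    drawer = Image.open(drawer_path)
    out = Path(out_dir)
    for t, tray_box in enumerate(detect_trays(drawer), start=1):
        tray = drawer.crop(tray_box)
        tray_dir = out / f"tray_{t:02d}"           # "nicely numbered order"
        tray_dir.mkdir(parents=True, exist_ok=True)
        tray.save(tray_dir / "tray.jpg")
        for s, spec_box in enumerate(detect_specimens(tray), start=1):
            tray.crop(spec_box).save(tray_dir / f"specimen_{s:03d}.jpg")
```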

>> Cool. All right. Okay. Well, maybe we'll end there. Check out the GitHub.

Yes.

>> The Field Museum.

>> Yes, please. All right. Thank you so much.
