Five years of research: a summary

Originally posted on the Student Engager blog on 3 July 2017.

A PhD often feels like an unrewarding process. There are setbacks, data failures, non-significant results, and a general lack of the small successes that (I hear) make ordinary working life pleasant: “I got that promotion!” “Everyone applauded my presentation!” “I moved to the desk near the window!” PhD life is one giant slog until the very end: the viva, a nerve-wracking, hours-long session where you’re grilled by the only people who know more about your field than you do.

I survived.

Hopefully some of you have been following my research here, starting from astronauts and moving on to runners and foraging patterns. It all ties together, I promise. I recently gave a talk at the Engagers’ event “Materials & Objects” summarizing my research, which I can now tell you about in its full glory! I’m pleased to announce: I had significant findings.

The lowdown is that (as expected) there are differences in the shape of the tibia (shin bone) between nomads and farmers in Sudan. Why would this be? Well, if you’ve been following along, bones change shape in response to activity, particularly activities performed during adolescence. The major categories of tibial shape were those indicating long-distance walking, activity performed in one place, and very little activity. Looking at the distribution, the majority of the nomadic males had the leg shape indicating long-distance walking, while the agricultural males were split: some had the long-distance shape and others the staying-in-place shape. This makes sense considering the varying types of activity performed in an agricultural society, particularly one that also had herds to take care of: some individuals would take the herds up and down along the Nile to find grazing land while others stayed local, tending farms. While it’s unclear how often a nomadic group needs to move camp to be considered truly nomadic, in this case it seems they were walking a lot – enough that their tibial shape compares to that of modern long-distance runners. These differences in food acquisition are culturally adapted responses to differing environments: the nomads live in semi-arid grassland and can travel slowly over a large area to graze sheep and cattle, while the farmers are constrained to a narrow strip of fertile land along the Nile banks, which limits how many people can move around, and how often.

Perhaps the most important finding is the difference between males and females. In addition to looking at shape, I also conducted tests to show how strong each bone is regardless of shape, a measure called the polar second moment of inertia (shortened to, unexpectedly, J). The males at each site had higher values of J – thus, stronger bones – than the females. However, the nomadic females had higher J values than some of the males at the agricultural sites! This is in spite of most females from both sites having the tibial shape indicating “not very much activity”. This shape may be the juvenile shape of the tibia, which the females retained into adulthood despite performing enough activity to give them higher strength values than the male farmers. Similar results have been noted in studies spanning different time periods – for instance, from the Paleolithic to the Neolithic – which found much more similarity between females than between males. Researchers often interpret this as evidence that male roles changed while female roles remained the same, which strikes me as unlikely considering the time spans involved. I instead conclude that females build bone differently in adolescence, and perhaps there are subtleties in bone development that don’t reveal themselves as differences in shape. As females have lower levels of testosterone, which builds bone as well as muscle, they may have to work harder or longer than males to attain the same bone shape and strength. I’m using this to argue that the roles of women in archaeological societies – particularly nomadic ones – have gone unexamined in light of biological evidence.
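For the curious, J can be approximated from surprisingly few measurements. A common shorthand in the literature models a bone cross-section as a hollow ellipse (a solid outer ellipse minus the marrow cavity), so J falls out of four semi-axes. Below is a minimal Python sketch of that approximation – the function and measurements are invented for illustration, not my actual analysis; real studies typically compute J directly from cross-section images.

```python
import math

def polar_second_moment(a_out, b_out, a_in, b_in):
    """Approximate J (mm^4) for a bone cross-section modeled as a
    hollow ellipse. Inputs are the outer (periosteal) and inner
    (medullary) semi-axes in mm; J = Ix + Iy, summed over the ring."""
    def j_solid(a, b):
        # J of a solid ellipse about its centroid: pi*a*b*(a^2 + b^2)/4
        return math.pi * a * b * (a ** 2 + b ** 2) / 4
    return j_solid(a_out, b_out) - j_solid(a_in, b_in)

# Invented example values, not data from the thesis:
print(f"{polar_second_moment(12.0, 10.0, 6.0, 5.0):.0f} mm^4")
```

Because J sums the resistance to bending in two perpendicular planes, it tracks overall rigidity without caring which direction the bone is strongest in – which is exactly why it can differ between groups even when the shape categories look the same.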

Of course, the best conclusion for a PhD is a call for more research, and mine is that we need to examine male and female adolescent athletes together to see when exactly shape change occurs. If we can pin down the amount of activity necessary for women to have bones as strong as those of their male peers, we can more accurately interpret the types of activities ancient people were performing without devaluing the work of women.

My examiners found all this enthralling, and I’m pleased to say I passed! The work of this woman is valued in the eyes of academe.

A History of Legs in 5 Objects

Originally posted April 11, 2017 on the Student Engager blog.

My research focuses on the tibia, the largest bone in the lower leg. You probably know it as the shin bone, or the one that makes frequent contact with your coffee table, resulting in lots of yelling and hopping around. The intense pain is because the front of the tibia is a sharp crest that sits directly beneath the skin; that’s why footballers often wear shinguards. There are a lot of leg-related objects in UCL Museums, so here’s a whirlwind tour of a few of them!

One of the few places you can see a human tibia is the Petrie’s pot burial. This skeleton from the site of Badari in Egypt has rather long tibiae, indicating that the individual was quite tall. The last estimation of his height was made in 1985, probably using regression equations based on the lengths of the tibia and femur (thigh bone): these indicated that he was almost 2 meters tall. However, the equations used in the 80s were based on a paper from 1958, which used bone lengths from Americans who died in the Korean War. There are two problems that we now know of with this calculation: height is related to genetics and diet, and different populations have differing limb length ratios.


Pot burial from Hemamieh, near the village of Badari (UC14856-8).

The Americans who died in the Korean War grew up on a vastly different diet from that of predynastic Egyptians, and the formulae were developed for (and thus work best when testing) white Americans. This is where limb length ratios come into play. Some people have short torsos and long legs, while others have long torsos and short legs. East Africans tend to have long legs and short torsos, and an equation developed for the inverse body plan would result in a height estimate much taller than he actually was! Another thing to notice is the soft tissue around the knee joint. At this point in time, the Egyptians didn’t practice artificial mummification – but the dry conditions of the desert preserved some soft tissue in a process called natural mummification. Thus you can see the ligaments and muscles connecting the tibia to the patella (knee cap) and femur.
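To see concretely why the reference population matters, note that these stature formulae are just straight lines fitted to bone length against known stature within one group. Here is a hedged Python sketch – the coefficients below are hypothetical, chosen for illustration rather than taken from the published Trotter & Gleser tables:

```python
def estimate_stature(tibia_cm, slope, intercept):
    """Classic linear stature regression: stature = slope * bone_length
    + intercept, everything in cm. Coefficients are population-specific."""
    return slope * tibia_cm + intercept

# Hypothetical calibrations for two body plans (illustration only):
calibrations = {
    "short-legged reference": (2.4, 86.0),
    "long-legged reference": (2.4, 78.0),
}
for label, (slope, intercept) in calibrations.items():
    print(label, "->", estimate_stature(42.0, slope, intercept), "cm")
```

The same 42 cm tibia comes out 8 cm taller under the short-legged calibration, because that formula “expects” a long torso to accompany a long shin – the same mismatch that inflated our Badarian man to nearly 2 meters.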

The Petrie also has a collection of ancient shoes and sandals. I think the sandals are fascinating because they show a design that has obviously been perfected: the flip flop. One of my favorites is an Amarna-period child’s woven reed sandal featuring two straps which meet at a toe thong. The flip flop is a utilitarian design, ideal for keeping the foot cool in the heat and protecting the sole of the foot from sharp objects and hot ground surfaces. These are actually some of the earliest attested flip flops in the world, making their appearance in the 18th Dynasty (around 1300 BCE).


An ancient Egyptian flip-flop (UC769).


Another shoe, this time from the site of Hawara, is a closed-toe right leather shoe. Dating to the Roman period, it shows that flip flops were not the only kind of shoe worn in Egypt. It bears evidence of wear and even has some mud on the sole from the last time it was worn. It could have been worn with knit wool socks, one of which has been preserved. However, the Petrie Collection’s sock has a separate big toe, potentially indicating that ancient Egyptians did not have a problem wearing socks and sandals together, a trend abhorrent to modern followers of fashion (except fans of Birkenstocks).



Ancient Egyptian shoe (UC28271).



Ancient Egyptian sock (UC16767).

The Grant Museum contains a huge number of legs, but only one set belonging to a human. For instructive purposes, I prefer to show visitors the tibiae of the tiger (Panthera tigris) on display in the southwest corner of the museum. These tibiae show a pronounced muscle attachment on the rear side where the soleus muscle connects to the bone. In bioarchaeology, we score this attachment on a scale of 1-5, where 5 indicates a really robust attachment. The more robust the attachment, the bigger the muscle; this means that either the individual had more testosterone, which increases muscle size, or they performed a large amount of activity using that muscle. (We wouldn’t score this one because it doesn’t belong to a human.) In humans, this could be walking, running, jumping, or squatting. Practice doing some of these to increase your soleal line attachment site!


The posterior tibia of a tiger.

Moving to the Art Museum, we can see legs from an aesthetic rather than practical perspective. A statue featuring an interesting leg posture is “Spinario, or Boy With Thorn”, a bronze produced by Sabatino de Angelis & Fils of Naples in the 19th century. It is a copy of a famous Greco-Roman bronze, one of the very few such statues that have not been lost (bronze was frequently melted down and reused). The boy’s position is rather interesting: he is seated with one foot on the ground and the opposite foot resting on his knee as he examines his sole to remove a thorn. This is a very human position, and shows the versatility of the joints of the hip, knee, and ankle. The hip is abducted and outwardly rotated, the knee is flexed, and the ankle is inverted. It’s rare for the leg to be shown in such a bent position in art, as statues usually depict humans standing or walking.


Spinario, or Boy With Thorn.

Bipedalism, or walking on two legs, is one of the traits we associate with being human. It’s rare in the animal world. Hopefully next time you look at a statue, slip on your flip flops, or go for a jog, you’ll think of all the work your tibiae are doing for you – and keep them out of the way of the coffee table.

(OK, I know that was six objects… but imagine the sock inside the shoe!)


Sports in the Ancient World

Originally published on Student Engagers on January 24, 2017.

I’ve written previously here about the antiquity of running, which was one of the original sports at the ancient Greek Olympics, along with javelin, discus, and jumping. These games started around 776 BC in the town of Olympia. What came before, though? What other evidence do we have of ancient sports?

Running is probably the most ancient sport; it requires no gear (no matter how much shoe companies make you think you need it) and the distances are easily set: to that tree and back, to that mountain and back. Research into the origins of human locomotion focuses on changes to the foot, which needed to shift from arboreal gripping to bipedal running and bearing the full weight of the body. A fossil foot of Ardipithecus ramidus, a hominin which lived 4.4 million years ago, features a stiffened midfoot and flexible toes capable of being extended to help push off at the end of stance, but has the short big toe typical of great apes. Australopithecus sediba, which lived only 2 million years ago, had an arched foot like modern humans (at least the non-flat-footed ones) but an ankle that turned inwards like apes’. Clearly our feet didn’t evolve all the features of bipedal running at once, but rather at various intervals over the past 4-5 million years. Evidence of ancient humans’ distance running is equally old, as I wrote about previously. Researchers Bramble & Lieberman have posed the question “Why would early Homo run long distances when walking is easier, safer and less costly?” They posit that endurance running was key to obtaining the fatty tissue from meat, marrow, and brain necessary to fuel our absurdly large brains – thus linking long-distance running with improved cognition. In a similar vein, research into the neuroscience of running has found that it boosts mood, clarifies thinking, and decreases stress.

Feats of athleticism in ancient times were frequently dedicated to gods. Long before the Greek games, the Egyptians were running races at the sed-festival dedicated to the fertility god Min. A limestone wall block at the Petrie depicts King Senusret I (c. 1971 BCE) racing with an oar and hepet-tool. The Olympic Games, too, were originally dedicated to the gods of Olympus, but it appears that as time went on they became corrupted: the emphasis shifted to individual heroic athletes, and even commoners were allowed to compete. There were four races in the original Olympics: the stade (192 m), 2 stades, 7-24 stades, and 2-4 stades in full hoplite armor. It should be mentioned that serious long-distance running, like the modern marathon, was not part of the ancient games. The story of Pheidippides running from the battlefield at Marathon to announce the Greek victory in Athens is most likely fictional, although the first modern marathon in 1896 traced that 25-mile route. The modern distance of just over 26 miles was set at the 1908 London Olympics, when the route was lengthened so that the race could start at Windsor Castle and finish in front of the royal box.


Limestone wall-block showing King Senusret I running the sed-festival race before the god Min. Courtesy Petrie Museum.

Wrestling might be equally ancient. It’s basically a form of play-fighting with rules (or without rules, depending on the type – compare judo to Greco-Roman to WWF), and play-fighting can be seen not only in human children but in a variety of mammal species. In Olympic wrestling, the goal was to get one’s opponent to the ground without biting or grabbing genitals, but breaking fingers and dislocating bones were valid. Some archaeologists have tried to attribute Nubian bone shape – the basis of my thesis – to wrestling, for which the Nubians were famed. Another limestone relief in the Petrie shows two men wrestling in loincloths. Boxing is a similar fighting contest; original Olympic boxing required two men to fight until one was unconscious. Pankration brutally combined wrestling and boxing, but helpfully forbade eye-gouging. It may be possible to identify ancient boxers bioarchaeologically by examining patterns of nonlethal injuries: depressions in the cranial vault (particularly towards the front and the left, presuming mostly right-handed opponents), facial fractures, nasal fractures, traumatic tooth loss, and fractures of the bones of the hand.


Crude limestone group depicting two men wrestling. Courtesy Petrie Museum.

Spear or javelin throwing has also been attested in antiquity. Although we have evidence of predynastic flint points and dynastic metal spear tips, it’s unclear whether these were used for sport (how far one can throw) or for hunting. In fact, it’s unclear how the two became separate. Hunting was (and continues to be) a major sport – although not one with a clear winner as in racing or wrestling – and the only difference is that in javelin the target isn’t moving (or alive). In the past few years, research has been conducted into the antiquity of spear throwing. One study argues that Neanderthals had asymmetrical upper arm bones – the right was larger due to the muscular activity involved in repeatedly throwing a spear. Another study used electromyography of various activities to reject the spear-thrusting hypothesis, arguing that the right arm was larger in the specific dimensions more associated with scraping hides. Spear throwing is attested bioarchaeologically in much later periods as well, in a particular pathological pattern called “atlatl elbow”: use of a tool to increase spear velocity caused osteoarthritic degeneration of the elbow, but protected the shoulder.


Fragment of a Roman-period copper alloy spearhead. Courtesy Petrie Museum.

The final Olympic sports to consider are chariot racing and horseback riding. Horses were probably only domesticated around 5500 years ago in Eurasia, so horse sports are really quite new compared to running and throwing! It’s likely that horses were originally captured and domesticated for meat at least 1000 years before humans realized they could use them for transportation. The Olympic ridden races were 4.5 miles around the track (without saddles or stirrups, as these developments had not yet reached Greece), and the chariot races were 9 miles with either 2 or 4 horses. Bioarchaeologists have noted signs of horseback riding around the ancient world: degenerative changes to the vertebrae and pelvis from bouncing, as well as enlargement of the hip socket (acetabulum) and an increased contact area between the femur and pelvis where the two rub together. In all cases, more males than females had these changes, indicating that it was more common for men to ride horses.

Of course, there are many more sports that existed in the ancient world – other fighting games including gladiatorial combat, ritualized warfare, and games with balls and sticks (including the Mayan basketball-esque game purportedly played with human skulls). Often games were dedicated to gods, or resulted in the death of the loser(s). However, many of these, explored bioarchaeologically, would result in musculoskeletal changes and injury patterns similar to those discussed above. Many games have probably been lost to history. Considering the vast span of human activity, it’s likely sports of some kind have always existed, from the earliest foot races to the modern Olympic spectacle.


Limestone ball from a game. Courtesy Petrie Museum.

Sources

Bramble, D.M. and Lieberman, D.E. 2004. Endurance running and the evolution of Homo. Nature 432(7015), pp. 345–352.

Carroll, S.C. 1988. Wrestling in Ancient Nubia. Journal of Sport History 15(2), pp. 121–137.

Larsen, C.S. 2015. Bioarchaeology: Interpreting Behavior from the Human Skeleton. Cambridge: Cambridge University Press.

Lieberman, D.E. 2012. Those feet in ancient times. Nature 483, pp. 550–551.

Martin, D.L. and Frayer, D.W. eds. 1997. Troubled Times: Violence and Warfare in the Past. Psychology Press.

Perrottet, T. 2004. The Naked Olympics: The True Story of the Ancient Games. Random House Publishing Group.


Normativity November: Defining the Archaeological Normal

This post is part of QMUL’s Normativity November, a month exploring the concept of the normal in preparation for the exciting Being Human events ‘Emotions and Cancer’ on 22 November and ‘The Museum of the Normal’ on 24 November, and originally appeared on the QMUL History of Emotions Blog on 22 November 2016.

The history of archaeology in the late 19th and early 20th centuries can be read as the history of European men attempting to prove their perceived place in the world. At the time, western Europe had colonized much of the world, dividing up Africa, South America, and Oceania into territories from which they could extract resources to fund their empires. Alongside this global spread was a sincere belief in the superiority of the rule of white men, which had grown from the Darwinian theory of evolution and the subsequent ideas of eugenics advanced by Darwin’s cousin Francis Galton: not only were white men the height of evolutionary and cultural progress, they were the epitome of thousands of years of cultural development superior to that of any other world culture. According to this belief, it was inevitable that Europeans should colonize the rest of the world. This was not only the normal way of life, but the only one that made sense.

In modern archaeology, we let the data speak for itself, trying not to impose our own ideas of normality and society onto ancient cultures. One hundred years ago, however, archaeology was used as a tool to prove European superiority and cultural manifest destiny. Without the benefit of radiocarbon dating (invented in the 1940s) to identify which culture developed at what time, Victorian and Edwardian archaeologists were free to stratify ancient cultures in a way that supported their framework of most European = most advanced. “European-ness” was defined through craniometry – the measurement and appearance of skulls – and similar measurements of the limbs. Normality was defined as the average British measurement, and any deviation from this normal immediately identified that individual as part of a lesser race (a term which modern anthropologists find highly problematic, as so much of what was previously called “race” is culture).

In my research into sites in Egypt and Sudan, I’ve encountered two sites that typify this shoehorning of archaeology to fit a Victorian ideal of European superiority. The first is an ancient Egyptian site called Naqada, excavated by Sir William Matthew Flinders Petrie in the 1890s. Petrie is considered the founder of modern, methodological archaeology because he invented typology – categorizing objects based on their similarity to each other. As an associate and friend of Galton and others in the eugenics circle, he applied the same principle to categorizing people (it’s likely that his excavations of human remains were requested by Galton to diversify his anthropometric collection). Naqada featured two main types of burials: one where the deceased were laid on their backs (supine) and one where the deceased were curled up on their side (flexed). Petrie called these “Egyptian” and “foreign” types, respectively. The grave goods (hand-made pottery, hairpins, fish-shaped slate palettes) found in the foreign tombs did not resemble any from his previous Egyptian excavations. The skeletons were so markedly different from the Egyptians – round, high skulls of the “Algerian” type, tall and rugged – that he called them the “New Race”. Similarities, such as the burnt animal offerings found in the New Race tombs (present in Egyptian tombs as symbolic wall paintings), he dismissed as naïve imitations made by the immigrants. However, the progression of New Race pottery styles pointed to a lengthy stay in Egypt, which confused Petrie. Any protracted stay among the Egyptians must surely have led to trade: why then was there an absence of Egyptian trade goods? His conclusion was that the New Race were invading cannibals from a hot climate who had completely obliterated the local, peaceful Egyptian community between the Old and Middle Kingdoms.

Of course, with the advent of radiocarbon dating and a more discerning approach to cultural change, we now know that Petrie had it backwards. The New Race are actually a pre-Dynastic Egyptian culture (4800-3100 BC), who created permanent urban agricultural settlements after presumably thousands of years of being semi-nomadic alongside smaller agricultural centres. Petrie’s accusation of cannibalism is derived from remarks by Juvenal, a Roman poet writing centuries later. It also shows Petrie’s racism – of course these people from a “hot climate” erased the peaceful Egyptians, whose skulls bear more resemblance to Europeans’. In actuality, Egyptian culture as we know it, with pyramids and chariots and mummification, developed from pre-Dynastic culture through very uninteresting centuries-long cultural change. Petrie’s own beliefs about the superiority of Europeans, typified by the Egyptians, allowed him to create a scientific-sounding argument associating Africans with warlike invasion that halted cultural progression.

The second site in my research is Jebel Moya, located 250 km south of the Sudanese capital of Khartoum, excavated by Sir Henry Wellcome from 1911-1914. The site is a cemetery that appears to be of a nomadic group, dating to the Meroitic period (3rd century BC-4th century AD). The site lacks the pottery indicative of the predominant Meroitic culture, therefore the skulls were used to determine racial affiliation. Meroe was seen as part of the lineage of ancient Egypt – despite being Sudanese, the Meroitic people adopted pyramid-building and other cultural markers inspired by the now-defunct Egyptian civilization. Because many more female skeletons were discovered at this site than male, one early hypothesis was that Jebel Moya was a pagan and “predatory” group that absorbed women from southern Sudanese tribes either by marriage or slavery and that, as Petrie put it, it was “not a source from which anything sprang, whether culture or tribes or customs”. Yet, the skulls don’t show evidence of interbreeding, implying that they weren’t importing women, and later studies showed that many of the supposed female skeletons were actually those of young males. This is another instance of British anthropologists drawing conclusions about the ancient world using their framework of the British normal. If the Jebel Moyans weren’t associating themselves with the majority Egyptianized culture, they must be pagan (never mind that the Egyptians were pagan too!), polygamous, and lacking in any kind of transferrable culture; in addition, they must have come from the south – that is, Africa.


Sir Henry Wellcome at the Jebel Moya excavations. Credit: Wellcome Library, London.

These sites were prominent excavations at the time, and the skeletons went on to be used in a number of arguments about race and relatedness. We now know – as the Victorian researchers reluctantly admitted – that ruggedness of the limbs is due to activity, and that a better way to examine relatedness is by examining teeth rather than skulls. However, the idea of Europeans as superior, following millennia of culture that sprung from the Egyptians and continued by the Greeks and Romans, was read into every archaeological discovery, bolstering the argument that European superiority was normal. Despite our focus on the scientific method and attempting to keep our beliefs out of our research, I wonder what future archaeologists will find problematic about current archaeology.

Stress in non-human animals

Originally published on Student Engagers on October 14, 2015, in association with our exhibition Stress: Approaches to the First World War, open October 12-November 20.

A pig’s skull may not be the first thing that comes to mind when thinking of stress. You may not think of non-human animals at all. However, humans are not the only animals that experience stress and related emotions. Many of the behaviors associated with human psychological disorders can be seen in domestic animals. Divorced from the dialogue of consciousness and cognition, animals have been seen exhibiting symptoms of depression, mourning, and anxiety. Wild animals in captivity ranging from elephants to wolves have exhibited signs of post-traumatic stress disorder; this is also one argument for why orcas in captivity suddenly turn violent. According to noted animal behaviorist Temple Grandin, animals that live in impoverished environments or are prevented from performing natural behaviors develop “stereotypic behaviors” such as rocking, pacing, biting the bars of their enclosure or themselves, and increased aggression. Many of these bear similarities to the behaviors of individuals with a variety of psychological conditions, and (most interestingly) when such animals are given psychopharmaceuticals, the behaviors cease.

The First World War unleashed horrors on human soldiers, resulting in shell shock (now called PTSD). However, many animals were also used, including more than one million horses on the Allied side, mostly supplied by the colonies – but 900,000 did not return home. Mules and donkeys were also used for traction and transport, and dogs and pigeons were used as messengers. (Actually, the Belgians used dogs to pull small wagons.) Since the advent of canning in the 19th century, armies no longer had to herd their food along, but apparently the Gloucestershire Regiment brought along a dairy cow to provide fresh milk, although she may have served as a regimental mascot as well – some units kept dogs and cats too.


Horses in gas masks. Sadly, they often confused these with feed bags and proceeded to eat them. Credit Great War Photos.

The RSPCA set up a fund for wounded war horses and operated field veterinary hospitals, treating 2.5 million animals and returning 85% of them to duty. Some 484,143 British large animals were killed in combat – roughly half the number of British soldiers killed. Estimates place the total number of horses killed at around 8 million.

The horses in particular had a strong impact on the soldiers. Researcher Jane Flynn points out that a positive horse-rider relationship was imperative for both on the battlefield. She cites a description of the painting Goodbye Old Man: “Imagine the terror of the horse that once calmly delivered goods in quiet suburban streets as, standing hitched to a gun-carriage amid the wreck and ruin at the back of the firing line, he hears above and all around him the crash of bursting shells. He starts, sets his ears back, and trembles; in his wondering eyes is the light of fear. He knows nothing of duty, patriotism, glory, heroism, honour — but he does know that he is in danger.”

"Goodbye, Old Man" used in a poster. Credit RSPCA.

“Goodbye, Old Man” used in a poster. Credit RSPCA.

Historical texts tend to consider horses and other animals used in war as equipment secondary to humans, and even the RSPCA only covers their physical health. Horses don’t only have relationships with their riders, but with the other horses nearby and with the environment. They can easily be frightened by loud noises, not to mention explosions, ground tremors from trench cave-ins, and other things that scared the humans sharing their situation. Many horse owners (many pet owners, in fact) argue that their horses have and express human-like emotions. Even if we can’t verify this scientifically, we can observe that horses experience fear, rage, confusion, gain, loss, happiness and sadness. Grandin argues that horses have the capacity to experience and express these simple emotions as well as recall and react to past experiences, but are unable to rationalize these emotions: they simply feel. It’s impossible to say whether that makes wading through a field of dead comrades more frightening for a horse or a human.

In Egypt, I took a horse ride around the pyramids. The trail led us through what turned out to be an area of the desert where stable owners execute their old horses, resulting in a swath of rotting corpses. I was shocked, and my horse displayed all the signs of fear: ears pinned back, wide eyes, tensed muscles. He recovered after we’d left the area, but I wondered what psychological impact having that experience day after day would cause. If horses are able to remember frightening experiences, they might be able to experience post-traumatic stress and be as shell-shocked as the returning soldiers. British soldiers reported that well-bred horses experienced more “shell-shock” than less-pedigreed stock – bolting, stampeding, and going berserk on the battlefield, all typical behaviors of horses under duress – but did not elaborate on the long-term consequences of this behavior. It would be interesting to explore accounts of horses that survived the war (and were returned to their original owners instead of being sold in Europe or slaughtered) to see whether they exhibited the stereotypic behaviors of stress and shell-shock just as human soldiers did.

Sources

Thanks to Anna Sarfaty for advice.

Animals in World War One. RSPCA.org.

Bekoff, Mark. Nov 29, 2011. Do wild animals suffer from PTSD and other psychological disorders? Psychology Today (online).

Flynn, Jane. 2012. Sense and sentimentality: a critical study of the influence of myth in portrayals of the soldier and horse during World War One. Critical Perspectives on Animals in Society: Conference Proceedings.

Grandin, Temple and Johnson, Catherine. 2005. Animals in Translation: Using the Mysteries of Autism to Decode Animal Behavior. New York: Scribner.

Shaw, Matthew. n.d. Animals and war. British Library Online: World War One.

Tucker, Spencer C. (ed.) 1996. The European Powers in the First World War: An Encyclopedia. New York: Garland.

Question of the Week: What is that object?

Originally published on Student Engagers on February 18, 2015.

One of the most frequent questions I’m asked isn’t about history or osteology. It’s “can you tell me what that thing is?” Many objects in the UCL Museums don’t have explanatory labels, so it’s understandable that visitors don’t know. However, it’s usually the case that we don’t know either! In archaeology, a number of excavated items are recorded with detailed descriptions of size, weight, and material, but no conclusion as to the purpose of the object. The Petrie houses a number of smooth pebbles from predynastic-era graves. When those people had the technology to make finely crafted pottery and intricately carved stone vessels, why be buried with a simple stone? The anthropological answer is that it served a ritualistic purpose; the humanistic answer is that somebody saw a smooth stone they liked, one that felt good to keep in the hand and rub, and it became important to them. I have stones that remained in coat pockets for years, getting smoother and smoother from my touch. It doesn’t necessarily have to be “totemic”. Other artifacts are confusing because they look like modern items. One visitor asked me about a clay object that looked like a cog.

UC18527

UC18527. Image courtesy Petrie Catalogue.

I had no idea what it was! We do have various sorts of cogs from ancient times, like waterwheels and the Antikythera mechanism, but in this case I thought I could solve the mystery quite easily. The object had a UC number, indicating its place in the Petrie catalogue. I looked it up on the web (the catalogue is open-access) and found that it’s actually an oil lamp: if you look closely, you can see traces of burning in the centre. The same goes for the Grant Museum’s catalogue – if you can find the specimen’s number, you can look up the name. Then it’s fun to Google the animal and see what it looked like with all its fur on – the tenrec is my favourite example. With only the skeleton it looks like any other small mammal, but when complete it’s like a cross between a hedgehog and a fiery caterpillar.

If you’d like to know what something is, please do ask! We may not know, but we love to learn about all the amazing objects around us.

Did we evolve to run?

Originally published on Student Engagers on January 15, 2015.

A few years ago, spurred by my research on just how deleterious the sedentary lifestyle of a student can be to one’s health, I decided to start running. Slowly at first, then building up longer distances with greater efficiency. A few months ago, I ran a half-marathon. At the end, exhausted and depleted, I wondered: why can we do this? Why do we do this? What makes humans want to run ridiculous distances? A half-marathon isn’t even the start – there are people who run full marathons back-to-back, ultra-marathons of 50 miles or more, and occasionally an amazing individual like Zoe Romano, who surpassed all expectations and ran across the US and then ran the Tour de France.[1] Yes, ran is the correct verb – not cycled.

I’ve met so many people who tell me they can’t run. They’re too ungainly, their bums are wobbly, they’re worried about their knees, they’re too out of shape. Evolution argues otherwise. There are a number of researchers investigating the evolutionary trends for humans to be efficient runners, arguing that we are all biomechanically equipped to run (wobbly bums or not). If you have any question whether you can or cannot run, just check out the categories of races in the Paralympic Games. For example, the T-35 athletics classification is for athletes with impairments in their ability to control their muscles; in 2012, Iiuri Tsaruk set a world record for the 200m at 25.86s, which is only about 6 seconds off Bolt’s world record of 19.19 and about 4 seconds off Flo-Jo’s women’s record (doping aside). 2012 also saw the world record for an athlete with visual impairment: Assia El Hannouni ran the 200m in 24.46.[2] You try running that fast. Now try running that fast with significant difficulty controlling your limbs or seeing. If you’re impressed, think about these athletes the next time you say you can’t run.


Paralympian Scott Rearden. Credit Wikimedia Commons.

Let’s think about bipedalism for a bit. Which other animals walk on two legs besides us? Birds, for a start, although flight is usually the primary mode of transport for all except penguins and ostriches. On the ground, birds are more likely to hop quickly than to walk or run. Kangaroos also hop. Apes are able to walk bipedally, but normally use their arms as well. Cockroaches and lizards can get some speed over short distances by running on their back legs. However, humans are different: we always walk on two legs, keep the trunk erect rather than bending forward as apes do, keep the entire body relatively still, and use less energy thanks to elastic energy stored in the tendons during the gait.[3] Apparently we can group our species of strange hairless apes into the category “really weird sorts of locomotion” along with kangaroos and ostriches.

Following this logic, Lieberman et al point out that a human could be bested in a fight with a chimp on pure strength and agility, can easily be outrun by a horse or a cheetah in a 100m race, and has no claws or sharp teeth: “we are weak, slow, and awkward creatures.”[4] We do have two things in our favor, though – enhanced cognitive capabilities and the ability to run really long distances. Being awkwardly bipedal naked apes actually helps more than one would think. First, bipedalism decouples breathing from stride. Imagine a quadruped running – as the legs come together in a gallop, the back arches and forces the lungs to exhale like a bellows. Since humans are upright, the motion of our legs doesn’t necessarily affect our breathing pattern. Second, we sweat in order to cool down during physical exertion. (In particular, I sweat loads.) Panting is the most effective way for a hairy animal to cool down, as hair or fur traps sweat and doesn’t allow for effective evaporation (imagine standing in a cool breeze while covered in sweat – this doesn’t work for a dog). But it’s impossible to pant while running. So not only are humans able to regulate breathing at speed, but we can cool down without stopping for breath.

From a purely skeletal perspective, there is more evidence for the evolution of running. Human heads are stabilized via the nuchal ligament in the neck, which is present only in species that run (and some with particularly large heads), and we have a complex vestibular system that becomes immediately activated to ensure stability while running. The insertion on the calcaneus (heel bone) for the Achilles tendon is long in humans, increasing the spring action of the Achilles.[5] Humans have relatively long legs and a huge gluteus maximus muscle (the source of the wobbly bum). All of these changes are seen in Homo erectus, which evolved 1.9 million years ago.[6]


Diagram of a Homo erectus skeleton with adaptations for running (r) and walking (w). From Lieberman 2010.

The evolutionary explanation for this is the concept of endurance or persistence hunting. In a hot climate, ancient Homo could theoretically run an animal to death by inducing hyperthermia. This is also where we come full circle and bring in the cognitive capabilities of group work. A single individual can’t chase an antelope until it expires from heat stroke because it’ll keep going back into the herd and then the herd will scatter. But a team of persistence hunters can. If persistence hunting is how humans (or other Homo species) evolved to be great at long-distance running, it’s also why humans developed larger brains: meat, marrow, and brain provided the excess calories needed to nourish that great energy-suck that is the brain. However, persistence hunting is a skill that mostly went by the wayside as soon as projectile weapons (arrowheads and spears) were invented, possibly around 300,000 years ago. Why? Because humans, due to our large brains, are very inventive, but also very lazy. Any expenditure of energy must be made up for by calories consumed later, at least in a hunting and gathering environment – so less energy spent means less food needed; a metabolic balance. Thus we have the reason why humans can run, but also why we don’t really want to. (As an aside, some groups such as the Kalahari Bushmen practiced persistence hunting until recently, despite having projectile weapon technology, probably because of skill traditions and the retention of cultural practices. Humans are always confounding like that.)

Which brings up another point: gathering. As I’ve written before, contemporary hunter-gatherers like the Hadza rely much more on gathering than hunting. Additionally, it is possible that the first meat eaten by Homo species was scavenged rather than hunted. There is no equivalent evolutionary argument for “endurance gathering”. If ancient humans spent much more time gathering, why would we evolve these particular running mechanisms? As with many queries into human evolution, these questions have yet to be answered. Either way, it’s clear that humans have a unique ability. Your wobbly bum is, in fact, the key to your running. Another remaining question is why we still have the desire to keep running these ridiculous distances – a topic for a future post, perhaps.

Sources

[1] http://www.zoegoesrunning.com

[2] Check out all the records at http://www.paralympic.org/results/historical

[3] Alexander, RM. 2004. Bipedal animals, and their differences from humans. J Anat 204(5), 321-330.

[4] Lieberman, DE, Bramble, DM, Raichlen, DA, Shea, JJ. 2009. Brains, Brawn, and the Evolution of Human Endurance Running Capabilities. In The First Humans – Origins and Early Evolution of the Genus Homo (Grine, FE, Fleagle, JG, Leakey, RE, eds.) New York: Springer, pp 77-98.

[5] Raichlen, DA, Armstrong, H, Lieberman, DE. 2011. Calcaneus length determines running economy: implications for endurance running performance in modern humans and Neandertals. J Human Evol 60(3): 299-308.

[6] Lieberman, DE. 2010. Four Legs Good, Two Legs Fortuitous: Brains, Brawn, and the Evolution of Human Bipedalism. In In the Light of Evolution (Jonathan B Losos, ed.) Greenwood Village, CO: Roberts & Co, pp 55-71.

Movement Taster – Movement in Premodern Societies

Originally published on Student Engagers on May 14, 2014, to advertise our event Movement.

The following is a taster for the Student Engagers’ Movement event taking place at UCL on Friday 23 May. Stacy, a researcher in Archaeology, will be discussing movement through the lens of biomechanics.

Imagine you’re in the grocery store. You start in the produce section, taking small steps between items. You hover by the bananas, decide you won’t take them, and walk a few steps further for apples, carrots, and cabbage. You then take a longer walk, carefully avoiding the ice cream on your way to the dairy fridge for some milk. You hover, picking out the semi-skimmed and some yogurt, before taking another long walk to the bakery. This pattern repeats until you’re at the checkout.

What you may not realize is that this pattern of stops and starts with long strides in between may be intrinsic to human movement, if not common to many foraging animals. A recent study of the Hadza, a hunting and gathering group in Tanzania, shows that they practice this type of movement, known as the Lévy walk (or Lévy flight in birds and bumblebees). It makes sense on a gathering level: you’ve exhausted all your resources in one area, so you move to another locale further afield, then another, before returning to your base. When the Hadza have finished all the resources in an area, they’ll move camp, allowing the resources to regrow (for us, this is the shelves being restocked). This study links us with the Hadza, and the Hadza with what we can loosely term “ancient humans and their ancestors”. A simulated example of this kind of walk is sketched below.
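Here’s a minimal Python sketch of the pattern (my own illustration, not code from the study): step lengths are drawn from a power-law distribution, so most moves are short, punctuated by rare long relocations. The exponent mu is a free parameter; values around 2 are the ones usually discussed as optimal for foraging.

```python
import numpy as np

rng = np.random.default_rng(0)

def levy_walk(n_steps, mu=2.0, l_min=1.0):
    """Simulate a 2-D Levy walk: step lengths follow a power law
    P(l) ~ l**(-mu) for l >= l_min, with uniformly random headings."""
    u = rng.random(n_steps)
    lengths = l_min * u ** (-1.0 / (mu - 1.0))  # inverse-transform sampling
    angles = rng.uniform(0.0, 2.0 * np.pi, n_steps)
    steps = np.column_stack((lengths * np.cos(angles),
                             lengths * np.sin(angles)))
    return np.vstack(([0.0, 0.0], np.cumsum(steps, axis=0)))  # positions

path = levy_walk(500)
print(path[-1])  # where the forager ends up after 500 moves
```

Plot the path and you’ll see exactly the grocery-store pattern: tight clusters of short steps (browsing one patch) connected by long, straight relocations (heading for the dairy fridge).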


Diagram of a Lévy walk. Credit Leif Svalgaard.

It’s unsurprising that the Hadza were used to examine the Lévy walk and probabilistic foraging strategies. As one of the few remaining hunter-gatherer groups on the planet, they are often used in scientific studies aiming to find out how humans lived, ate, and moved thousands of years ago, before the invention of agriculture. The Hadza have been remarkably amenable to being studied by researchers investigating everything from female waist-to-hip ratios and body fat percentage to the gut microbiome and botanical surveys. Tracking their movement around the landscape using GPS units is one of the most ingenious approaches yet!

Much of the theoretical background to my work is based on human movement around the landscape. The more an individual moves, the more his or her leg bones will adapt to that type of movement. Thus it is important to examine how much movement cultures practicing different subsistence strategies perform. The oft-cited hypothesis is that hunter-gatherers performed the most walking or running activity, and that the transition to agriculture decreased movement. An implicit assumption is that males, no matter the society, always performed more work requiring mobility than females. This has been upheld in a number of archaeological studies: between the Italian Late Upper Paleolithic and the Italian Neolithic, overall femoral strength decreased, but the males’ decreased more; over the course of the Classic Maya period (350-900 AD), the difference in leg strength between males and females decreased, solely due to a reduction in the strength of the males. The authors posit that this is due to an economic shift allowing the males to be free from hard physical labour.

However, I take issue with the hypothesis that females always performed less work. The prevailing idea of a hunting man settling down to farm work while the gathering woman keeps to household chores and gathering local vegetables is not borne out by the Hadza. First, both Hadza men and women gather. Their resources and methods differ – men gather alone and hunt small game while women and children gather in groups – but another GPS study found that Hadza women walk up to 15 km per day on a gathering excursion (men walk up to 18 km). 15 km is not exactly sitting around the camp peeling tubers. Another discrepancy comes from bone research and the effect of testosterone: given similar levels of activity, a man is likely to build more bone than a woman, leading archaeologists to believe he did more work. Finally, hunting for big game – at least for the Hadza – occurs rarely (about once every 30 hunter-days, according to one researcher) and may be of more social significance than biomechanical, and gathered berries account for as many calories as meat; perhaps we should rethink our nomenclature and call pre-agricultural groups gatherer-gatherers or just foragers.

For a video of Hadza foraging techniques, click here.

For a National Geographic photo article, click here.

Sources:

Marchi, D. 2008. Relationships between lower limb cross-sectional geometry and mobility: the case of a Neolithic sample from Italy. AJPA 137, 188-200.

Marlowe, FW. 2010. The Hadza: Hunter-Gatherers of Tanzania. Berkeley: Univ. California Press.

O’Connell, J and Hawkes, K. 1998. Grandmothers, gathering, and the evolution of human diets. 14th International Congress of Anthropological and Ethnological Sciences.

Raichlen, DA, Gordon, AD, AZP Mabulla, FW Marlowe, and H Pontzer. 2014. Evidence of Lévy walk foraging patterns in human hunter–gatherers. PNAS 111:2, 728-733.

Wanner, I.S., T. Sierra Sosa, K.W. Alt, and V.T. Blos. 2007. Lifestyle, occupation, and whole bone morphology of the pre-Hispanic Maya coastal population from Xcambó, Yucatan, Mexico. IJO 17, 253-268.