Really interesting happenings in English historical linguistics:
– The Great Vowel Shift, or the beginning of “Modern English”.
The Great Vowel Shift? One of the many reasons English spelling is so difficult to master. The printing press, which reached England in the late 1400s, froze the spellings of Middle English, while the Great Vowel Shift changed the pronunciations after those spellings were already in print. Nobody wanted to re-typeset the books, so Modern English kept a spelling system that no longer matched its sounds.
– H-loss, which occurred sometime between 1400 and 1600. The “gh” in many English words, once pronounced /x/, fell silent, so that “taught” and “taut” are now homophones. This is an example of cheshirization, named after the cat in Alice’s Adventures in Wonderland: the sound disappears but leaves a trace behind (the “gh” spelling), like the cat’s lingering grin.
– Initial cluster reductions: 1400–1600. The words “know” and “no” now sound alike. Cluster reductions do not just belong to kids with phonological disorders; the whole of the English language cluster-reduced.
– Grimm’s Law: the systematic way the Germanic consonants differ from the consonants of the other Indo-European languages. Latin “pes, pedis” is cognate with English “foot” because p > f and d > t.
- bʰ > b > p > ɸ
- dʰ > d > t > θ
- gʰ > g > k > x
- gʷʰ > gʷ > kʷ > xʷ
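The chain shifts above are regular enough to state as a lookup table. Here is a minimal toy sketch in Python; the segment inventory follows the four chains listed above, and the example inputs are simplified illustrations, not careful PIE reconstructions:

```python
# Toy sketch of Grimm's Law as a chain shift: each Proto-Indo-European
# stop maps one step along its chain to the Germanic reflex.
# Symbols are IPA-style strings; example inputs are illustrative only.
GRIMM = {
    "bʰ": "b", "dʰ": "d", "gʰ": "g", "gʷʰ": "gʷ",  # voiced aspirates > voiced stops
    "b": "p",  "d": "t",  "g": "k",  "gʷ": "kʷ",    # voiced stops > voiceless stops
    "p": "ɸ",  "t": "θ",  "k": "x",  "kʷ": "xʷ",    # voiceless stops > fricatives
}

def apply_grimm(word):
    """Apply Grimm's Law segment by segment, matching longest keys first
    so that multi-character segments like 'gʷʰ' win over plain 'g'."""
    keys = sorted(GRIMM, key=len, reverse=True)
    out, i = [], 0
    while i < len(word):
        for k in keys:
            if word.startswith(k, i):
                out.append(GRIMM[k])
                i += len(k)
                break
        else:  # vowels and anything not in the table pass through unchanged
            out.append(word[i])
            i += 1
    return "".join(out)

print(apply_grimm("ped"))   # ɸet  — cf. Latin "pes, pedis" vs. English "foot"
print(apply_grimm("trey"))  # θrey — cf. Latin "tres" vs. English "three"
```

Note that the outputs show the first Germanic stage (ɸ, θ, x); later developments in English turned ɸ into f, which is why “foot” is spelled the way it is.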
Worth reading if you don’t want to watch:
Based on epidemiological data, we know that one of the causes, or one of the associations, I should say, is advanced paternal age, that is, increasing age of the father at the time of conception. In addition, another vulnerable and critical period in terms of development is when the mother is pregnant. During that period, while the fetal brain is developing, we know that exposure to certain agents can actually increase the risk of autism. In particular, there’s a medication, valproic acid, which mothers with epilepsy sometimes take, we know can increase that risk of autism. In addition, there can be some infectious agents that can also cause autism.
And when you look at those concordance ratios, one of the striking things that you will see is that in identical twins, that concordance rate is 77 percent. Remarkably, though, it’s not 100 percent. It is not that genes account for all of the risk for autism, but yet they account for a lot of that risk, because when you look at fraternal twins, that concordance rate is only 31 percent. On the other hand, there is a difference between those fraternal twins and the siblings, suggesting that there are common exposures for those fraternal twins that may not be shared as commonly with siblings alone.
As we did this, though, it was really quite humbling, because we realized that there was not simply one gene for autism. In fact, the current estimates are that there are 200 to 400 different genes that can cause autism. And that explains, in part, why we see such a broad spectrum in terms of its effects.
How are we going to intervene? It’s probably going to be a combination of factors. In part, in some individuals, we’re going to try and use medications. And so in fact, identifying the genes for autism is important for us to identify drug targets, to identify things that we might be able to impact and can be certain that that’s really what we need to do in autism. But that’s not going to be the only answer. Beyond just drugs, we’re going to use educational strategies. Individuals with autism, some of them are wired a little bit differently. They learn in a different way. They absorb their surroundings in a different way, and we need to be able to educate them in a way that serves them best. Beyond that, there are a lot of individuals in this room who have great ideas in terms of new technologies we can use, everything from devices we can use to train the brain to be able to make it more efficient and to compensate for areas in which it has a little bit of trouble, to even things like Google Glass.
Join the Interactive Autism Network.
Here is the proof:
Most other languages maintain some sort of consistent correspondence, whether morpheme-to-meaning (Chinese, Japanese) or grapheme-to-phoneme (Korean, Russian, Italian). English has neither. English requires memorization because there is no way to sound out English words and always arrive at the correct pronunciation or spelling.
TED Talk given by Mary Lou Jepsen: Could future devices read images from our brains?
Starting at 5:33 is the most exciting part for people with aphasia. The excerpt below is taken from the transcript at TED, but you have to watch the video to see why this is so exciting, because the transcript doesn’t show the results of the computer’s interpretation of neural activity.
Next let me share with you one other experiment, this from Jack Gallant’s lab at Cal Berkeley. They’ve been able to decode brainwaves into recognizable visual fields. So let me set this up for you. In this experiment, individuals were shown hundreds of hours of YouTube videos while scans were made of their brains to create a large library of their brain reacting to video sequences. Then a new movie was shown with new images, new people, new animals in it, and a new scan set was recorded. The computer, using brain scan data alone, decoded that new brain scan to show what it thought the individual was actually seeing. On the right-hand side, you see the computer’s guess, and on the left-hand side, the presented clip. This is the jaw-dropper. We are so close to being able to do this. We just need to up the resolution. And now remember that when you see an image versus when you imagine that same image, it creates the same brain scan.
This is incredible. Could we train a machine to interpret thoughts into images? And then to interpret images into words? Could we make that machine small and inexpensive? This has been the trajectory of all technology. Soon people with aphasia are going to be communicating without speech therapy if this technology or a similar one becomes widely available. Technology is replacing the field of speech pathology, and that is exciting because it means that more people will be able to communicate well.
TED Talk. Rupal Patel. Amazing.
Go, donate your voice.