We present the latest updates on ChatGPT, Bard and other competitors in the artificial intelligence arms race.

Full Transcript

LAUREN LEFFER: At the end of November, it’ll be one year since ChatGPT was first made public, rapidly accelerating the artificial intelligence arms race. And a lot has changed over the course of 10 months.

SOPHIE BUSHWICK: In just the past few weeks, both OpenAI and Google have introduced big new features to their AI chatbots.

LEFFER: And Meta, Facebook’s parent company, is jumping in the ring too, with its own public-facing chatbots.

BUSHWICK: I mean, we learned about one of these new updates just minutes before recording this episode of Tech Quickly, the version of Scientific American’s Science Quickly podcast that keeps you updated on the lightning-fast advances in AI. I’m Sophie Bushwick, tech editor at Scientific American.

LEFFER: And I’m Lauren Leffer, tech reporting fellow.

[Clip: Show theme music]

BUSHWICK: So what are these new features these AI models are getting?

LEFFER: Let’s start with multimodality. Public versions of both OpenAI’s ChatGPT and Google’s Bard can now interpret and respond to image and audio prompts, not just text. You can speak to the chatbots, kind of like the Siri feature on an iPhone, and get an AI-generated audio reply back. You can also feed the bots pictures, drawings or diagrams, ask for information about those visuals, and get a text response.

BUSHWICK: That is awesome. How can people get access to this?

LEFFER: Google’s version is free to use, while OpenAI is currently limiting its new feature to premium subscribers who pay $20 per month.

BUSHWICK: And multimodality is a big change, right? When I say “large language model,” that used to mean text and text only.

LEFFER: Yeah, it’s a really good point. ChatGPT and Bard were initially built to parse and predict just text.
We don’t know exactly what’s happened behind the scenes to get these multimodal models. But the basic idea is that these companies probably added together aspects of different AI models that they’ve built—say, existing ones that auto-transcribe spoken language or generate descriptions of images—and then used those tools to expand their text models into new frontiers.

BUSHWICK: So it sounds like behind the scenes we’ve got this sort of Frankenstein’s monster of models?

LEFFER: Sort of. It’s less Frankenstein, more kind of like Mr. Potato Head, in that you have the same basic body, just with new bits added on. Same potato, new nose. Once you add new capacities to a text-based AI, you can train your expanded model on mixed-media data, like photos paired with captions, and boost its ability to interpret images and spoken words. And the resulting AIs have some really neat applications.

BUSHWICK: Yeah, I’ve played around with the updated ChatGPT, and this ability to analyze photos really impressed me.

LEFFER: Yeah, I had both Bard and ChatGPT try to describe what type of person I am based on a photo of my bookshelf.

BUSHWICK: Oh my god, it’s the new internet personality test! So what does your AI book horoscope tell you?

LEFFER: So not to brag, but to be honest both bots were pretty complimentary (I have a lot of books). But beyond my own ego, the book test demonstrates how people could use these tools to produce written interpretations of images, including inferred context. You know, this might be helpful for people with limited vision or other disabilities, and OpenAI actually tested its visual GPT-4 with blind users first.

BUSHWICK: That’s really cool. What are some other applications here?

LEFFER: Yeah, I mean, this sort of thing could be helpful for anyone—sighted or not—trying to understand a photo of something they’re unfamiliar with. Think, like, bird identification or repairing a car.
In a totally different example, I also got ChatGPT to correctly split up a complicated bar tab from a photo of a receipt. It was way faster than I could’ve done the math, even with a calculator.

BUSHWICK: And when I was trying out ChatGPT, I took a photo of the view from my office window, asked ChatGPT what it was (it’s the Statue of Liberty), and then asked it for directions. And it not only told me how to get the ferry, but gave me advice like “wear comfortable shoes.”

LEFFER: The directions thing was pretty wild.

BUSHWICK: It almost seemed like magic, but, of course…

LEFFER: It’s definitely not. It’s still just the result of lots and lots of training data, fed into a very big and complicated network of computer code. But even though it’s not a magic wand, multimodality is a significant enough upgrade that it might help OpenAI attract and retain users better than it has been. You know, despite all the news stories going around, fewer people have actually been using ChatGPT over the past three months. Usage dropped by about 10% for the first time in June, another 10% in July, and about 3% in August. The prevailing theory is that this has to do with summer break from school—but still, losing users is losing users.

BUSHWICK: That makes sense. And this is also a problem for OpenAI because it has all this competition. For instance, we have Google, which is keeping its own edge by taking its multimodal AI tool and putting it into a bunch of different products.

LEFFER: You mean like Gmail? Is Bard going to write all my emails from now on?

BUSHWICK: I mean, if you want it to. If you have a Gmail account, or even if you use YouTube or Google, or have files stored in Google Drive, you can opt in and give Bard access to this individual account data. And then you can ask it to do things with that data: find a specific video, summarize text from your emails; it can even offer specific location-based information.
Basically, Google seems to be making Bard into an all-in-one digital assistant.

LEFFER: Digital assistant? That sounds kind of familiar. Is that at all related to the virtual chatbot pals that Meta is rolling out?

BUSHWICK: Sort of! Meta just announced it’s not introducing just one AI assistant, it’s introducing all these different AI personalities that you’re supposedly going to be able to interact with in Instagram or WhatsApp or its other products. The idea is it’s got one main AI assistant you can use, but you can also choose to interact with an AI that looks like Snoop Dogg and is supposedly modeled off specific personalities. You can also interact with an AI that has a specialized function, like a travel agent.

LEFFER: When you’re listing all of these different versions of an AI avatar you can interact with, the only thing my mind goes to is Clippy from old-school Microsoft Word. Is that basically what this is?

BUSHWICK: Sort of. You can have, like, a Mr. Beast Clippy, where, when you’re talking with it – you know how Clippy kind of bounced and changed shape – these images of the avatars will sort of move as if they’re actually participating in the conversation with you. I haven’t gotten to try this out myself yet, but it does sound pretty freaky.

LEFFER: Okay, so we’ve got Mr. Beast, we’ve got Snoop Dogg. Anyone else?

BUSHWICK: Let’s see, Paris Hilton comes to mind. And there’s a whole slew of these. And I’m kind of interested to see whether people actually choose to interact with their favorite celebrity version or whether they choose the less anthropomorphized versions.

LEFFER: So these celebrity avatars, or whichever form you’re going to be interacting with Meta’s AI in – is it also going to be able to access my Meta account data? I mean, there’s, like, so much concern out there already about privacy and large language models.
If there’s a risk that these tools could regurgitate sensitive information from their training data or user interactions, why would I let Bard go through my emails or Meta read my Instagram DMs?

BUSHWICK: Privacy policies depend on the company. According to Google, it’s taken steps to ensure privacy for users who opt into the new integration feature. These steps include not training future versions of Bard on content from user emails or Google Docs, not allowing human reviewers to access users’ personal content, not selling the information to advertisers, and not storing all this data for long periods of time.

LEFFER: Okay, but what about Meta and its celebrity AI avatars?

BUSHWICK: Meta has said that, for now, it won’t use user content to train future versions of its AI…but that might be coming soon. So privacy is still definitely a concern, and it goes beyond these companies. I mean, literal minutes before we started recording, we read the news that Amazon has announced it’s training a large language model on data that’s going to include conversations recorded by Alexa.

LEFFER: So conversations that people have in their homes with their Alexa assistant.

BUSHWICK: Exactly.

LEFFER: That sounds so scary to me. I mean, in my mind, that’s exactly what people have been afraid of with these home assistants for a long time: that they’d be listening, recording and transmitting that data to somewhere that the person using it no longer has control over.

BUSHWICK: Yeah, anytime you let another service access information about you, you are opening up a new potential portal for leaks, and also for hacks.

LEFFER: It’s completely unsettling. I mean, do you think that the benefits of any of these AIs outweigh the risks?

BUSHWICK: So it’s really hard to say right now. Google’s AI integration, multimodal chatbots, and, I mean, just these large language models in general, they are all still in such early experimental stages of development.
I mean, they still make a lot of mistakes, and they don’t quite measure up to more specialized tools that have been around for longer. But they can do a whole lot all in one place, which is super convenient, and that can be a big draw.

LEFFER: Right, so they’re definitely still not perfect, and one of those imperfections: they’re still prone to hallucinating incorrect information, correct?

BUSHWICK: Yes, and that brings me to one last question about AI before we wrap up: Do eggs melt?

LEFFER: Well, according to an AI-generated search result that went viral last week, they do.

BUSHWICK: Oh, no.

LEFFER: Yeah, a screenshot posted on social media showed Google displaying a top search snippet that claimed, “an egg can be melted,” and then it went on to give instructions on how you might melt an egg. Turns out, that snippet came from a Quora answer generated by ChatGPT and boosted by Google’s search algorithm. It’s more of that AI inaccuracy in action, exacerbated by search engine optimization—though at least this time around it was pretty funny, and not outright harmful.

BUSHWICK: Google and Microsoft are both working to incorporate AI-generated content into their search engines, and this melted egg misinformation struck me because it’s such a perfect example of why people are worried about that happening.

LEFFER: Mmm…I think you mean eggs-ample.

BUSHWICK: Egg-zactly.

[Clip: Show theme music]

Science Quickly is produced by Jeff DelViscio, Tulika Bose, Kelso Harper and Carin Leong. Our show is edited by Elah Feder and Alexa Lim. Our theme music was composed by Dominic Smith.

LEFFER: Don’t forget to subscribe to Science Quickly wherever you get your podcasts. For more in-depth science news and features, go to ScientificAmerican.com. And if you like the show, give us a rating or review!

BUSHWICK: For Scientific American’s Science Quickly, I’m Sophie Bushwick.

LEFFER: I’m Lauren Leffer. See you next time!
ABOUT THE AUTHOR(S)

Sophie Bushwick is an associate editor covering technology at Scientific American. Follow her on Twitter @sophiebushwick

Lauren Leffer is a tech reporting fellow at Scientific American. Previously, she has covered environmental issues, science and health. Follow her on Twitter @lauren_leffer
United Nations language staff come from all over the globe and make up a uniquely diverse and multilingual community. What unites them is the pursuit of excellence in their respective areas, the excitement of being at the forefront of international affairs and the desire to contribute to the realization of the purposes of the United Nations, as outlined in the Charter, by facilitating communication and decision-making. United Nations language staff in numbers The United Nations is one of the world's largest employers of language professionals. Several hundred such staff work for the Department for General Assembly and Conference Management in New York, Geneva, Vienna and Nairobi, or at the United Nations regional commissions in Addis Ababa, Bangkok, Beirut, Geneva and Santiago. Learn more at Meet our language staff. What do we mean by “language professionals”? At the United Nations, the term “language professional” covers a wide range of specialists, such as interpreters, translators, editors, verbatim reporters, terminologists, reference assistants and copy preparers/proofreaders/production editors. Learn more at Careers. What do we mean by “main language”? At the United Nations, “main language” generally refers to the language of an individual's higher education. For linguists outside the Organization, on the other hand, “main language” is usually taken to mean the “target language” into which an individual works. How are language professionals recruited? The main recruitment path for United Nations language professionals is through competitive examinations for language positions, whereby successful examinees are placed on rosters for recruitment and are hired as and when job vacancies arise. Language professionals from all regions, who meet the eligibility requirements, are encouraged to apply. Candidates are judged solely on their academic and other qualifications and on their performance in the examination. Nationality/citizenship is not a consideration. 
Learn more at Recruitment. What kind of background do United Nations language professionals need? Our recruits do not all have a background in languages. Some have a background in other fields, including journalism, law, economics and even engineering or medicine. These are of great benefit to the United Nations, which deals with a large variety of subjects. Why does the Department have an outreach programme? Finding the right profile of candidate for United Nations language positions is challenging, especially for certain language combinations. The United Nations is not the only international organization looking for skilled language professionals, and it deals with a wide variety of subjects, often politically sensitive. Its language staff must meet high quality and productivity standards. This is why the Department has had an outreach programme focusing on collaboration with universities since 2007. The Department hopes to build on existing partnerships, forge new partnerships, and attract the qualified staff it needs to continue providing high-quality conference services at the United Nations. Learn more at Outreach. #metaglossia_mundus
"IALC Conference, Wales 2024

Live your language: Increasing the use of minority and official languages

The eighth conference of the International Association of Language Commissioners (IALC) will be held in Wales this year on 11 June. The conference will provide an opportunity to explore the real effects of legislating in favour of languages in Wales and beyond. As well as practical sessions sharing the experience of Welsh institutions, there will be contributions by the following main speakers:

- Raymond Théberge, Commissioner of Official Languages, Canada
- Professor Fernand de Varennes, Former UN Special Rapporteur on minority issues
- Professor Rob Dunbar, UK representative on the Committee of Experts of the European Charter for Regional or Minority Languages
This event is supported by the Welsh Government. The Welsh Language Commissioner, Efa Gruffudd Jones, is the current chair of the IALC. The conference was last held in Wales in 2017.

Conference Programme

Official Launch of the IALC Conference

A launch event will be held prior to the conference on 10 June in Cardiff Bay. The launch will be an opportunity to celebrate ten years since the establishment of the IALC in the company of:

- Efa Gruffudd Jones, Welsh Language Commissioner and Chair of the IALC
- Delyth Jewell MS, Chair of the Culture, Communications, Welsh Language, Sport, and International Relations Committee
- Professor Fernand de Varennes, Former UN Special Rapporteur on minority issues
Renowned poet Mererid Hopwood will read her specially commissioned poem to mark the occasion, and entertainment will be provided by the musician and composer Gwilym Bowen Rhys and the Ysgol Hamadryad choir. Please note that attendance at the launch is by invitation only. The launch is sponsored by Delyth Jewell MS. A special thanks to Darwin Gray for their generous sponsorship. www.darwingray.com" #metaglossia_mundus
"For those of us who grew up in the wake of the Second Vatican Council—the era of felt banners and guitar Masses—the confusion over what the Catholic Church taught was real. The catechesis of the 1970s became a cautionary tale, a model for what not to do when passing on the faith. The Catechism of the Catholic Church is the first comprehensive document to explain Catholic faith and morals in more than 400 years and has sold about 20 million copies in at least 44 languages.

May 29, 2024 | Patrick Novecosky

Various editions of the Catechism of the Catholic Church. The English-language edition was first published 30 years ago this month.

Our well-meaning teachers told us that “all you need is love,” echoing the Fab Four instead of reaching for the Baltimore Catechism. In 1978, they joked that after Pope John Paul I, we might just get Pope George Ringo. Instead, we got John Paul II. One of the Polish pontiff’s seminal accomplishments was to give us the Catechism of the Catholic Church. The English-language version dropped 30 years ago this week. The Catechism was an instant international best-seller that became a reality only with the intervention of an American businessman. More on that in a moment.

John Paul II inherited the arduous task of unpacking Vatican II, arguably the most important religious event of the 20th century. The Council met in four sessions between 1962 and 1965. Pope St. John XXIII, who opened the Church’s 21st ecumenical council, asked bishops to examine how the Church could best proclaim the Gospel in the modern era. Twenty years later, in 1985, John Paul convoked a meeting of bishops to examine how well the Church had implemented the Council.
The synod returned with several recommendations, including the suggestion that the Church produce a new, comprehensive universal catechism. Critics complained that the Church didn’t need a new Catechism. Papal biographer George Weigel has noted that opponents of the proposal said that Catholics were no longer interested in “conceptual” approaches to religious education. John Paul II persevered.

On May 27, 1994, the pope received the first English-language version of the Catechism. Even though nearly 700,000 copies of that version were on shelves by the end of June, the pope had no idea that he had an international best-seller on his hands. Since its 1992 publication in French, the Catechism has sold about 20 million copies in at least 44 languages. It is the first comprehensive document to explain Catholic faith and morals in more than 400 years. Following the Council of Trent (1545–1563), called in response to the Protestant Reformation, the Vatican published the Roman Catechism in 1566; council fathers found it necessary because both priests and the lay faithful at the time were poorly catechized. John Paul II saw a parallel after the Second Vatican Council. Decades of poor catechesis and the disastrous “Spirit of Vatican II” had the Catholic Church in turmoil.

In developing a new universal catechism, John Paul II didn’t primarily intend to squash dissent. Instead, he wanted to put forth a modern, comprehensive, and authoritative teaching document containing all the tenets of the Catholic faith contained in Scripture and Tradition. He tasked a group of 12 bishops with creating the new catechism. They were led by Cardinal Joseph Ratzinger—the future Pope Benedict XVI, then prefect of the Congregation for the Doctrine of the Faith—and Fr. Christoph Schönborn, later archbishop of Vienna. Like all major undertakings, there were hiccups. One significant obstacle was funding. Tight budgets had virtually brought the Vatican’s ambitious project to a grinding halt.
The project apparently found an unlikely savior in American pizza tycoon Tom Monaghan. The Domino’s Pizza founder was on a pilgrimage to Rome in the late 1980s and met with Schönborn. Monaghan, who would sell Domino’s for $1 billion in 1999, told the Austrian priest that he would sponsor the research, travel, staff, and equipment necessary to complete the project. In 2003, Schönborn acknowledged that without Monaghan, the Catechism might never have been published. Politically correct opposition demanded gender-neutral language. But the English-language version retained the generic use of “man” and “men” for humanity, both men and women. John Paul II also drew fire for the Catechism’s stance on the death penalty after later revisions. The Catechism’s first edition discouraged authorities from capital punishment. In 1997, John Paul ordered an update of paragraph 2267 limiting the use of the death penalty to circumstances where it was the “only possible way of effectively defending human lives against the unjust aggressor.” That revision was drawn from his 1994 encyclical Evangelium Vitae, in which he wrote that “as a result of steady improvements in the organization of the penal system, such cases are very rare, if not practically non-existent.” In 1999, John Paul told Catholics in St. Louis that “modern society has the means of protecting itself without definitively denying criminals the chance to reform. I renew the appeal I made most recently at Christmas for a consensus to end the death penalty, which is both cruel and unnecessary.” Twenty years later, Pope Francis revised the passage again. “The death penalty is inadmissible because it is an attack on the inviolability and dignity of the person, and the Catholic Church works with determination for its abolition worldwide.” In its 30-year run, the Catechism has proven to be an indispensable resource for catechesis and evangelization. 
It spells out the “what” and “why” of Church teaching—and distinguishes the Catholic Church from other world religions. After all, what other major faith has published a comprehensive volume of its beliefs and the reasons behind them? Patrick Novecosky is a Virginia-based journalist, author, international speaker, and pro-life activist. He met Pope St. John Paul II five times. His latest book is 100 Ways John Paul II Changed the World." #metaglossia_mundus
"The Master in Language and Mind is a highly interdisciplinary program aimed at providing a comprehensive understanding of language as a human cognitive capacity, combining theoretical perspectives in linguistics and philosophy with their many different applications in domains such as computational linguistics, psycholinguistics and text analysis. Special attention is devoted to language acquisition in diverse modalities and in different populations. All courses are taught in English, in a lively international environment at the Department of Social, Political and Cognitive Science, a leader in multidisciplinary research. The Master also offers two main specializations through two curricula, “Linguistics and Cognition” and “Philosophy and Cognition,” with flexible options to combine teachings from both. In addition, a Double Degree program is active in collaboration with the Université Paris 1, as well as several Erasmus agreements with excellent departments across Europe.

Category: LM-39 Class (Linguistics)
Duration: 2 years
Credits: 120

Master program brochure

Double Degree: University of Siena/Université Paris 1 Panthéon-Sorbonne" #metaglossia_mundus
"If we want to make more empowered decisions, then we need to care about accounting.

Unless you’re a CPA or a business owner, you might not want to think about accounting. While it’s true that the average person doesn’t necessarily need to be able to read a corporate balance sheet, Professor Ed deHaan says a deeper understanding of accounting — a greater fluency in the “language of business” — can help everyone better understand their finances and make more empowered decisions.

Money Talks: Understanding the Language of Business, with Ed deHaan

As a professor of accounting at Stanford Graduate School of Business, deHaan teaches and studies financial reporting, corporate governance, household finance, and market regulation. In much of his research, he’s seen how many people, from everyday credit card users to finance industry professionals, don’t have an adequate level of financial literacy. In this episode of If/Then: Business, Leadership, Society, he explores why accounting principles are crucial given money’s centrality across personal and professional domains. While financial decisions around budgeting, investing, debt, and more can feel overwhelming, according to deHaan, education is the first step to preparing people for success with money. “It’s wild that we are not teaching all of our middle schoolers and high schoolers how to manage household finance, understanding things like credit and interest and risk,” he says.

When it comes to navigating complex financial products, deHaan cautions that institutions have a systematic advantage over consumers, akin to a casino over gamblers.
“You need to go in recognizing the house always wins on average. Assume you’re playing against the smartest poker players, not your neighbor,” he advises. This doesn’t mean services are inherently deceitful, but a lack of transparency, coupled with human tendencies toward irrationality, often leads to predictable wealth transfers away from individuals. “There’s a huge amount of research on the systematic errors that we make,” he says. “We have an overconfidence in the fairness of the system. You need to comparison shop. You need to be skeptical. Read the fine print. There are sharks out there who are just looking for minnows.”

As new technology and financial products make it easier than ever to trade stocks, invest, and buy now while paying later, deHaan believes financial savvy is needed now more than ever. “People can trade with a swipe of a finger before they even get out of bed in the morning or after three drinks at night,” he says. “The ubiquity of ways to invest or take on debt, and how quickly this is expanding, is a huge cause for concern.”

By proactively fostering financial literacy, deHaan believes we can empower a generation of informed consumers and leaders equipped to harness money as a force for good. To get there, we must learn to speak “the language of business.”

If/Then is a podcast from Stanford Graduate School of Business that examines research findings that can help us navigate the complex issues we face in business, leadership, and society. Each episode features an interview with a Stanford GSB faculty member.

Full Transcript

Note: Transcripts are generated by machine and lightly edited by humans. They may contain errors.

Kevin Cool: If we want to make empowered decisions, then we need to care about accounting.

Grant Means: No matter what somebody does for a career, they earn money.

Kevin Cool: Grant Means is a personal finance coach to people of all ages and different professions.
He is trying to help Kai, an employee of a biotech startup, make sense of her relatively new and somewhat complicated compensation.

Kai: So when I started, they gave me cover money, shares, RSUs. And I’m not entirely certain about all of the exact terms. I just know that was in addition to the base salary and any bonuses when you’re here.

Kevin Cool: And to make matters harder, her employer’s stock price goes up and down depending on the breakthroughs it may or may not make.

Grant Means: There’s all these terms. It almost feels paralyzing. You don’t know exactly what to do. And then all along the way, there’s this volatility.

Kevin Cool: So over a Zoom session, Grant coaches Kai through this struggle by asking about her financial goals.

Kai: So definitely saving up for a house; investing in — er, getting another car; taking care of parents as they age, and also children. So I think just educating myself and knowing how to do that effectively would definitely allow me to feel better about putting my money in places where I know it’s not too risky, where I might lose money.

Grant Means: It’s important to understand your own personal risk tolerance.

Kai: I think one of the issues is that I’m so risk-averse.

Grant Means: Risk-averse?

Kai: Yeah.

Grant Means: Well, it’s interesting because, actually, the majority of your finances are extremely risky [unintelligible]. But at least the majority of your net worth, especially as it vests over time, is in something that … It very easily could go to zero if clinical trials don’t go well next week for whatever reason.

Kevin Cool: Grant tells Kai about a fundamental concept of investing.

Grant Means: There’s this principle called diversification: taking risk and spreading it around. So if you’re able to diversify your finances, then even as some of your dollars are in extremely risky places or relatively risky places, your overall financial risk is actually reduced. It’s kind of crazy.
So right now, you’re one of the least-diversified people, actually, in the world because you’re invested in cash and one stock.

Kevin Cool: Kai still struggles to get her head around the financial decisions she needs to make.

Kai: Being here at my company, we’ve had financial advisers, representatives from the bank, coming in at different times to speak to us. And that was just something we never really did when I was going through grad school — not really thinking about how to organize in terms of planning how to [unintelligible] [some money]. It’s very different. So from not really having that to having it. So many choices.

Grant Means: I think we can take what looks like needing to make a decision every paycheck and needing to make a decision every day and every market swing and turn it into just a few well-thought-out decisions that stand the test of time.

Kevin Cool: Kai says she is encouraged by Grant’s advice as she figures out her life plans; but she also reflects on how she can be so educated, with a PhD, and know so little about finances.

Kai: I’m probably not alone in not really thinking about a lot of these things that you probably should think about. That is just maybe something that, growing up, I hadn’t really considered as much. I’m not saying that what we learned in school wasn’t useful, but I think — you know, even in elementary and high school, I feel like there could have been maybe more of an emphasis on things that were practical, like life skills and managing your finances. And that was just one thing that was quite glaringly not present.

Kevin Cool: Even if Kai doesn’t yet know exactly what to do, she at least has a better understanding thanks to the kind of advice Grant offers. And she’s not alone. At work or at home, knowing about finance and basic accounting can help people make more-empowered decisions. That’s our focus today.
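The diversification principle Grant describes can be sketched numerically. The figures below (a 7 percent average return, 30 percent volatility per stock, a 20-stock basket) are hypothetical, chosen only to illustrate the math:

```python
import random
from statistics import pstdev

random.seed(42)

MU, SIGMA = 0.07, 0.30        # hypothetical mean return and volatility per stock
N_STOCKS, N_YEARS = 20, 20_000

# simulated yearly returns for 20 independent, identically risky stocks
returns = [[random.gauss(MU, SIGMA) for _ in range(N_YEARS)]
           for _ in range(N_STOCKS)]

single_risk = pstdev(returns[0])  # volatility of holding just one stock

# equal-weight basket: each year's return is the average across all stocks
basket = [sum(year) / N_STOCKS for year in zip(*returns)]
basket_risk = pstdev(basket)

# with independent stocks, risk shrinks by roughly sqrt(N_STOCKS)
print(f"one stock: {single_risk:.3f}, basket: {basket_risk:.3f}")
```

The average return is the same either way; only the volatility drops. That is why spreading dollars around can reduce overall financial risk even while each individual holding stays risky.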
This is If/Then, a podcast from Stanford Graduate School of Business, where we examine research findings that can help us navigate the complex issues facing us in business, leadership, and society. I’m Kevin Cool, Senior Editor at the GSB. Today we speak with Ed deHaan, Professor of Accounting. Ed deHaan: I think it’s wild that we are not teaching all of our middle schoolers and high schoolers how to manage household finance, understanding things like credit and interest and risk, so that when they go off to college or they go into the world and start working, they have some understanding of this. Kevin Cool: I want to start by talking about … You have a broad set of work around financial literacy in various settings. And we’re going to get into that, but I want to start with a more basic question, which is: If I’m not an accountant, and I don’t plan to be an accountant, why should I care about accounting? Ed deHaan: One answer is that, hopefully, on a day-to-day basis most of us don’t need to care about accounting. Accounting is what they call the “language of business.” It’s the backbone of communication within organizations and from organizations to outsiders. When it’s working as designed, the only people who need to worry about it are the accountants; the managers who are using the accounting reports — but hopefully they’re just using them as they are, without having to give a lot of deep thought to where they come from; and then the analysts who analyze the reports. So when it works, it facilitates everything in business. Internally what’s called “managerial accounting” is all the information that companies need to produce and use to run that business efficiently. “Financial accounting” is how people outside of the organization use financial information, what information they want, how companies produce it and communicate it. When it’s working well, it’s working well. And when it fails, we see catastrophic problems. Organizational failures. 
We can even see situations like bank failures, which we just observed here locally. Enron, parts of the financial crisis, were due to accounting failures. So every 10 years or so things blow up, and we try to avoid that as much as possible. Kevin Cool: Right. If I am a leader in an organization, what is my responsibility to the employees in terms of their financial literacy and understanding of what it is, how their money is being managed? Ed deHaan: So I think there’s a couple of different ways we can think about this. I’ll start with an example of my own research from a few years ago, where we looked at S&P 500 index funds. We showed, even among these identical index funds, some are charging, say, 2 basis points a year — so they’re practically free; it’s a great deal — and some are charging up to 500 basis points a year for functionally the same thing. Now, 500 basis points here is 5 percent a year, which means that if somebody has invested $100 in this, they’re down to $95 before anything else happens. Now, interestingly: I presented this at a university — I think it was at University of Texas — where somebody in the room was a professor and was on the board that oversaw the retirement choices for University of Texas employees, and was shocked at this disparity in fees. And so this is somebody who works in a business school and who, presumably, has better-than-average knowledge. So I think there is — I don’t know if the word is “responsibility,” but I think there’s a benefit to employers in keeping a careful eye on what’s in these retirement plans, making sure that their employees are getting good options that are going to make them wealthier and happier, which ultimately makes them happier working for the organization. Another completely different way we can think about this is the financial performance of the company itself. 
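Ed's index-fund fee comparison can be verified with a few lines of arithmetic. In this sketch, the 500-basis-point and 2-basis-point fees come from the episode; the 7% gross return and 20-year horizon are illustrative assumptions added here to show how the gap compounds:

```python
def value_after_fees(principal, years, annual_return=0.0, fee_bps=0):
    """Grow a principal for `years`, deducting an annual fee quoted in
    basis points (100 bps = 1%) from each year's balance."""
    fee = fee_bps / 10_000
    value = principal
    for _ in range(years):
        value *= (1 + annual_return) * (1 - fee)
    return value

# The episode's example: a 500 bps (5%) fee turns $100 into $95 in one
# year, before any market movement.
print(round(value_after_fees(100, 1, fee_bps=500), 2))  # 95.0

# Over a 20-year horizon with an assumed 7% gross return, the cheap fund
# ends up worth well over twice as much as the expensive one.
print(round(value_after_fees(100, 20, 0.07, fee_bps=2), 2))
print(round(value_after_fees(100, 20, 0.07, fee_bps=500), 2))
```

The design point is that fees compound just like returns do: a flat 5% annual drag is not a one-time $5 cost but a factor of 0.95 applied every year, which is why the two "functionally identical" index funds diverge so dramatically over time.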
And I think it — you know, maybe at face value you would think, “Yeah, the average employee cares about whether the company they’re working for is doing well or not.” One big thing is: Is the company going to be here in five years? But what our research shows is that employees actually care about the financial performance of the company perhaps far more than the average manager realizes. They probably think, “Oh, I put out these earnings announcements; and the IR teams, the institutional investors, they get involved.” What we find is that employees are remarkably sensitive to the information that’s contained in these earnings announcements, and it has a major impact on their decisions about whether to stay working for a company as well as their decisions about where to go work in the future. Kevin Cool: In those cases, would you say that employees understand those earnings reports? Are they interpreting them accurately? Ed deHaan: My suspicion is no. The average person outside of Wall Street really struggles to understand the financial performance of a company. If I gave you, or gave even any of my students who have taken my class, the earnings announcement, they would maybe be able to get the basics; but it’s really complicated to understand what accounting reports say and what this means for the future of the company. So what we actually find in our research is that the number one way that employees appear to learn is from the media coverage of those earnings announcements. So it’s how the media are talking about it. The average reporter is probably a bit more savvy than the average person — Wall Street Journal reporters are very savvy — but that doesn’t mean that they even get it right all the time. Kevin Cool: So let’s walk through a typical earnings report. How should an employee interpret that? What should they be looking for? 
Ed deHaan: Yeah, so I think advice that goes for almost any industry is you have to have a long-term perspective — not 10 years, because that’s probably longer than the average person works for a company; but certainly not just one quarter, either. I think we get overly fixated on “Did this quarter’s earnings hit the analysts’ forecast or what the Street was expecting?” There’s just a lot of randomness in every quarter, so I wouldn’t focus too much on that. I would be thinking more like, “How long would I like to work for this company? Two, three, four years?” and, “What are the company’s prospects looking like over that longer-term perspective?” I certainly wouldn’t be following the company’s stock price on a daily basis. This is just a recipe for madness. But you can get that longer-term perspective from reading thoughtful media articles: Wall Street Journal-type articles, New York Times articles. I almost certainly would ignore everything on social media. Most of that is designed to get clicks and to attract attention, and long-term predictions are generally not click-worthy. So I think I would be thinking about that. You also want a company that’s relatively consistent with their plan. If each quarter or once a year they’re coming out with some radical shift, what we might call a restructuring — you know, we’re completely changing our vision; we’re shutting down branches — this is probably an indicator that there’s a lack of stability at the senior-leadership levels, which, for many employees, is probably not so desirable if this is a job that you’re hoping to stay and grow within. Kevin Cool: Do you think companies … Are companies aware of the mindset or the effect that these reports can have on employees? Ed deHaan: Yeah, I’m not sure if the average manager is aware of this. I think the easy and the obvious answer is: Company is doing well; employees are happy. Company is not doing well; employees are going to start looking for another job. 
And that is certainly true. But this thing that we call “résumé value” is really important, as well: that employees want their résumé to reflect organizations that they’re proud of, that the outside market values — “outside market” being other employers. When the company is doing well, that increases the résumé value. Kevin Cool: Sure. Ed deHaan: So there is this maybe counterintuitive finding that when the company really has a great quarter, it’s not necessarily that their employees want to stay even more, but some number of employees think, “This is my time to move. This is when I can capitalize on the résumé value of my company.” Kevin Cool: So if we have a situation where people are acting, to some degree, on reports that they don’t understand, how do we remedy that? Ed deHaan: If I was a manager advising managers, I would say, “We think a lot about how to communicate financial information to our investors and maybe to our lenders. Maybe what we should do is think more about how we communicate this financial information to our employees so that they understand it.” So some companies do now have internal conference calls that follow the public conference call with all the analysts. These things are all hands, or at least all available — anybody can log on — and the CFO or the CEO will help the employees interpret what’s happening and will talk to them about the future of the company. Kevin Cool: You’re listening to If/Then, a podcast from Stanford Graduate School of Business. We’ll continue our conversation after the break. [Pause] Kevin Cool: So let’s pivot and talk about household finance. Some of your research has dealt very directly with this. How common is it for people to lose out when they’re working with companies that have a very sophisticated understanding of this and they have a very unsophisticated understanding of it? What’s the danger here? 
And can you offer some examples about mistakes people make, either in investing or how their money is being managed? Ed deHaan: Yeah. I think the best analogy when thinking about the financial sector as a normal person interacting with financial products, whether it’s choosing which stocks to invest in, which credit cards to have, which savings products to use — the best analogy is a casino. When you walk into a casino, you expect to, on average, lose money to the casino. The casino is much more sophisticated than you are. Kevin Cool: The house always wins. Ed deHaan: The house always wins. And even if you’re a poker player playing against other poker players, probably there’s better poker players than you in the room. So this is the attitude, I think, we need to have. Now, it’s fair: The casino provides you a benefit, and they should make some money. And in the same way, in the financial sector, if a company is providing you a credit card or an index fund or a mortgage, they need to make money. That’s how they do it if they’re providing you financial advice. We just don’t want them to have too much of it. Right? You want a balance so that the companies make a profit and people can prosper. So I think we need to go into this recognizing that you are going to pay, and you are going to lose on average, particularly when you’re making a stock trade. If you are making a trade, you are making a bet, and somebody is on the other side of that. So assume that you are playing against the smartest poker players in the world, not your neighbor. So in terms of the mistakes that people make, there is a huge amount of research on the systematic errors that we make — these decision errors, these processing errors. Much of it goes back to Kahneman and Tversky, the sort of seminal psychologists. Kahneman won a Nobel Prize for this. And we see this all the time in financial markets, things like: We are quick to recognize our gains and quick to forget our losses. 
So somebody might say, “Oh, I made a ton of money in the market last year.” And that’s true — maybe they made some really good bets that paid off — but they also lost a lot of money in small bits and pieces. And you put those together: It ends up worse than if they just invested in an S&P 500 index fund. This result has been replicated time after time after time. We have this overconfidence in our ability to invest, and we lose out as a result. I think another systematic mistake that we make is we have an overconfidence in the fairness of the system. If you think: I’m going to go out and get a credit card. Maybe they’re all the same. I’ll look at two or three. I’ll pick the one that seems to have the best website. Something like this, making these arbitrary — not arbitrary but sort of relatively surface-level decisions. This is a mistake. You need to comparison-shop. You need to be skeptical and read the fine print as much as you can because there are sharks out there who are just looking for minnows. Kevin Cool: Are there any interventions? Obviously regulation is an important part of this, but are there any interventions that we should be thinking about to protect people? Ed deHaan: The regulations or the interventions that I would recommend differ depending on what type of product we’re talking about. I think financial education is probably the biggest bang for the buck across all of those. I think it’s wild that we are not teaching all of our middle schoolers and high schoolers how to manage household finance, understanding things like credit and interest and risk, so that when they go off to college or they go into the world and start working, they have some understanding of this. So there’s some great work being done already here at GSB with Annamaria Lusardi on education starting at a very low level, young people, with financial literacy. So I think that’s first and foremost. 
Second, I think that we have some good regulations in place already, but the financial services sector moves more quickly than regulators. So one good example is Buy Now, Pay Later. You might have seen this. It’s popped up everywhere in the last couple of years. Double-digit growth rates year over year. Young people are using this for every transaction. Now, essentially, that’s like a credit card. There’s no real difference here. But the way that the fintech companies have structured this, it falls outside of the usual regulations around credit cards. So you don’t have protections, like very clear disclosures that are designed to help everyday people understand the fees they’re going to have to pay. Basic fraud protections. Communications from the Buy Now, Pay Later providers to credit bureaus, which is a really important part of our system for preventing people from what we call debt-stacking, meaning that you max out one credit card; you get another. You max that out; you get another. You max that out, and before you know it, you’re in a hole you can never dig your way out of. So Buy Now, Pay Later — or BNPL — is outside of that sector right now. New York state actually is going to be the first state that is going to be starting to regulate this, but it’s taken three-four years for anybody to make that first move. Kevin Cool: So I want to dig into this a little bit more and ask you how worried you are. You mentioned the “pay later” situation. I know if you make an Amazon purchase now, often you will get a pop-up that says you can do this in four payments or whatever. And as you say, there are no disclosures about fees or anything like that. Robinhood is an example of a company where you can easily, basically, day-trade. It’s very seamless, kind of frictionless. So there are all of these situations where people can easily spend their money and maybe not have much transparency about what those transactions are like. Ed deHaan: I’m hugely concerned about this. 
I think, going back to my “casino” example, it would be the equivalent of a casino popping up on every street corner, where if you can see above the jackpot machine, you can pull the lever — you know, this level of protection. Certainly there’s nothing in Robinhood that really prevents a 16-year-old from going on and trading. Even if it’s against their terms of service, you can do it. So I think the ubiquity of ways to invest or to take on debt, and how quickly this is expanding, is a huge cause for concern. I think research has shown time and time and time again that people lose money in the stock market. I have a study that’s just coming out now where we try to investigate what we call the protectionary effects of trading hours. So until recently, if you wanted to trade, you had to trade from 9:30 AM to 4:00 PM New York time — which, for a California person, essentially means 6:30 AM until 1:00 PM. That’s a form of market-access protection. It’s a friction to engaging in the stock market for people on the West Coast or each time zone moving west. We do some fancy econometrics there, and what we find, using tax records for the entire U.S. population, is that losing one hour of morning trading because you’re more likely to be asleep than awake — because you’re trading on New York time — actually results in about a three-percentage-point increase in your net capital gains per year. These are meaningful results. We also see that, over the last decade, this protectionary effect has dwindled dramatically. That’s because we’re not just picking up trading gains on the New York Stock Exchange during regular hours. We’re now also picking up cryptocurrencies. We’re picking up round-the-clock trading, which you can do on Robinhood — out-of-hours trading, which has become more accessible. So this protectionary effect has waned over time. I think this is a huge problem. 
What I’m worried about is that people can trade with the swipe of a finger before they even get out of bed in the morning or after three drinks at night, and they’re going to systematically transfer money from their bank accounts to the institutions who are taking the other sides of these trades. Kevin Cool: So let’s talk a little bit more about financial literacy, especially in an educational setting. Some families give their children allowances — $5 a week or $10 a week, however much that is — and part of the rationale, I think, is to give them some very basic experience in handling money. You mentioned earlier the value of savings, delaying the gratification until later. But as they age, what should we be doing to make that understanding a little bit deeper, a little bit more sophisticated? What should we be thinking about? Ed deHaan: There’s a lot of reasons to give a kid an allowance. Sometimes it’s to motivate them to do their chores. Sometimes it’s to give them some autonomy and start letting them feel more adult. Perhaps what we could think about doing is starting with the basic $5 a week, or whatever it is, and then slowly having them graduate up to larger and larger autonomy over their financials. So you could imagine a system where, instead of parents taking their child clothes-shopping and paying for it themselves, you come up with a clothing budget of what the parent would spend on the kid per year or per quarter, and then sort of depositing that $200 a quarter or whatever into their allowance account, and then they can go and buy their own clothes. Getting experience with these sort of micro transactions maybe would allow them, when they get to their college years and beyond, to make more savvy decisions about their financials. This is not within my research, but something I’ve observed across families is just the extent to which parents are transparent with their children about the household finances. 
Many kids grow up in a black box — you know, the parents pay for things, and they have absolutely no understanding of the parents’ financial situation. That might have all sorts of benefits in not transferring stress to young people when they don’t need it. But there also is something to be said for helping kids from a young age understand what income is and what a budget is, and why it is that we don’t go on vacations every week and why we don’t have the holidays four times a year or five times a year — things like this — and just making those conversations part of the dinner-table conversation. I think another part of it, which relates to what we’ve seen in meme stocks and whatnot, is sometimes young people need to feel the pain of their mistakes. So I think following some of the meme-stock and crypto crashes, there was some outcry as to who is going to make these young people whole who have lost their money. And I think the answer is “Nobody,” that when you touch a stove and you get burned, you get burned. We don’t want people to lose their arm, but we want them to feel enough pain that they won’t touch it again. But I also think young people have a remarkable ability to understand these concepts when presented to them. So I don’t think we should underestimate what they’re capable of. Kevin Cool: So hearing you talk about these things, it occurs to me that you look at things through a certain lens, through a particular lens. Now, maybe that is as an accounting expert, or maybe it is a mindset. But is it useful for people to apply that sort of lens in other parts of their lives? Ed deHaan: I suppose the lens I think that I see through is, really, an economic lens. I was an undergraduate business economics major. I did an economics master’s. Much of the first couple of years of an accounting PhD is, essentially, an economics PhD. 
So when I say “economic lens,” what I mean is just a really rational and sober approach to costs and benefits, and thinking carefully not only about the direct costs that you observe in making a transaction or in a decision but, also, the opportunity costs involved with it — all of the indirect things, or what you’re giving up. Now, that sober and rational approach is really difficult for us to maintain when we are talking about buying things that we want. When it comes to … You’re at the cash register just before the holidays. You see that perfect gift, and you think, “Somehow I’ll just squeeze it out. I’ll somehow manage to pay off the credit card. I’m going to buy that.” But if we could apply that rational lens at each step in our lives, I think this is something I would recommend. I can also tell you it can go too far — you probably don’t want to treat your personal relationships in this way. Kevin Cool: [Laughs] Ed deHaan: But at least when it comes to work and to your household finances, trying to step back, being as rational as possible, is beneficial. An example I often give for this is: Pretend you’re not making the transaction, but you’re advising your grandmother about whether she should make the transaction, or your grandfather. What would you recommend to them? And try to follow that advice yourself. Kevin Cool: Well, thanks, Ed. This was really interesting. I appreciate it. [Music plays] Kevin Cool: It might start with knowing how to manage the household budget or understanding how credit cards work, but approaching the world with an accountant’s mindset can help you make all kinds of life decisions, including whether or when to leave your job. If we make sure people start learning about money at a younger age, it should make things easier later in life. As Ed says, we shouldn’t underestimate what young people can understand. Maybe that goes for all of us, whatever age we are. 
If/Then is produced by Jesse Baker and Eric Nuzum of Magnificent Noise for Stanford Graduate School of Business. Our show is produced by Jim Colgan and Julia Natt. Mixing and sound design by Kristin Mueller. From Stanford GSB: Jenny Luna, Sorel Husbands Denholtz, and Elizabeth Wyleczuk-Stern. If you enjoyed this conversation, we’d appreciate you sharing this with others who might be interested and hope you’ll try some of the other episodes in this series. For more on our professors and their research, or to discover more podcasts coming out of Stanford GSB, visit our website at gsb.stanford.edu. Find more on our YouTube channel. You can follow us on social media at Stanford GSB. I’m Kevin Cool.
BY LILY KEMPCZINSKI ON MAY 29, 2024

Sign-language interpreters in the Durham schools demanded better pay during a school board meeting on Thursday. Meanwhile, dozens of Durham Association of Educators members and supporters called on the board to recognize the union and give it a stronger role in the planning process. The comments by staff who serve deaf students came after department members called in sick on Thursday to protest insufficient pay. After the district’s salary debacle, sign-language interpreters “now earn an average of $932 less per month than we were promised in October of 2023,” said interpreter Sarah Leonard, reading from a letter from the staff to the board. “Due to this change, many of us are contemplating leaving the district in order to support ourselves and our families with community positions that pay significantly more. If we are forced to leave, the district will find itself in a precarious position, both financially and legally,” Leonard continued. Other DPS staff also spoke about salary issues at the meeting. Christie Clem, a DAE member and physical therapist, called for more transparency about future pay rates. “The classified pay crisis caused employees to leave and destroyed our trust not only with the board but also with administration,” she said. “Classified employees don’t feel any better now than we did in January.” Interim Superintendent Catty Moore said the district hopes to release individual salary projections for the next school year by the end of this week. Clem was among more than 35 individuals who turned out to advocate for the Durham Association of Educators on Thursday. The advocates, many wearing DAE T-shirts and carrying homemade signs, called on the board to formally recognize the union and to establish a “formal meet and confer policy” by the start of the upcoming school year. 
Representatives from nine other unions — including the Union of Southern Service Workers, the Duke Graduate Students Union, and National Nurses United — spoke out in solidarity with the DAE. DAE President Symone Kiddoo laid out a clear timeline for the board. “If the board is not ready to formalize union recognition this year, we can take the summer to do all that we can to get on the same page about this…If, after that, the board does not pass a standard union recognition policy at or near the start of the school year, we will have to spend the next year at loggerheads as well.” North Carolina is among the few states where public-sector employees are prohibited from engaging in collective bargaining. DAE views meet and confer as an “alternative framework for honoring workers’ rights.” Efforts to establish a meet and confer policy have been ongoing in Durham. On February 15, DAE members met with the school board, leading to the formation of an ad hoc committee to work towards a meet and confer policy. However, the committee’s work has been marked by tension. DAE members walked out of a May 20 ad hoc committee meeting after hearing the board’s proposed policy. The DAE posted on Facebook the following day that “our talks with the district about union recognition have broken down. Despite the fact that we now represent a majority of DPS workers, the board continues to delay the process, divide workers, and discredit our organizing.” Alongside urging the board to establish a formal policy, DAE members expressed anguish over the current state of the district. “We are in triage. We are in the trenches, as you already know. We can’t wait. We are in the middle of a staffing crisis and basically fueled by pay cuts, lack of trust in the district, quality teachers are leaving left and right, it’s like we’re bleeding from the arteries,” said educator Shamia Truitt. 
Also on Thursday, members of the public made another appeal to renovate the current Durham School of the Arts rather than build a replacement school. Moore reiterated that plans to build a new Durham School of the Arts will proceed. Speakers argued that the funds for the new construction, now estimated at more than $240 million, could better be allocated to aid in facility maintenance at other schools. Several also pointed out the historical, cultural, and emotional significance of the current DSA site. “It is vital that a school that calls itself the School of the Arts have a significant and visible presence in the downtown corridor,” said designer and two-time DSA parent Alicia Hylton-Daniel. “DSA’s current location does that, and so much more. The location tells a story of preservation and significant progress as it went from an all-white Durham high school to the cultured, diverse student body makeup it is today.”
Andrzej Duda argued that Silesian is a dialect of Polish and not a language in itself. He also cited national security concerns.

MAY 29, 2024 | CULTURE, LAW, POLITICS, SOCIETY

President Andrzej Duda has vetoed a law that would have made Silesian – which is spoken in the historical area of Silesia in southwest Poland – a recognised regional language. In his justification, Duda argued that Silesian is a dialect of Polish, rather than a language in itself, and also cited national security concerns. The president’s decision, which had been widely expected, was criticised by figures from the ruling coalition, whose parliamentary majority had approved the law in April. In the most recent national census, around 460,000 people in Poland said they use Silesian as their main tongue at home. That is far more than the 87,600 who speak Kashubian, a language native to northern Poland that is currently the country’s only recognised regional language. Such official recognition allows a language to be taught in schools and used in local administration in municipalities where at least 20% of the population declared in the last census that they speak it. However, in the justification for his veto today, Duda argued that, in “the opinions of experts, especially linguists”, Silesian does not meet the criteria defining a language laid out in the 2005 law regulating Poland’s recognised ethnic minorities and regional languages. It is instead an “ethnolect”, said Duda, a term that refers to a variety of a language associated with a certain ethnic group. The president noted that, as such, Silesian is still subject to legal protections and support, as are other dialects of Polish, under separate legislation. Duda also voiced his concern that, if Silesian were recognised as a regional language, it could “result in similar expectations among representatives of other regional groups who want to cultivate their local tongues”. 
Finally, the president also cited national security concerns in relation to the “current social and geopolitical situation…related to the war being waged on the eastern border”. At such a time, there must be “special care to preserve national identity”, including “cultivating the native language”. That latter justification was criticised as “nationalist hysteria” by Monika Rosa, an MP from the ruling coalition and one of the most vocal proponents of recognising Silesian as a regional language, reports the Gazeta Wyborcza daily. Rosa also dismissed Duda’s reference to experts. She said that everyone chooses whichever expert opinions best suit them. The MP pledged that another bill recognising Silesian would be presented to parliament and signed by the new president who will replace Duda when his final term ends next year. The speaker of parliament, Szymon Hołownia, who is a leader of one of the parties in the ruling coalition, also criticised Duda’s decision. “Diversity is Poland’s strength, not a threat to it. I’m sorry you don’t understand that, Mr President,” he wrote on social media. However, the president received praise from Janusz Kowalski, an opposition MP from the right-wing Sovereign Poland (Suwerenna Polska) party. “Respect for President Andrzej Duda, who defends the unitarity of the Republic of Poland,” tweeted Kowalski. “The Silesian language is the Polish language. Silesians are Poles! The German plan to break up the Polish national community has been stopped today.” Notes from Poland is run by a small editorial team and published by an independent, non-profit foundation that is funded through donations from our readers. We cannot do what we do without your support.
"When we're told "This coffee is hot" upon being served a familiar caffeinated beverage at our local diner or cafe, the message is clear. But what about when we're told "This coffee is not hot"? Does that mean we think it's cold? Or room temperature? Or just warm? by New York University A team of scientists has now identified how our brains work to process phrases that include negation (i.e., "not"), revealing that it mitigates rather than inverts meaning—in other words, in our minds, negation merely reduces the temperature of our coffee and does not make it "cold." "We now have a firmer sense of how negation operates as we try to make sense of the phrases we process," explains Arianna Zuanazzi, a postdoctoral fellow in New York University's Department of Psychology at the time of the study and the lead author of the paper, which appears in the journal PLOS Biology. "In identifying that negation serves as a mitigator of adjectives—bad or good, sad or happy, and cold or hot—we also have a better understanding of how the brain functions to interpret subtle changes in meaning." In an array of communications, ranging from advertising to legal filings, negation is often used intentionally to mask a clear understanding of a phrase. In addition, large language models in AI tools have difficulty interpreting passages containing negation. The researchers say that their results show how humans process such phrases while also potentially pointing to ways to understand and improve AI functionality. While the ability of human language to generate novel or complex meanings through the combination of words has long been known, how this process occurs is not well understood. 
To address this, Zuanazzi and her colleagues conducted a series of experiments to measure how participants interpreted phrases and also monitored participants' brain activity during these tasks—in order to precisely gauge related neurological function. In the experiments, participants read—on a computer monitor—adjective phrases with and without negation (e.g., "really not good" and "really really good") and rated their meaning on a scale from 1 ("really really bad") to 10 ("really really good") using a mouse cursor. This scale was designed, in part, to determine if participants interpreted phrases with negation as the opposite of those without negation—in other words, did they interpret "really not good" as "bad"—or, instead, as something more measured? Here, the researchers found that participants took longer to interpret phrases with negation than they did phrases without negation—indicating, not surprisingly given the greater complexity, that negation slows down our processing of meaning. In addition, drawing from how the participants moved their cursors, negated phrases were first interpreted as affirmative (i.e., "not hot" was initially interpreted as closer to "hot" than to "cold"), but later shifted to a mitigated meaning, suggesting that, for instance, "not hot" is not interpreted as either "hot" or "cold," but, rather, as something between "hot" and "cold." The scientists also used magnetoencephalography (MEG) to measure the magnetic fields generated by the electrical activity of participants' brains while they were performing these phrase-interpretation tasks. As with the behavioral experiments, neural representations of polar adjectives such as "cold" and "hot" were made more similar by negation, suggesting that the meaning of "not hot" is interpreted as "less hot" and the meaning of "not cold" as "less cold," becoming less distinguishable. 
In sum, neural data matched what was observed for the mouse movements in the behavioral experiments: negation does not invert the meaning of "hot" to "cold," but rather weakens or mitigates its representation along the semantic continuum between "cold" and "hot." "This research spotlights the complexity that goes into language comprehension, showing that this cognitive process goes above and beyond the sum of the processing of individual word meanings," observes Zuanazzi, now at the Child Mind Institute. More information: Zuanazzi A, Ripollés P, Lin WM, Gwilliams L, King J-R, Poeppel D, Negation mitigates rather than inverts the neural representations of adjectives. PLoS Biology (2024). DOI: 10.1371/journal.pbio.3002622" #metaglossia_mundus
"History of remote simultaneous interpretation and its impact on interpreters, and a description of sound tests conducted by the Translation Bureau. Developed in the mid-20th century, simultaneous interpretation is closely related to the development of sound systems. The invention of the microphone and speakers allowed interpreters to reproduce what one person is saying in another language without interrupting them. The introduction of teleconferencing and video conferencing gave rise to remote simultaneous interpretation, where the interpreter is not in the same place as the person speaking. It’s quite challenging for interpreters, who have to clearly see and hear the person speaking to render their message in another language, using the right words and tone. In addition, converting the sound to allow its transmission by telephone or over the Internet can affect the sound quality. Over the years, requests to interpret teleconferences and video conferences increased in the Government of Canada. Interpreters started reporting headaches and hearing problems, which prompted the Translation Bureau to create a working group to regulate remote simultaneous interpretation in 2015. In April 2019, after several accidents related to the quality of the sound transmitted by phone, the Bureau put an end to teleconference interpretation. In spring 2020, the lockdown due to the pandemic resulted in an explosion of requests for videoconference interpretation. This immediately resulted in an increase in health problems reported by interpreters. The Bureau quickly took action to protect them but interpreters are still feeling the effects today. This is why the Bureau is continuing its efforts to better understand and solve problems related to remote simultaneous interpretation. Analyzing the sound Even though the consequences of being exposed to loud noise are well known, few studies have been conducted on the effects of long-term exposure to sound from videoconferences. 
Over the past few years, the Bureau has called on a variety of sound and hearing specialists from Canada and other countries to obtain data that will help it to choose the best measures to protect interpreters. Tests may involve many factors, as shown in the list of studies obtained by the Bureau. For example, in spring 2023, sound specialists tested the frequency spectrum (sound quality; see Footnote 1) and the level of sound pressure (intensity or volume of sound; see Footnote 2) transmitted to interpreters in parliamentary committee rooms. The frequency spectrum test is relatively simple: a device that measures the frequencies received is plugged in, and used to determine whether these frequencies cover the recommended spectrum, between 125 and 15,000 hertz for simultaneous interpretation. With regard to sound pressure, a mannequin made of silicone designed to represent the human body, specifically the human ear, is used. Equipped with sensors, the mannequin reproduces the sound vibration in the ear and body and allows specialists to determine whether the sound pressure is appropriate. The mannequin’s ear is designed to reproduce the human ear canal and linked to an electronic eardrum. The data collected this way in spring 2023 was sent to hearing specialists so they could determine if the sound transmitted posed a danger to interpreters and make recommendations to reduce the risks. That is one example of the Bureau’s ongoing efforts to improve its understanding of the effects of remote simultaneous interpretation and better protect interpreters.
Footnote 1: Measurement of the sound transmitted in hertz, from the lowest to the highest. If the spectrum is not broad enough, it is harder to understand what is being said.
Footnote 2: The higher the sound pressure, the higher the risk of causing hearing damage." #metaglossia_mundus
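The frequency-spectrum check described above amounts to verifying that the frequencies a channel transmits cover the recommended 125–15,000 hertz band. A minimal illustrative sketch of that comparison (the sample data and function name are hypothetical, not the Bureau's actual procedure):

```python
# Illustrative sketch: check whether measured frequencies (in Hz) span the
# 125-15,000 Hz band recommended for simultaneous interpretation.
# Sample values below are invented for demonstration only.

RECOMMENDED_LOW_HZ = 125
RECOMMENDED_HIGH_HZ = 15_000

def covers_recommended_spectrum(measured_hz,
                                low=RECOMMENDED_LOW_HZ,
                                high=RECOMMENDED_HIGH_HZ):
    """Return True if the measured frequencies span the recommended band."""
    return min(measured_hz) <= low and max(measured_hz) >= high

# A telephone-like channel that rolls off around 8 kHz fails the check,
# while a full-band channel passes it:
telephone_like = [300, 1_000, 3_400, 8_000]
full_band = [100, 500, 4_000, 16_000]

print(covers_recommended_spectrum(telephone_like))  # False
print(covers_recommended_spectrum(full_band))       # True
```

This is only the coverage comparison; a real test would also weigh how evenly energy is distributed across the band, which the article does not detail.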
"Translation, Cross-Cultural Adaptation, and Validation of Measurement Instruments: A Practical Guideline for Novice Researchers Paulo Cruchinho,1 María Dolores López-Franco,2 Manuel Luís Capelas,3 Sofia Almeida,4 Phillippa May Bennett,5– 7 Marcelle Miranda da Silva,1,8 Gisela Teixeira,1 Elisabete Nunes,1 Pedro Lucas,1 Filomena Gaspar1 On Behalf of the Handovers4SafeCare
1Nursing Research, Innovation and Development Center (CIDNUR) of Lisbon, Nursing School of Lisbon, Lisboa, Portugal; 2CTS-464 Nursing and Innovation in Healthcare, University of Jaén, Jaén, Spain; 3Universidade Católica Portuguesa, Faculty of Health Sciences and Nursing, Center for Interdisciplinary Research in Health (CIIS), Lisboa, Portugal; 4Universidade Católica Portuguesa, Faculty of Health Sciences and Nursing, Center for Interdisciplinary Research in Health (CIIS), Porto, Portugal; 5Center for English, Translation, and Anglo-Portuguese Studies (CETAPS), Lisboa, Portugal; 6Faculty of Social Sciences and Humanities of the New University of Lisbon, Lisboa, Portugal; 7Faculty of Arts and Humanities of the University of Coimbra, Department of Languages, Literatures and Cultures, Coimbra, Portugal; 8Federal University of Rio de Janeiro, Anna Nery Nursing School, Rio de Janeiro, Brazil
Correspondence: Paulo Cruchinho, Nursing School of Lisbon, Avenida Prof. Egas Moniz, Lisboa, 1600-190, Portugal, Tel +351 217913400, Email pjcruchinho@esel.pt
Abstract: Cross-cultural validation of self-reported measurement instruments for research is a long and complex process, which involves specific risks of bias that could affect the research process and results. Furthermore, it requires researchers to have a wide range of technical knowledge: about translation, adaptation and pre-testing, their purposes and options; about the different psychometric properties and the evidence required to assess them; and about quantitative data processing and analysis using statistical software. This article aimed to: 1) identify all guidelines and recommendations for translation, cross-cultural adaptation, and validation within the healthcare sciences; 2) describe the methodological approaches established in these guidelines for conducting translation, adaptation, and cross-cultural validation; and 3) provide a practical guideline featuring various methodological options for novice researchers involved in translating, adapting, and validating measurement instruments. Forty-two guidelines on translation, adaptation, or cross-cultural validation of measurement instruments were obtained from “CINAHL with Full Text” (via EBSCO) and “MEDLINE with Full Text”. A content analysis was conducted to identify the similarities and differences in the methodological approaches recommended. Based on these similarities and differences, we propose an eight-step guideline that includes: 1) forward translation; 2) synthesis of translations; 3) back translation; 4) harmonization; 5) pre-testing; 6) field testing; 7) psychometric validation; and 8) analysis of psychometric properties. It is a practical guideline because it provides extensive and comprehensive information on the methodological approaches available to researchers. This is the first methodological literature review carried out in the healthcare sciences regarding the methodological approaches recommended by existing guidelines.
Keywords: cross-cultural comparison, decision-making, psychometric properties, research design, validation studies, health services research" #metaglossia_mundus
""There is no reason for publishing to live in a reserve when artificial intelligence will eventually be used in every sector," Renaud Lefebvre, director general of the French publishers' union (Syndicat national de l'édition, SNE), tells Le Figaro. "The phone started ringing less, and then the two houses I usually work with simply told me that, for lack of funds, they preferred to turn to artificial intelligence solutions," says Capucine, a translator of practical books, self-help titles and celebrity biographies.
"This is the second house in four months to offer me cut-rate contracts, trading my status as an author for that of a service provider," another translator testifies: "I am now asked to do marginal editing on texts that have first been translated by a machine."
"For literary translations, the use of artificial intelligence is not conceivable," tempers Anne Michel, head of the foreign department at Albin Michel, who notes that "for more than six months now, contracts drawn up by Anglo-American publishing houses have very specifically required that translations be done by humans and not by machine."
By contrast, publishers of comics, manga and webtoons are now turning to GeoComix/ComixSuite, a French start-up that has spent eight years developing "a unique artificial intelligence capable of extracting and analysing every element on a comic-book page," so as to automate the translation and rewriting of captions into several languages. A British company, DeepZen, "promises publishers, for its part, to divide the production time of an audiobook by ten, and its design cost by four." Le Figaro notes that Audible "already offers more than 40,000 audiobooks whose voices are generated by AI," and that the world's second-largest publishing group, HarperCollins, has just formalised a partnership with the voice-cloning start-up ElevenLabs to expand its catalogue of foreign-language audiobooks at reduced cost.
"We are at a pivotal moment. It is inevitable that jobs will be decimated in the coming months," Stephan Kalb, a board member of the professional association LESVOIX, tells Le Figaro. Together with the French performers' union (Syndicat Français des Artistes interprètes, SFA), the association launched a petition in January, "For dubbing created by humans for humans," which has gathered more than 120,000 signatures: "We risk being among the first to be replaced, in the very short term, by generative artificial intelligence (GAI) tools capable of translating, cloning and synthesising texts, voices, performances and emotions with astonishing similarity. We are on the front line because processing voice data requires less computing power than images."" #metaglossia_mundus
"In this article, Pascal Médeville shells some prawns (with a few gastronomic tips) and unpacks the Khmer vocabulary of stupidity. Written by Pascal Médeville. Published May 26, 2024, updated May 26, 2024. This morning I was somewhat annoyed while checking a Khmer translation done for a client. The client wanted a bilingual document, with the Khmer translation under each English paragraph. To that end, in his Excel file, he had inserted a row under each English row and added the note "Khmer Translation" in the inserted rows. Our translator, however, had the brilliant idea of adding his translation inside the English cells, below the English, and of putting the line "សំណៅបកប្រែខ្មែរ" (Khmer translation) under the rows bearing the note "Khmer Translation"... While explaining to my proofreader that "Khmer Translation" indicated that the client wanted us to insert the translation in the rows so marked, I could not help telling her that, while I did not doubt our translator possessed a brain, I suspected he had forgotten to use it... In reply to my comments, my adorable assistant said two words: ខួរបង្កង [khuo bâng-kâng], literally "brain of a large freshwater prawn." The English translations I find in online dictionaries for ខួរបង្កង all explain that the word ខួរបង្កង refers to what French cooks call the "corail" (coral), which "is the name given to the green part, turning orange when cooked, found in the head of lobsters and spiny lobsters, and which serves as a binding agent in sauces accompanying fish, crustaceans or shellfish" (definition from the Gastromaniac website). 
The Khmer word ខួរ is a generic term for the brain, marrow and, hence, the coral of crustaceans: ខួរក្បាល means "brain," ខួរឆ្អឹង means "bone marrow," and ខួរបង្កង thus refers to the coral of the "chevrettes" or "demoiselles du Mékong" (other names given to the large freshwater prawns famous in Cambodia). (Note in passing that if you ever eat chevrettes, by no means refrain from sucking the tip of the head once detached from the body, to extract its pith, the coral, whose flavour is unforgettable!) By comparing our dear translator's brain to a "prawn brain," my assistant was suggesting that he was endowed not with a human brain but with that of a crustacean, and that this explained his mistake. In Cambodia, the expression "prawn brain" is a gentle one, commonly used of someone whose mind is not exactly quick. I call the expression gentle because there are others that are far less kind: to describe a stupid person, one generally uses the adjective ល្ងង់ [l'ngung] and, to drive the point home, one can say that a person is as dumb as a buffalo (ល្ង់ងដូចក្របី), as an ox (ល្ងង់ដូចគោ) or as a pig (ល្ងង់ដូចជ្រូក). If I may offer you one piece of advice, it is to refrain from using these metaphors, which are considered major insults." #metaglossia_mundus
May 26, 2024 "IQNA - Zafarul Islam Khan, an Indian researcher and thinker, has recently completed a modern English translation of the concepts of the Holy Quran, one that meets the needs of today's Muslims and helps non-Muslims understand Islam. In an interview with Al Jazeera, the 75-year-old scholar said: "This project was undertaken to correct and revise Abdullah Yusuf Ali's translation from the 1930s, which contains many errors and incomprehensible equivalents. For this translation, the most reliable commentaries, biographies of the Prophet (as) and Arabic dictionaries were used. We have tried to build a cultural and religious bridge between Muslims and non-Muslims, to provide an accurate and balanced understanding of Islam, and a translation consistent with authentic Islamic beliefs. We will try to publish a second edition, as most of those requesting translations are non-Muslims who wish to learn more about Islam. The many annotations take into account the questions the average reader, Muslim or non-Muslim, may ask while reading the Quran. The absence of this approach in many translations has led to problems and doubts, exploited by the enemies of Islam. Moving away from the word of God is one of the main reasons for the moral decline of some Islamic societies, in particular Indian Muslims. God speaks to each of us personally, through the Quran. It is a pity to draw God's message not from His book but from those who have no real knowledge in this field. Some even forbid reading the translation of the Quran and claim that if people do so, they will go astray. 
By insisting on delivering sermons in Arabic, which the vast majority of our people do not understand, we have squandered a great weekly opportunity for education and training at the Friday sermon. The weekly Friday sermon is an occasion to advise and educate the general public, but by insisting on Arabic sermons we have missed an excellent opportunity to speak with people, every week, about the issues that concern society. If we truly care about ourselves, about future generations and about the well-being of the nation and the country, we must reflect and draw up a serious reform plan." The translation was published by the Delhi publisher Faros Media in 1,234 large pages and includes the Arabic text alongside its English translation. Another edition, without the Arabic text, was published in 815 pages at a price of 795 rupees and will be available at: TheGloriousQuran.net. Zafarul Islam Khan is one of India's best-known Muslim intellectuals. Born in March 1948 in Badaria, he is the son of Maulana Wahiduddin Khan, a Muslim thinker who headed the Islamic centre "Al-Rasalah" in New Delhi. Zafarul Islam Khan studied at Al-Azhar and Cairo University from 1966 to 1973. In 1987 he received his doctorate in Islamic studies from the University of Manchester. He worked as a translator and editor at the Libyan Ministry of Foreign Affairs in the 1970s. In the 1980s he worked with the London-based "Muslim Institute," running its "MuslimMedia" news service and its other publications. He is the author and translator of more than 50 books in Arabic, English and Urdu, including "Hijrah in Islam" (Delhi, 1996) and "Palestine Documents" (New Delhi, 1998). 
He has published eight articles in the Encyclopaedia of Islam (Leiden) on Indo-Islamic matters, and is an analyst of Islamic and South Asian affairs on radio and television channels, including Al Jazeera and BBC Arabic, and in newspapers, on international and Islamic issues, in particular the question of Palestine. In 2000 he launched the "Milli Gazette," an English-language fortnightly. In December 2007 he was elected for a two-year term (2008-2009) as president of the "All India Muslim Majlis" (AIMMM), which brings together all the Islamic organisations in India. He was also elected president of the AIMMM for two further terms. In July 2017 he was appointed for a three-year term as chairman of the Delhi Minorities Commission, responsible for defending minority rights. As chairman, Khan set up a committee of inquiry to report and make recommendations to the Delhi government on the 2020 Delhi riots. This Muslim thinker is the author and translator of more than 40 books in Arabic, English and Urdu, published in Kuwait, Cairo, Beirut, London and Delhi since 1968." #metaglossia_mundus
"Translation students, the first victims of the ChatGPT era. Since November 2022, the translation sector has been worried: will artificial intelligence upend the profession, or even replace its practitioners? Libération met several of these students, who began their studies before ChatGPT even existed. By Enora Foricher, published May 26, 2024 at 11:55 a.m. A few weeks ago, a translation student who wished to remain anonymous had a job interview at a financial consultancy in Paris. As she was presenting her skills, acquired over five years of study including a selective entrance exam, the recruiter told her: "In any case, you translators are going to be replaced. Before ChatGPT we needed two positions; now one is enough." Thrown by such talk, the young woman remembers her discomfort lasting until the end of the interview. Offered the job, she chose to decline. "Even though the salary was attractive, more than 2,000 euros net per month, I cannot accept working for someone who disparages my profession and sees no use in it." Whether it is sincere worry from those around them or snide remarks tossed off in passing, all the translation students Libération met say they have been confronted with the same fear. Will their profession disappear? "Impossible," they reply. "At least not right away," one of them qualifies. "And not for every task," adds another. "It makes us think" Since ChatGPT reached the mass market in November 2022, two words, artificial intelligence, or even two letters, AI, have been on everyone's lips. And for good reason: the development of artificial intelligence has accelerated sharply over the past two years..." #metaglossia_mundus
"For more than 20 years, Lhousseine Eshimi has been translating in the courts the words of Arabic-speaking defendants. A destiny for which nothing had prepared this English teacher born in Morocco. Justice: "The translator-interpreter is a bridge between two shores." For more than twenty years, Lhousseine Eshimi has worked as a translator-interpreter for the Agen court of appeal. Published May 25, 2024 at 6:01 p.m., updated May 27, 2024 at 11:58 a.m. When he steps up to the bar, he adjusts the lectern microphone with a practised air. He waits, hands crossed and eyes alert, for the moment to speak. Here he stands alone, in the middle of the courtroom, surrounded on all sides by black robes. A daily life of urgency. This man is Lhousseine Eshimi, 60. Yet he is neither judge nor lawyer, neither defendant, witness nor victim. No, his job is to translate the words of defendants who speak no language other than Arabic. "Translator-interpreter," he first corrects us. "Translation work always goes hand in hand with interpreting. The work I am called on to do requires me to interpret, explain and spell out certain things, at the cultural level for example." Perilous work which, if not carried out properly, can lead to unfortunate misunderstandings: "Once, in an investigating judge's chambers, a suspect said: 'I will not forgive the people who accused me. Because of them, I am in this situation.' The investigating judge and the lawyers understood this as a threat. I had to step in to explain that this person meant: 'I do not forgive these people before God. That is, I leave justice to God.'" Work he puts at the service of the Agen court of appeal. 
Hence his presence, this Friday, at the Cahors court, which falls under the Agen jurisdiction. Although we had been able to make an appointment a month in advance, we did not know he would be on our turf two days earlier. The reason: the vagaries of a job that requires being available at short notice, depending on the cases. From Wednesday, Lhousseine Eshimi went to the Cahors police station and court to assist investigators following the arrest of nine people in connection with a cocaine ring operating in the town. Twenty years of translation and interpreting. "It finished late every day," sighs this devotee of justice, who must attend interviews, court hearings, confrontations, phone taps and so on. Yet nothing destined this Moroccan, now a naturalised French citizen, to work in these rooms where the fate of men and women is decided. Having earned a DEUG in English at the faculty of letters and sciences of Kénitra (Morocco) in 1987, he moved to France five years later with his wife. There he did try to continue working as a teacher, but the postings hardly appealed to him. He finally decided, in 1997, to open a grocery shop in Agen, which also housed a bakery and a butcher's. It eventually closed in 2013. In the meantime, in 2004, he had opened a travel agency that is still active today. The year before, he had gone back to school for a bachelor's degree in Arabic. In short, Lhousseine has several lives. The one that leads him to court began only in the early 2000s. "An investigating judge in Agen had heard about me, that I had been a teacher, and asked for my services as a translator. I have no idea how she got my phone number. I had never worked for the police or the gendarmerie," Lhousseine still marvels. For his baptism of fire, the former teacher discovered the gravity and the horrors of the assize court: a rape case. 
Satisfied, the judge would call him back several times. What began as a side activity has become a full-time job, now clocking up almost 20 years. A job he is passionate about, as shown by the letter he wrote to the Agen prosecutor general this year asking for his sworn accreditation to be renewed (it must be renewed every five years). A formality Lhousseine turned into a two-sided page praising "the role of the translator-interpreter, [which] is like that of a bridge linking two shores."" #metaglossia_mundus
By Julien DONMEZ, May 25, 2024. "Japan: discover these innovative windows offering instant translations for tourists. In Japan, a company is developing an innovative technology: transparent windows capable of instantly translating conversations between Japanese people and foreigners, fostering interaction without a language barrier. Japan's technological landscape continues to transform through innovations designed to improve daily life. Among them, a Japanese company is taking a new step by easing interactions between tourists and locals. The initiative aims to reduce language challenges thanks to a special window capable of instantly translating what is said. While most people turn to apps such as DeepL or Google Translate to overcome language barriers, this Japanese approach stands out for offering clear communication without resorting to a smartphone. The invention is part of a broader trend towards making the tourist experience smoother and more pleasant. Windows that translate instantly. Japan is seeing a spectacular rise in tourist numbers, which have doubled in a decade. In this context, language remains a significant barrier, all the more so as proficiency in English among the Japanese population is relatively low. To meet this challenge, the company Toppan has developed a revolutionary transparent window. The technology lets Japanese people and foreigners understand one another in real time by displaying the translation of what is said in white bubbles, reminiscent of comic-strip dialogue. The whole device is designed to be intuitive and easily accessible. 
A tool set to flourish. Toppan's window measures 40 centimetres high by 60 centimetres wide. One of its strengths is that it was trained directly in Japanese to translate a dozen languages, including English, French, Chinese and Korean. Unlike other translation tools, which often go through English as an intermediate language, this window reduces the risk of errors by translating directly from Japanese. The technology has already been installed in several Tokyo stations, notably in front of ticket counters, easing communication between railway staff and foreign travellers. Its effectiveness and usefulness have led to growing demand across the country. Growing adoption. The spread of these translation windows is not limited to stations. They can also be found at other major tourist sites and in certain shops with heavy tourist traffic. The concept appeals through its ability to make exchanges more natural and spontaneous, without requiring a mobile device. User feedback is particularly positive. Tourists appreciate being able to converse more easily, and Japanese users gain a tool that simplifies their daily life without forcing them to master a foreign language. Societal impact and future potential. The success of the device rests on several factors. First, it answers a real need in a country whose tourism is booming. It also fits into a broader drive to democratise digital tools for everyday use. Moreover, the technology could find applications in other contexts, such as medical services, where communication between foreign patients and care staff can be crucial. 
Les développements technologiques, comme cette vitre de traduction, pourraient jeter les bases d’une interaction plus harmonieuse partout où la barrière linguistique demeure un obstacle. En somme, au Japon, cette innovation pourrait bien préfigurer d’autres outils similaires adaptés à des environnements variés. À long terme, la question se pose : cette innovation pourrait-elle influencer d’autres pays à intégrer des solutions de traduction instantanée dans les espaces publics et les services essentiels ?" #metaglossia_mundus
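The Toppan window described above reportedly translates directly from Japanese rather than pivoting through English. A deliberately tiny sketch can illustrate why skipping the pivot limits errors. Everything here (the word tables, the ambiguous word choice) is invented for illustration only; real systems use full translation models, not dictionaries.

```python
# Toy illustration of pivot vs. direct translation. "hashi" is ambiguous
# in Japanese: it can mean "bridge" or "chopsticks". These tables are
# invented stand-ins, not real translation data.
ja_to_en = {"hashi": "bridge"}      # pivot table commits to one sense
en_to_fr = {"bridge": "pont"}
ja_to_fr = {"hashi": "baguettes"}   # direct table kept the intended sense

def pivot_translate(word: str) -> str:
    """Japanese -> French via an English intermediate step."""
    return en_to_fr[ja_to_en[word]]

def direct_translate(word: str) -> str:
    """Japanese -> French in a single step, with no English pivot."""
    return ja_to_fr[word]

print(pivot_translate("hashi"))   # -> "pont" (the wrong sense survives the pivot)
print(direct_translate("hashi"))  # -> "baguettes"
```

Once the pivot step picks the wrong sense of an ambiguous word, the second hop has no way to recover it; a direct model sees the source language itself and can keep the intended meaning.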
"Three-quarters of UN member states have recognized the State of Palestine, proclaimed by the Palestinian leadership in exile more than 35 years ago, as Spain, Ireland and Norway did in turn on Tuesday. But most Western European and North American countries have not. The nearly eight-month war between Israel and Hamas in the Gaza Strip, triggered by the Palestinian Islamist movement's October 7 attack on Israeli territory, has revived calls for recognition of a Palestinian state. According to the list provided by the Palestinian Authority and the latest announcements from governments around the world, 146 of the 193 UN member states have now declared their recognition of the State of Palestine. Shortly before Spain, Ireland and Norway, four Caribbean countries (Jamaica, Trinidad and Tobago, Barbados and the Bahamas) had joined the list, from which most Western European and North American countries, Australia, Japan and South Korea are absent. As is Switzerland. In mid-April, the United States used its veto in the UN Security Council to block a resolution aimed at making Palestine a full member state of the international organization.
1988, the first recognitions: On November 15, 1988, a few months after the start of the first Intifada (the Palestinian uprising against Israeli occupation), Palestine Liberation Organization (PLO) leader Yasser Arafat proclaimed "the establishment of the State of Palestine," with Jerusalem as its capital, from the rostrum of the Palestinian National Council (PNC), the Palestinian parliament in exile, in Algiers. A few minutes later, Algeria officially recognized the new state. A week later, 40 countries, including China, India, Turkey and most Arab countries, took the same step. Almost all the countries of Africa and the Soviet bloc followed. Mainly in 2010 and 2011, most Central and South American countries followed suit, marking their distance on the international stage from the United States, Israel's great ally. 2012, a foothold at the UN: Under the presidency of Mahmoud Abbas, successor to Yasser Arafat, who died in 2004, the Palestinian Authority established by the Oslo Accords (1993) on Palestinian autonomy launched a diplomatic offensive in international institutions. In a historic vote in November 2012, the State of Palestine obtained observer-state status at the United Nations. Short of full membership with voting rights, this gave it access to UN agencies and international treaties. Building on that status, the Palestinians joined the International Criminal Court (ICC) in 2015, which allowed the opening of investigations into Israeli military operations in the occupied Palestinian territories. The United States and Israel denounced the decision. UNESCO (the United Nations Educational, Scientific and Cultural Organization) had paved the way by admitting the State of Palestine as a full member in October 2011.
Israel and the United States would leave the organization in 2018; the latter returned in 2023. 2014, Sweden a pioneer in the EU: In 2014 Sweden became the first EU country to recognize the State of Palestine, the Czech Republic, Hungary, Poland, Bulgaria, Romania and Cyprus having done so before joining the European Union. Stockholm's decision, taken at a time when efforts to resolve the Israeli-Palestinian conflict were at a complete impasse, led to years of stormy relations with Israel. 2024, new European momentum: In a joint move, Spain and Ireland, both EU members, together with Norway, have thus formally followed Sweden's lead, whereas formal recognition of a Palestinian state had long been seen by Western countries as something that should come out of a peace process with Israel. On March 22, the Maltese and Slovenian heads of government had joined Spanish Prime Minister Pedro Sanchez and his Irish counterpart in declaring, in a joint statement, that they were "ready to recognize Palestine" if "the circumstances are right." On May 9, the Slovenian government launched this recognition process, on which its parliament must vote by June 13. French President Emmanuel Macron, for his part, crossed a threshold in February, saying that "the recognition of a Palestinian state is not a taboo for France." But Paris repeats that such a unilateral decision must be taken at "the right moment" and be "useful in an overall strategy for a political solution." Australia also raised the possibility of such recognition in April. Switzerland, for its part, said on May 22 that in its view the conditions are "not currently met" to recognize a Palestinian state." #metaglossia_mundus
"Bev Buchanan says she's one of the few people left in Nova Scotia who are native signers of MSL. After writing her dissertation on the preservation of the language, she's looking to change that. As a Deaf child of Deaf parents in Halifax in the 1960s, Buchanan learned Maritime Sign Language (MSL) at home. MSL is a descendant of British Sign Language, which was used in the Atlantic region during the 1800s. But as American Sign Language became the dominant sign language — the one taught in schools for the Deaf community across the continent — MSL was slowly lost. Now, Buchanan is working to preserve and revitalize the endangered language before it disappears. "Sign languages are some of the fastest-disappearing languages," Buchanan told CBC News through an interpreter. "If we don't preserve them and document them, they will disappear faster." ASL-English interpreter Brenna D'Arcy facilitated the interview with Buchanan. 'Preserving an identity' In 2021, Buchanan earned her doctorate in education at Lamar University in Texas, writing her dissertation on the preservation of Maritime Sign Language. After decades of living in the United States, she's back home in Nova Scotia, where she was hired as the program manager for American Sign Language and Interpretation studies at the Nova Scotia Community College in Dartmouth. Buchanan is the first Deaf program manager at the school, and has her sights set on developing an MSL curriculum. She said the students are "very curious" about MSL, and many faculty members either learned some MSL growing up or moved from out of province and are committed to learning it. "The work that Bev is doing ... 
is not just preserving a language, but preserving an identity, a culture," said Justin Read, who learned MSL at home as a child and is now an instructor of the ASL/English interpretation program at the college. Read, as well as Buchanan, makes the distinction between lowercase deaf, used in discussions around medical deafness, and uppercase Deaf when talking about community, culture and personal identity. Read said his parents were Deaf and Deaf-blind, and attended the Halifax School for the Deaf and the Amherst School for the Deaf. "[It's] just something that needs to be there for future generations to look back at and see where and how sign language has evolved and changed over time," he said. What is MSL? Derived from British Sign Language, MSL uses two-handed spelling, unlike ASL's one-handed alphabet, and there are some different signs for different words and phrases. WATCH | Here are some differences between MSL and ASL: https://www.cbc.ca/player/play/video/9.4234960 Beverly Buchanan demonstrates some examples of the differences between Maritime Sign Language and American Sign Language. MSL was taught at the Halifax School for the Deaf until it closed in 1960. After that, students attended the Amherst School for the Deaf, and eventually the school got new teachers who brought American Sign Language with them. "You can pinpoint that shift right there with the change in schools," Buchanan said. MSL is still used by older people in the region, but it hasn't been passed down to younger generations. "When we see MSL out there, it really connects us with our childhood and growing up, and allows us to just have that connection to the Deaf community," Read said. Buchanan said she's one of fewer than 100 native MSL signers left in Nova Scotia. Preservation efforts People in the Maritime Deaf community have been working to preserve MSL for years, documenting the language on video. 
As part of her dissertation work, Buchanan accessed those videos through the Nova Scotia Cultural Society of the Deaf. In analyzing more than 20 videos, she found more than 3,000 examples of signs, including 900 that were distinct to MSL. She catalogued all of them and created an online glossary. Those videos and glossary would be the jumping-off point for a semester-long MSL curriculum she's hoping to develop. "It's absolutely incredible that [Bev] took this daunting task on," Read said. He said interpreters in the region have an "ethical duty" to incorporate MSL into their work. "It's a great resource not only for interpreters, but for young deaf individuals out there who want to learn more about the history of language, more about the history of Deaf culture in the Maritimes."" #metaglossia_mundus
"Cochrane Japan: Building expertise and bridging language gaps Cochrane's strength lies in its collaborative, global community. Cochrane Geographic Groups represent Cochrane in their host country, advocate for the use of Cochrane evidence in health policy and practice, and support Cochrane's members and supporters who live there. Here we spotlight the impact of Cochrane Japan, who are dedicated to enhancing Japanese healthcare through evidence-based decision-making. Various Cochrane activities had taken place in Japan since 2012, but it wasn't until 2014 that a branch was established in Tokyo, with an official Cochrane Japan following in 2017. Today, with over 200 members, Cochrane Japan is committed to producing accurate, up-to-date Cochrane reviews and providing support and training to new authors. Cochrane Japan holds more than four training workshops annually for new authors of Cochrane systematic reviews, healthcare professionals, and researchers, covering interventions and diagnostic accuracy tests. "Cochrane Japan workshops bring globally recognized methodology expertise to an accessible platform for future Cochrane authors. These sessions are crucial for fostering evidence-based practice in Japan," says Norio Watanabe, Director of Cochrane Japan. Cochrane Japan collaborates with Cochrane geographic groups in East Asia through the East Asia Cochrane Alliance (EACA), hosting meetings and training workshops. These collaborations enhance the impact of their work by fostering regional cooperation and knowledge sharing. 
The group's translation efforts significantly impact healthcare practices and policies in Japan, ensuring that medical professionals and the general population have access to high-quality, evidence-based information. Cochrane Japan translates and publishes more than 200 plain language summaries of Cochrane reviews in Japanese each year. There are more than 2900 translations of Cochrane evidence freely available for anyone to search and read. "Distributing plain language summaries in Japanese helps bridge the gap between complex research and everyday healthcare decisions. It's important that people have the latest health evidence in the language they can most easily understand," says Watanabe. Looking ahead, Cochrane Japan plans to offer free access to RevMan Web for all its members, encouraging more systematic reviews under Cochrane methodology. The group also seeks enthusiastic volunteers for translating plain language summaries and prospective authors for Cochrane reviews. "We welcome anyone in Japan who is passionate about evidence-based healthcare to join us in our mission," Watanabe adds. Wednesday, May 29, 2024" #metaglossia_mundus
"Translating Korean literature into English allows English-speaking audiences to access diverse notable works. By Honorary Reporter Foteini Chatzoudi from Greece Hwang Sok-yong's novel "Mater 2-10" is longlisted for this year's International Booker Prize, as announced on April 9. The award recognizes the best in translated fiction, and the English-language edition was translated by the team of Sora Kim-Russell and Youngjae Josephine Bae.
"Mater 2-10" is about three generations of a family of railroad workers and a laid-off factory worker organizing a high-altitude sit-in demonstration. It depicts the lives of working-class Koreans from the Japanese colonial era through national liberation and into the 21st century. Kim-Russell, a Korean American based in Seoul, is an acclaimed literary translator. Her translation of Pyun Hye-young's "The Hole" won the 2017 Shirley Jackson Award, and that of "At Dusk" by Hwang was longlisted for the 2018 Man Booker International Prize. She primarily translates works by Hwang and Pyun but has also covered those by novelists Kim Bo-young, Jeon Sungtae and Shin Kyung-sook. Bae, another freelance translator residing in Seoul, received the 2019 Literature Translation Institute of Korea Award for Aspiring Translators and the 2021 Korea Times Modern Korean Literature Translation Award.
The following are excerpts from an email interview with both translators from May 14-19.
Why did you decide to pursue a career in translation? Kim-Russell: I started as an aspiring writer and became fascinated with literary translation as a genre of writing. That, coupled with rent and the general need to make a living, inspired me to pursue a career in it. Bae: After spending my childhood in the U.S., I developed a passion for English. My first college job was translating news articles. Post-graduation, I worked as an in-house translator before exploring other career paths, eventually returning to translation.
What are the challenges in translating Korean-language books into English? Kim-Russell: The biggest one for me is always how to recreate the writer's voice in English, especially considering that most writers are not writing with translation in mind. The specificity of the audience also brings the challenges of puns, jokes and other forms of language play that only Korean readers would recognize or appreciate. Translating colloquial Korean into colloquial English without implicitly changing the setting is also challenging. Hwang Sok-yong's book "Mater 2-10" was translated into English by Sora Kim-Russell and Youngjae Josephine Bae. (Booker Prizes) How did you two get assigned to translate Hwang Sok-yong's "Mater 2-10?" Kim-Russell: I convinced Youngjae that she was ready for a solo project of this magnitude, but she shot me down. Then we talked about working on it together. If not for her, I might still be on maternity leave. Bae: When Sora, expecting her second child, needed help translating "Mater 2-10," she approached me. Since I had no other commitments beyond my current non-fiction project, I agreed. Sora had already translated the first two chapters as a sample. I began from the third chapter and translated my part until she returned from maternity leave to complete the rest.
What impact did working on "Mater 2-10" have on you? Kim-Russell: When I read the book, I was struck by its scope and significance, as well as Hwang's achievement in storytelling and folkloric prose. His endeavor to create a uniquely Korean narrative style has always impressed me. Translating this work into English challenged me to reconsider my approach and the essence of translation. How did you preserve Hwang's voice and style given the linguistic differences? Kim-Russell: When considering linguistic differences broadly, I remind myself not to fixate on the potential readership. While others have explored this topic extensively, it's crucial to recognize that a book's audience is diverse and imaginative beyond expectation. Some readers will have no familiarity with the world of the book, others will feel right at home in it and others will understand it from a different but adjacent perspective. Letting go of worries or assumptions about imagined readers helps free you to take risks and find more ways to bring the author's voice and style to life in English. Bae: For "Mater 2-10," dealing with titles for individuals was tricky because a person can be called by a number of titles, in addition to their name in Korean, depending on who you're speaking with. For instance, my name is Youngjae, but my younger sister calls me eonni (sister) and her husband calls me cheohyeong (sister-in-law) based on their respective relations with me. Such titles came up quite often in "Mater 2-10," so at one point, we had to decide how much we could preserve without compromising the translation's readability.
What aspects of translation do you enjoy most and why? Bae: Translating is a chance to explore the English language and solve a puzzle by hunting for clues in the original text, dictionaries and a variety of other sources. It can be agonizing at times but rewarding when you find a combination of words that seem to work.
Kim-Russell: I often struggle with structure because the content and structure are provided for you, and your job is to play with words and focus on voice, intent and delivery. It's like solving a puzzle. Before starting the translation work, I'll often do a crossword or other word puzzle first to warm up and remind myself to have fun with it.
msjeon22@korea.kr
*This article is written by a Korea.net Honorary Reporter. Our group of Honorary Reporters are from all around the world, and they share with Korea.net their love and passion for all things Korean." #metaglossia_mundus
"You’ve read the book, you’ve seen the movie and you’ve noticed differences. Join this adaptation-focused book and film group to dig a little deeper and find out what might’ve influenced the changes in the story from book to screen. Next date: Saturday, 15 June 2024 | 01:30 PM to 03:00 PM You’ve read the book, you’ve seen the movie and you’ve noticed differences. Get involved in this adaptation-focused book and film discussion to dig a little deeper, consider the transition from page to screen and find out what might have influenced the changes in the story between forms. In Pride Month, we will explore Taichi Yamada's 1987 ghost story Strangers alongside its latest adaptation, Andrew Haigh's All of Us Strangers (2023). Yamada’s book is available for loan in print as All of Us Strangers and eBook as Strangers. Book online or by speaking with library staff." #metaglossia_mundus
"From hospitals to 911 calls, Regina's emergency services are having to adjust to the city's changing demographics. Dr. Randy Radford is an emergency room physician in Regina. Radford said he's seen a growing number of patients who do not speak English and need translation. He will now use some form of translation for an average of two to three patients a day. "I've been at the ER here for 15 years and the population has totally changed from Regina," Radford said. "Before you just had a few ethnic populations. Now, every night we're getting people from any parts of the world from Bhutan, Tibet. It's amazing." Radford said most patients who can't speak English arrive at the hospital with a friend or family member who can. When that is not the case, he relies heavily on phone translator apps. With children, he sometimes uses a picture board that shows a human body. Patients can point to where they feel pain. Though not everything, Radford said, needs interpretation.
"I think that attitude of caring, really you don't need to translate it," he said. "It becomes very evident just in our body language." When an exact translation is critical, the hospital uses a service of phone interpreters called CanTalk. Based in Winnipeg, CanTalk has interpreters who cover more than 110 languages. Hospital staff show a book of flags to the patient. The patient can point to the flag of their home country, and the language they speak can then be determined. The interpreters will stay on the line for as long as needed, providing translation between the patient and physician. A "godsend" for victims of domestic abuse Jen Renwick works with domestic abuse cases at Family Service Regina. Like Radford, she has also seen an increase in clients who cannot speak English. With funding from the Ministry of Justice, Family Service Regina also started using CanTalk in 2012, and Renwick calls it a "godsend." Before CanTalk, the service relied on translators from within the city, but this came with problems, such as the possibility of the information getting back to the victim's partner. "With Regina being such a small community, some of the clients that we have, we can't use the people in the small communities ... because there's no confidentiality," Renwick said. "They won't be safe if we were to use a translator from their own community. So that was definitely a barrier before that CanTalk allows us to overcome." CanTalk has a network of 1,400 language specialists employed or contracted from across Canada. Before using this service, a conflict of interest was sometimes an issue with local translators. "We've been in situations where we found out later that the translation was incorrect because the person who was translating was sort of 'covering', if you will, for the other person," Renwick said. Other emergency services offered for non-English speakers Regina Police have several officers who can speak a second language. 
Between them, they cover a wide range of languages, including French and Arabic. The Regina Police will either use one of these officers, or use translators from the Regina Open Door Society. The Regina Fire Department uses the same service as Sask911. When Sask911 receives a call from someone who doesn't speak English, it connects with a call centre called LanguageLine. An interpreter will stay on the line while the call is forwarded to the fire department." #metaglossia_mundus
"AI Translation Startup Raises $300M, Valuation Grows to $2B DeepL plans global expansion to help businesses solve complex linguistic challenges Ben Wodecki, Jr. Editor May 29, 2024 DeepL has raised $300 million in a venture funding round to help businesses use AI to translate content at scale. Founded in 2017, DeepL is a German startup developing AI-powered translation tools. Businesses can use its Language AI platform to translate communications including marketing materials quickly. Index Ventures led the round, joined by ICONIQ Growth and Teachers’ Venture Growth. Existing investors IVP, WiL and Atomico also participated. The latest funding raises DeepL’s valuation to $2 billion. The startup plans to use the funds to invest in research and product innovation. DeepL also intends to expand globally and to employ more staff across areas including research, engineering and product. “We’re approaching an inflection point in the AI boom where businesses who are racing to adopt the technology begin to discern between hype versus solutions that are secure and actually solve real problems in their business,” said Jarek Kutylowski, DeepL’s founder and CEO. “This new investment comes during what is on track to be DeepL’s most transformative year yet and is a testament to the crucial role that our Language AI platform has in solving the complex linguistic challenges global companies face today.” DeepL has developed specialized AI models designed for translation tasks. The models power its enterprise-focused Language AI platform and can translate content so users can tailor materials for a specific market. The company says its customer network has expanded to over 100,000 businesses across various industries, including health care, retail and manufacturing. Zendesk, Deutsche Bahn and Coursera are among its customers. 
“At Zendesk we see first-hand the power of infusing AI tools into customer experience and DeepL’s industry-leading translation is a prime example,” said Adrian McDermott, Zendesk’s chief technology officer. “The ability to have accurate AI translation allows companies from startups to large enterprises the ability to scale globally, reaching prospects and existing customers in new ways.” The company previously demonstrated its translation tools in a humanoid robot, Ameca, enabling it to speak multiple languages. Combining DeepL technology with OpenAI’s GPT-3 language model, engineers taught the robot to speak Japanese, German, Chinese and French. “We’re highly focused on continued growth and innovation to expand our solutions and ensure they remain industry-leading in terms of quality, precision and security,” said Kutylowski. “This will bring us closer to a future where every company, regardless of location, can operate seamlessly on a global scale with our AI.”" #metaglossia_mundus
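To make the "translate content at scale" idea concrete, here is a minimal batching sketch. The client interface assumed here (a `translate_text` method returning an object with a `.text` attribute) mirrors the shape of DeepL's official Python package, but this is an illustration, not DeepL's documented integration pattern; any client object with that shape will work.

```python
# Minimal sketch: pushing a batch of strings through an AI translation
# client. The client interface (translate_text(...) returning an object
# with a .text attribute) is assumed from the shape of DeepL's official
# Python client.

def batch_translate(client, texts, target_lang):
    """Translate each string in `texts` into `target_lang`."""
    return [client.translate_text(t, target_lang=target_lang).text
            for t in texts]

# Example usage with the real client (assumes `pip install deepl` and an
# API key from your DeepL account):
#
#   import deepl
#   client = deepl.Translator("YOUR_AUTH_KEY")  # placeholder key
#   print(batch_translate(client, ["Hello, world."], "JA"))
```

Keeping the helper agnostic about the concrete client makes it easy to swap providers or to test against a stub that never touches the network.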
"What does it mean to translate another person's work? Are you translating just the words or the meaning? Jennifer Croft, a prize-winning translator, shares her thoughts. Cicero summed up the struggle of translating an author's work from one language to another well: he said, “I did not think I ought to count the words out to the reader like coins but to pay them by weight”. Jennifer Croft won the Man Booker prize for her translation work and she tells us what she thinks the key to good translation is. Guest: Jennifer Croft, Translator and author" #metaglossia_mundus
R.D. BURKE "SOME UNIQUE PROBLEMS IN THE DEVELOPMENT OF QUALIFIED TRANSLATORS OF SCIENTIFIC RUSSIAN Abstract—This paper outlines some of the problems encountered in the development of qualified translators of scientific Russian. The author describes some of his conclusions after two and a half years of teaching scientific Russian to qualified scientists. Recommendations are made for improving instruction in technical Russian and also for improving the quality of finished translations." #metaglossia_mundus
"Translating values across borders in transnational education Cultural differences can present universities with unique challenges in transnational education. By fostering an open dialogue with host nations, institutions can better understand each other’s values Values are a lodestar for a university’s mission and strategy, steering its identity, decisions and interactions with the wider world. Understanding how values can be translated in different regional settings is an urgent question for universities expanding their campus footprint to new territories. At a round table hosted in partnership with the University of Nottingham Malaysia at the 2024 THE Asia Universities Summit, industry leaders in the region discussed the common challenges faced in transnational education. Kylie Colvin, chief strategy and operations officer at the University of Nottingham Malaysia and co-chair of the UK International Campus Alliance Network (UK-ICAN), opened the session by highlighting the ongoing challenge in applying the university’s values in diverse cultural contexts. To overcome this challenge, it is important to comprehend how various stakeholders interpret the institution’s values, said Colvin. She emphasised that this challenge also presents an invaluable opportunity for learning from the host culture. Colvin highlighted that integrating local priorities would enable transnational education providers to effectively navigate cultural nuances and accomplish their objectives. Simon Guy, pro vice-chancellor of global for digital, international and sustainability at Lancaster University and co-chair of UK-ICAN, said that difficult conversations about “red lines” should begin at home. Universities should be bridge-builders, establishing a “community of communities” to create a positive impact in the wider world, Guy said. 
Operating in host nations with different values might require sensitivity around some issues but there remain opportunities for institutions to embody their core values. This is Lancaster University’s approach to developing an EDI strategy on its campus in China. “Rather than not have an EDI strategy and not talk about it, we are trying to use it where we can,” said Guy. “We are working with our staff around gender equality. We are working with our students around disability.” The panellists spoke about the need to innovate while respecting the political reality in different countries and improvising to implement university values as best they can. As definitions of values vary from country to country, they cannot always be translated like-for-like. However, by having an open dialogue between all stakeholders, institutions can find common ground, the panel agreed. However, international perspectives can sometimes augment values for the greater good. Accounting for regional cultural preferences can encourage buy-in to a broader goal, argued Priya Sharma, director of Principles of Responsible Management Education (PRME) at Monash University’s School of Business in Malaysia. “If you look at PRME, it started off being very American-focused,” she said. “But now it is opening up a lot of regions globally. If such values are to be implemented globally, then the regions have to come aboard as well.” Establishing closer links with regulators can help build the trust and understanding that is critical between a university and its host nation. Trust is essential for innovation, collaboration and for a university to make a case for its values – be they in issues surrounding gender equality, LGBTQ+ rights or teaching methodologies. Often cultural sensitivities can be navigated through sensitive diplomacy, the panel concluded. Find out more about the University of Nottingham Malaysia." #metaglossia_mundus
We present the latest updates on ChatGPT, Bard and other competitors in the artificial intelligence arms race.
Full Transcript
LAUREN LEFFER: At the end of November, it’ll be one year since ChatGPT was first made public, rapidly accelerating the artificial intelligence arms race. And a lot has changed over the course of 10 months.
SOPHIE BUSHWICK: In just the past few weeks, both OpenAI and Google have introduced big new features to their AI chatbots.
LEFFER: And Meta, Facebook’s parent company, is jumping in the ring too, with its own public-facing chatbots.
BUSHWICK: I mean, we learned about one of these new updates just minutes before recording this episode of Tech Quickly, the version of Scientific American’s Science Quickly podcast that keeps you updated on the lightning-fast advances in AI. I’m Sophie Bushwick, tech editor at Scientific American.
LEFFER: And I’m Lauren Leffer, tech reporting fellow.
[Clip: Show theme music]
BUSHWICK: So what are these new features these AI models are getting?
LEFFER: Let’s start with multimodality. Public versions of both OpenAI’s ChatGPT and Google’s Bard can now interpret and respond to image and audio prompts, not just text. You can speak to the chatbots, kind of like the Siri feature on an iPhone, and get an AI-generated audio reply back. You can also feed the bots pictures, drawings or diagrams, and ask for information about those visuals, and get a text response.
BUSHWICK: That is awesome. How can people get access to this?
LEFFER: Google’s version is free to use, while OpenAI is currently limiting its new feature to premium subscribers who pay $20 per month.
BUSHWICK: And multimodality is a big change, right? When I say “Large language model,” that used to mean text and text only.
LEFFER: Yeah, it’s a really good point. ChatGPT and Bard were initially built to parse and predict just text. We don’t know exactly what’s happened behind the scenes to get these multimodal models. But the basic idea is that these companies probably added together aspects of different AI models that they’ve built—say existing ones that auto-transcribe spoken language or generate descriptions of images—and then they used those tools to expand their text models into new frontiers.
BUSHWICK: So it sounds like behind the scenes we’ve got this sort of Frankenstein’s monster of a model?
LEFFER: Sort of. It’s less Frankenstein, more kind of like Mr. Potato Head, in that you have the same basic body just with new bits added on. Same potato, new nose.
Once you add in new capacities to a text-based AI, then you can train your expanded model on mixed-media data, like photos paired with captions, and boost its ability to interpret images and spoken words. And the resulting AIs have some really neat applications.
BUSHWICK: Yeah, I’ve played around with the updated ChatGPT, and this ability to analyze photos really impressed me.
LEFFER: Yeah, I had both Bard and ChatGPT try to describe what type of person I am based on a photo of my bookshelf.
BUSHWICK: Oh my god, it’s the new internet personality test! So what does your AI book horoscope tell you?
LEFFER: So not to brag, but to be honest both bots were pretty complimentary (I have a lot of books). But beyond my own ego, the book test demonstrates how people could use these tools to produce written interpretations of images, including inferred context. You know, this might be helpful for people with limited vision or other disabilities, and OpenAI actually tested their visual GPT-4 with blind users first.
BUSHWICK: That’s really cool. What are some other applications here?
LEFFER: Yeah, I mean, this sort of thing could be helpful for anyone—sighted or not—trying to understand a photo of something they’re unfamiliar with. Think, like, bird identification or repairing a car. In a totally different example, I also got ChatGPT to correctly split up a complicated bar tab from a photo of a receipt. It was way faster than I could’ve done the math, even with a calculator.
BUSHWICK: And when I was trying out ChatGPT, I took a photo of the view from my office window, asked ChatGPT what it was (which is the Statue of Liberty), and then asked it for directions. And it not only told me how to get the ferry, but gave me advice like “wear comfortable shoes.”
LEFFER: The directions thing was pretty wild.
BUSHWICK: It almost seemed like magic, but, of course…
LEFFER: It’s definitely not. It’s still just the result of lots and lots of training data, fed into a very big and complicated network of computer code. But even though it’s not a magic wand, multimodality is a significant enough upgrade that it might help OpenAI attract and retain users better than it has been. You know, despite all the news stories going around, fewer people have actually been using ChatGPT over the past three months. Usership dropped for the first time in June, by about 10%, then another 10% in July and about 3% in August. The prevailing theory is that this has to do with summer break from school. But still, losing users is losing users.
BUSHWICK: That makes sense. And this is also a problem for OpenAI, because it has all this competition. For instance, we have Google, which is keeping its own edge by taking its multimodal AI tool and putting it into a bunch of different products.
LEFFER: You mean like Gmail? Is Bard going to write all my emails from now on?
BUSHWICK: I mean, if you want it to. If you have a Gmail account, or even if you use YouTube or Google, if you have files stored in Google Drive, you can opt in and give Bard access to this individual account data. And then you can ask it to do things with that data, like find a specific video, summarize text from your emails, it can even offer specific location-based information. Basically, Google seems to be making Bard into an all-in-one digital assistant.
LEFFER: Digital assistant? That sounds kind of familiar. Is that at all related to the virtual chatbot pals that Meta is rolling out?
BUSHWICK: Sort of! Meta just announced it’s introducing not just one AI assistant but a whole set of AI personalities that you’re supposedly going to be able to interact with in Instagram, WhatsApp or its other products. The idea is it’s got one main AI assistant you can use, but you can also choose to interact with an AI that looks like Snoop Dogg and is supposedly modeled on specific personalities. You can also interact with an AI that has a specialized function, like a travel agent.
LEFFER: When you're listing all of these different versions of an AI avatar you can interact with, the only thing my mind goes to is Clippy from the old school Microsoft Word. Is that basically what this is?
BUSHWICK: Sort of. You can have, like, a Mr. Beast Clippy, where when you're talking with it, it does – you know how Clippy kind of bounced and changed shape – these images of the avatars will sort of move as if they're actually participating in the conversation with you. I haven't gotten to try this out myself yet, but it does sound pretty freaky.
LEFFER: Okay, so we've got Mr. Beast, we've got Snoop Dogg. Anyone else?
BUSHWICK: Let's see, Paris Hilton comes to mind. And there's a whole slew of these. And I'm kind of interested to see whether people actually choose to interact with their favorite celebrity version or whether they choose the less anthropomorphized versions.
LEFFER: So these celebrity avatars, or whichever form you're interacting with Meta’s AI in, will they also be able to access my Meta account data? I mean, there's so much concern out there already about privacy and large language models. If there's a risk that these tools could regurgitate sensitive information from their training data or user interactions, why would I let Bard go through my emails or Meta read my Instagram DMs?
BUSHWICK: Privacy policies depend on the company. According to Google, it’s taken steps to ensure privacy for users who opt into the new integration feature. These steps include not training future versions of Bard on content from user emails or Google Docs, not allowing human reviewers to access users’ personal content, not selling the information to advertisers, and not storing all this data for long periods of time.
LEFFER: Ok, but what about Meta and its celebrity AI avatars?
BUSHWICK: Meta has said that, for now, it won’t use user content to train future versions of its AI…but that might be coming soon. So, privacy is still definitely a concern, and it goes beyond these companies. I mean, literal minutes before we started recording, we read the news that Amazon has announced it’s training a large language model on data that’s going to include conversations recorded by Alexa.
LEFFER: So conversations that people have in their homes with their Alexa assistant.
BUSHWICK: Exactly.
LEFFER: That sounds so scary to me. I mean, in my mind, that's exactly what people have been afraid of with these home assistants for a long time, that they'd be listening, recording, and transmitting that data to somewhere that the person using it no longer has control over.
BUSHWICK: Yeah, anytime you let another service access information about you, you are opening up a new potential portal for leaks, and also for hacks.
LEFFER: It's completely unsettling. I mean, do you think that the benefits of any of these AIs outweigh the risks?
BUSHWICK: So, it's really hard to say right now. Google's AI integration, multimodal chat bots, and, I mean, just these large language models in general, they are all still in such early experimental stages of development. I mean, they still make a lot of mistakes, and they don't quite measure up to more specialized tools that have been around for longer. But they can do a whole lot all in one place, which is super convenient, and that can be a big draw.
LEFFER: Right, so they’re definitely still not perfect, and one of those imperfections: they’re still prone to hallucinating incorrect information, correct?
BUSHWICK: Yes, and that brings me to one last question about AI before we wrap up: Do eggs melt?
LEFFER: Well, according to an AI-generated search result gone viral last week, they do.
BUSHWICK: Oh, no.
LEFFER: Yeah, a screenshot posted on social media showed Google displaying a top search snippet that claimed, “an egg can be melted,” and then it went on to give instructions on how you might melt an egg. Turns out, that snippet came from a Quora answer generated by ChatGPT and boosted by Google’s search algorithm. It’s more of that AI inaccuracy in action, exacerbated by search engine optimization—though at least this time around it was pretty funny, and not outright harmful.
BUSHWICK: Google and Microsoft – they’re both working to incorporate AI-generated content into their search engines. And this melted egg misinformation struck me because it’s such a perfect example of why people are worried about that happening.
LEFFER: Mmm…I think you mean eggs-ample.
BUSHWICK: Egg-zactly.
[Clip: Show theme music]
Science Quickly is produced by Jeff DelViscio, Tulika Bose, Kelso Harper and Carin Leong. Our show is edited by Elah Feder and Alexa Lim. Our theme music was composed by Dominic Smith.
LEFFER: Don’t forget to subscribe to Science Quickly wherever you get your podcasts. For more in-depth science news and features, go to ScientificAmerican.com. And if you like the show, give us a rating or review!
BUSHWICK: For Scientific American’s Science Quickly, I’m Sophie Bushwick.
LEFFER: I’m Lauren Leffer. See you next time!
ABOUT THE AUTHOR(S)
Sophie Bushwick is an associate editor covering technology at Scientific American. Follow her on Twitter @sophiebushwick
Lauren Leffer is a tech reporting fellow at Scientific American. Previously, she has covered environmental issues, science and health. Follow her on Twitter @lauren_leffer