The doge meme teaches us so much about language learning and how challenging it can be to accurately combine words and patterns when using another language. The FLAX language system teaches us so much about how we can avoid using dodgy language by employing powerful open-source language analysis tools and authentic language resources.
The FLAX (Flexible Language Acquisition) project has won the LinkedUp Vici Competition for tools and demos that use open or linked data for educational purposes. This post accompanies our project submission to the LinkedUp challenge.
FLAX is an open-source software system designed to automate the production and delivery of interactive digital language collections. Exercise material comes from digital libraries (language corpora, web data, open access publications, open educational resources) for a virtually endless supply of authentic language learning in context. With simple interface designs, FLAX has been designed so that non-expert users — language teachers, language learners, subject specialists, instructional design and e-learning support teams — can build their own language collections.
The FLAX software can be freely downloaded to build language collections with any text-based content and supporting audio-visual material, for both online and classroom use. FLAX uses the Greenstone suite of open-source multilingual software for building and distributing digital library collections, which can be published on the Internet or on CD-ROM. Issued under the terms of the GNU General Public License, Greenstone is produced by the New Zealand Digital Library Project at the University of Waikato, and developed and distributed in cooperation with UNESCO and the Human Info NGO.
REMIX WITH FLAX
At FLAX we understand that content and data vary in terms of licensing restrictions, depending on the publishing strategies adopted by institutions for the usage of their content and data. FLAX has, therefore, been designed to offer a flexible open-source suite of linguistic support options for enhancing such content and data across both open and closed platforms.
Featuring the Latest in Artificial Intelligence &
Natural Language Processing Software Designs
Within the FLAX bag of tricks, we have the open-source Wikipedia Miner Toolkit, which links in related words, topics and definitions from Wikipedia and Wiktionary, as can be seen below in the Learning Collocations collection (click on the image to expand it and see the toolkit in action).
Featuring Open Data
Available on the FLAX website are completed collections and on-going collections development with registered users. Current research and development with the FLAX Law Collections is based entirely on open resources selected by language teachers and legal English researchers as shown in the table below. These collections demonstrate how users can build collections in FLAX according to their interests and needs.
Law Collections in FLAX
| Type of Resource | Number and Source of Collection Resources |
| --- | --- |
| Open Access Law research articles | 40 articles (DOAJ – Directory of Open Access Journals, with Creative Commons licenses permitting the development of derivatives) |
| MOOC lecture transcripts and videos (streamed via YouTube and Vimeo) | 15 lectures (Oxford Law Faculty, Centre for Socio-Legal Studies and Department of Continuing Education) |
| PhD Law thesis writing | 50–70 EThOS theses (sections: abstracts, introductions, conclusions) at the British Library (Open Access but not licensed as Creative Commons; permission for reuse granted by participating Higher Education Institutions) |
| Linked reference corpora | Lexico-grammatical phrases from the British National Corpus (BNC) of 100 million words, the British Academic Written English (BAWE) corpus of 2,500 pieces of assessed university student writing from across the disciplines, and the re-formatted Wikipedia corpus in English |
| Linked n-gram data | A reformatted Google n-gram corpus (English version) containing 380 million five-word sequences drawn from a vocabulary of 145,000 words |
FLAX Training Videos
Featuring Game-based Activities
Click on the image below to explore the different activities that can be applied to language collections in FLAX.
FLAX Apps for Android
We also have a suite of free game-based FLAX apps for Android devices. Now you can interact with the types of activities listed above while you’re learning on the move. Click on the FLAX app icon to the right to access and download the apps and enjoy!
A collaborative investigation is underway between FLAX and the Open Educational Resources Research Hub (OERRH), in which a cluster of revised OER research hypotheses is being used to evaluate the impact of developing and using open language collections in FLAX with informal MOOC learners as well as formal English language and translation students.
Current activity within open education can be characterised as having reached a beta phase of maturity. In much the same way that software progresses through a release life cycle, beta is the penultimate testing phase, after the initial alpha phase, in which the software is adopted beyond its original developer community.
Open education has now come to the attention of the mainstream press and traditional higher education, with the uptake of Open Educational Resources (OER) and with the advent of Massive Open Online Courses (MOOC). The participating masses can be likened to beta testers of these newly opened ways of educating. And, as with many recent software hits from Internet giants such as Google (e.g. Gmail), it is highly likely that open education will remain in a state of ‘perpetual beta’ development and testing, as we investigate and measure the impact of openness on education.
Funded by the William and Flora Hewlett Foundation, the OER Research Hub (OERRH) is currently spearheading the testing of OER hypotheses and is aggregating research findings through their OER Impact Map. The beta testing metaphor is also relevant to my research with the FLAX language project for the open development and testing of the FLAX Open Source Software (OSS). I have been promoting the FLAX OSS language system across different educational contexts (Fitzgerald, 2013), and I am now investigating user experiences of the software across multiple research sites in order to involve users in language collections building and further development of the OSS. I will be posting findings from this research on the TOETOE project blog throughout this year.
According to publisher and open source advocate, Tim O’Reilly:
Users must be treated as co-developers, in a reflection of open source development practices (even if the software in question is unlikely to be released under an open source license.) The open source dictum, ‘release early and release often‘, in fact has morphed into an even more radical position, ‘the perpetual beta’, in which the product is developed in the open, with new features slipstreamed in on a monthly, weekly, or even daily basis. It’s no accident that services such as Gmail, Google Maps, Flickr, del.icio.us, and the like may be expected to bear a ‘Beta’ logo for years at a time. (O’Reilly, 2005)
Open Fellowship with the OER Research Hub at the UK Open University
My first introduction to the UK Open University, henceforth referred to here as the OU, was when my Dad took me to see the film Educating Rita in 1985. It took two years to reach our picture house in provincial-town New Zealand, and I was just at that age – twelve going on thirteen – to appreciate this Pygmalion story of a woman breaking through the class barriers with an emancipatory distance education from the OU. My Dad also took me canvassing with him for the NZ Labour Party in those formative years, showing me first-hand that life for those in state-housing areas was very different from life in homes belonging to those who had been to university.
I never imagined that I’d be at the OU but I am now on my second fellowship here, this time as an Open Fellow with the OERRH based at the Institute of Educational Technology, and previously from 2011-2012 as a SCORE Fellow with the Support Centre for Open Resources in Education. When Rita’s character was a student at the OU in the early 1980s, open meant that admissions barriers had been removed from entry to formal study. This is still true today, with the OU’s 200,000 registered paying students coming from a variety of traditional and non-traditional backgrounds. It remains ground-breaking when we consider that most of the brick ‘n’ mortar higher education institutions of the world, including those with online learning offerings, still maintain strict admissions policies based on entrance examinations and prerequisites. Open has come to mean much more than this, however, with the rapid ascension of OERs and MOOCs. And, the OU have been no strangers to this rise in informal education, as demonstrated in their longstanding work with the BBC through their Open Media Unit, and in leading a bevy of wide-reaching open education projects, including OpenLearn and now FutureLearn.
Open Education Awash with Venture Capital
Open has come of age it seems, with pathways to courses, the sharing of courseware code and access to research becoming increasingly free and open to learners; and with models for educational delivery and accreditation being experimented with on an almost daily basis by educators and institutions. Getting an education is one thing, but coming up with sustainable and workable solutions for the world’s problems is increasingly understood as something outside of our reach and beyond the actual remit of education. While we discuss how to come up with the best business models for selling MOOCs and higher education to the masses, it might behoove us to ask how we can occupy education to evolve sustainable communities (human and non-human) on this planet rather than continue to commodify learning, teaching and research as products for an increasingly globalised world.
Weller’s position paper on the battle for open (2013) echoes concerns from open education advocates that key principles of openness in education are being distorted (see Wiley, 2013) and sold downstream through the imposed economic value system of a booming online education market (Education Sector Factbook, 2012). The open-washing of the open education movement, in favour of capitalising on ‘open’ education at a massive scale, is being viewed in much the same way as green activists view the green-washing of the green movement, with our world’s most pressing environmental problems playing second fiddle to the big business of so-called green solutions:
When they start offering solutions is the exact moment when they stop telling the truth, inconvenient or otherwise. Google “global warming solutions.” The first paid sponsor, www.CampaignEarth.org, urges “No doom and gloom!! When was the last time depression got you really motivated? We’re here to inspire realistic action steps and stories of success.” By “realistic” they don’t mean solutions that actually match the scale of the problem. They mean the usual consumer choices—cloth shopping bags, travel mugs, and misguided dietary advice—which will do exactly nothing to disrupt the troika of industrialization, capitalism, and patriarchy that is skinning the planet alive. But since these actions also won’t disrupt anyone’s life, they’re declared both realistic and a success. (Jensen, Keith & McBay, 2011)
Technology activists abound in support of the information wants to be free slogan from the 1960s. “Information wants to be free. Information also wants to be expensive. …That tension will not go away” (Brand, 1987). Activism that is focused on the tension surrounding the freedom of information continues to grow, but what of activism that is directed at the tension between education wanting to be open and education wanting to be exclusive? Education wanting to be for life and education wanting to be for jobs only? When will we witness the scaling of massive buildings like the Shard in London by education activists – let’s call one of them Rita – in protest of formal education’s direct relationship with the limitations of commercialization? When will we raise the red flag on the global business of buying and selling education as an endgame in itself?
The purpose of education is going untested in real terms, and the open education movement has only just begun educating in beta, as it were, by drawing on a pedagogy of abundance rather than a perceived pedagogy of scarcity (Weller, 2011). This shift in awareness and practice echoes Stewart Brand’s comments to Steve Wozniak, at the first Hackers’ Conference in 1984, on how information wants to be free because the cost of getting digitised information out keeps getting lower. A post on the economics of learning materials (Thomas, 2014), following a recent discussion on the oer-discuss list about the progression from reusable learning objects to open educational resources, draws another useful distinction, in Marxist terminology, between learning materials that have exchange value and those that have use value:
In the discussions about whether content has value, there is often a question about whether content can be bought and sold, whether it is “monetisable”. In marxist economics that is the type of value called exchange value: where a commodity can be exchanged for money. There is another type of value: use value. That is the extent to which a commodity is useful. It is about its utility, not its cost or price. I think most teaching resources can have a high use value both for primary use and secondary reuse, without that ever translating into an exchange value. They might be valuable but you can’t sell them. (Thomas, 2014)
It may be that Rita will draw on learning content and interactions from a variety of accessible places, including open publications and MOOCs, where ‘open’ equals free access only (for example, All Rights Reserved Coursera courses) rather than where open equals free plus legal rights to reuse, revise, remix and redistribute. It may also be that Rita will only begin to realise the use value of these educational resources – perhaps through joining Greenpeace or the Deep Green Resistance, for example – by synthesising her contributions with those of her peers for the development of a learning community that is informal, networked and open. And, most importantly, where her developing awareness will actively challenge the perpetuation and escalation of global problems that are on a truly massive scale.
In critiquing open education, Audrey Watters, in her keynote address at the Open Education 2013 conference, also proposes communities rather than technology markets as the saviors of education:
Where in the stories we’re telling about the future of education are we seeing salvation? Why would we locate that in technology and not in humans, for example? Why would we locate that in markets and not in communities? What happens when we embrace a narrative about the end-times — about education crisis and education apocalypse? Who’s poised to take advantage of this crisis narrative? Why would we believe a gospel according to artificial intelligence, or according to Harvard Business School [Christensen’s Disruptive Innovation theory, 2013], or according to Techcrunch…? (Watters, 2013)
Brand, S. (1987). The Media Lab: Inventing the Future at MIT. Viking Penguin, p. 202. ISBN 0-14-009701-5.
“…the attempt to cut out the middleman as far as possible and to give the learner direct access to the data” (Johns, 1991, p.30)
Importance is placed on empirical data when taking a corpus-informed and data-driven approach to language learning and teaching. Moving away from subjective conclusions about language based on an individual’s internalized cognitive perception of language and the influence of generic language education resources, empirical data enable language teachers and learners to reach objective conclusions about specific language usage based on corpus analyses. Tim Johns coined the term Data-Driven Learning (DDL) in 1991 with reference to the use of corpus data and the application of corpus-based practices in language learning and teaching (Johns, 1991). The practice of DDL in language education was appropriated from computer science where language is treated as empirical data and where “every student is Sherlock Holmes”, investigating the uses of language to assist with their acquisition of the target language (Johns, 2002:108).
A review of the literature indicates that the practice of using corpora in language teaching and learning pre-dates the term DDL, with work carried out by Peter Roe at Aston University in 1969 (McEnery & Wilson, 1997, p.12). Johns is also credited with coining the term English for Academic Purposes (Hyland, 2006). Johns’ oft-quoted words about cutting out the middleman tell us more about his DDL vision for language learning, in which teacher intuitions about language were put aside in favor of powerful text analysis tools that would give learners direct access to some of the most extensive language corpora available, the same corpora that lexicographers draw on for making dictionaries, so that learners could discover for themselves how the target language is used across a variety of authentic communication contexts. As with many brilliant visions for impactful educational change, however, his also appears to have come ahead of its time.
This post will argue that the original middleman in Johns’ DDL metaphor took on new forms beyond that of teachers getting in the way of learners having direct access to language as data. I will argue that the applied corpus linguistics research and development community introduced new and additional barriers to the widespread adoption of DDL in mainstream language education. Albeit well intentioned, and no doubt shaped by restrictions in research and development practices along the way, new middlemen were paradoxically perpetuated by the proponents of DDL, making theirs an exclusive rather than a popular sport with language learners and teachers (Tribble, 2012). And, with each new wave of research and development in applied corpus linguistics, new and puzzling restrictions confronted the language teaching and learning community.
The middleman in DDL has presented himself as a sophisticated corpus authority in the form of research and development outputs, including text analysis software designed by, and for, the expert corpus user with complex options for search refinement that befuddled the non-expert corpus user, namely language teachers and learners. Replication of these same research methods to obtain the same or similar results for uses in language teaching and learning has often been restricted to securing access to the exact same software and know-how for manipulating and querying linguistic data successfully.
Which language are you speaking?
He has been known to speak in programming languages, with his interfaces often requiring specialist trainers to communicate his simplest functions. Even his most widely known interface, the KWIC (Key Word In Context) display of linguistic data, with strings of search terms embedded in truncated snippets of language context, remains foreign-looking to the mostly uninitiated in language teaching and learning. In many cases, he has not come cheap either, and requirements for costly subscriptions to, and upgrades of, his proprietary software have been the norm, especially in the earlier days.
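For readers who have never met one, a KWIC display is conceptually very simple: each occurrence of a search term is centred on its own line, with a fixed window of context on either side. The following minimal Python sketch shows the idea; the sample text and search term are my own illustrations, not FLAX code or any particular concordancer’s implementation.

```python
import re

def kwic(text, keyword, width=30):
    """Return Key Word In Context lines: each match of `keyword`
    centred, with `width` characters of context on either side."""
    lines = []
    for match in re.finditer(r'\b%s\b' % re.escape(keyword), text, re.IGNORECASE):
        start, end = match.start(), match.end()
        left = text[max(0, start - width):start].rjust(width)
        right = text[end:end + width].ljust(width)
        lines.append('%s [%s] %s' % (left, match.group(0), right))
    return lines

sample = ("Corpus data give learners direct access to language. "
          "A corpus is a principled collection of texts, and corpus "
          "analysis reveals patterns of authentic usage.")

for line in kwic(sample, 'corpus'):
    print(line)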
In particular, with reference to English Language Teaching (ELT), he has criticized many widely used ELT course book publications and their language offerings for ignoring his research findings based on evidence for how the English language is actually used across different contexts of use. In response, a few ELT course book publishers have clamored around him to help get his words out, for a price, but in so doing have rendered his corpus analyses invisible, in turn creating even more of a dependency on course books rather than stimulating autonomy among language teachers and learners in the use of corpora and text analysis tools for DDL. And, because publishers were primarily confining him to the course book and sometimes CD-ROM format, there were only so many language examples from the target corpora that could fit between the covers of a book, and only the most frequent language items made it onto the compact disc.
The Oxford Collocation Dictionary for Students of English (2nd edition, 2009, Oxford University Press), based on the British National Corpus (BNC), is one example, where high-frequency collocations for very basic words like any and new predominate and where licensing restrictions permit only one computer installation per CD-ROM. Further restrictions compound the openness issue with the use of closed corpora in leading corpus-derived ELT books such as the Cambridge University Press (CUP) publication From Corpus to Classroom (O’Keeffe, McCarthy & Carter, 2007), which might have been more aptly entitled From Corpus to Book: it draws heavily on the closed Cambridge and Nottingham Corpus of Discourse in English (CANCODE), held by Cambridge University Press and Nottingham University, and recommends the use of the proprietary concordancing programs WordSmith Tools and MonoConc Pro, thereby rendering any replication of its analyses inaccessible to its readers.
Mainstream language teacher training bodies continue to sidestep the DDL middleman in the development of their core training curricula (for example, the Cambridge ESOL exams) due to the problems he poses with accessibility in terms of cost and complexity. Instead, English language teacher training remains steadily focused on how to select and exploit corpus-derived dictionaries, with reference to training learners in how to identify, for example: definitions, derivatives, parts of speech, frequency, collocations and sample sentences. In the same way that corpus-derived course books do not render corpus analyses transparent to their users, training in dictionary use does not bring teachers and their learners any closer to the corpora they are derived from.
Cambridge English Corpus
Michael McCarthy presented ‘Corpora and the advanced level: problems and prospects’ at IATEFL Liverpool 2013. One of the key take-away messages from his talk was that learners of more advanced English receive little return on investment once the highest-frequency items of English vocabulary have been acquired (he referred to the top 2,000 words from the first wordlist of the British National Corpus, which make up about 80% of standard English use). For each subsequent wordlist of 2,000 words, the frequency of usage drops considerably, so the time and money you might spend on yet more general English language classes may not be affordable or feasible. This has particular implications for learning English for Specific Purposes (ESP), including English for Academic Purposes (EAP), which many would argue is always concerned with developing specific academic English language knowledge and usage within specific academic discourse communities.
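The coverage figures McCarthy cited are straightforward to compute for any text once you have a frequency wordlist. The sketch below shows the calculation in Python; the tiny wordlist and sample text are invented stand-ins, where in practice the wordlist would be something like the top 2,000 lemmas of the BNC.

```python
from collections import Counter

def coverage(tokens, wordlist):
    """Proportion of running tokens in a text covered by a wordlist."""
    known = sum(1 for t in tokens if t in wordlist)
    return known / len(tokens)

# Toy stand-ins: in practice the wordlist would be, e.g., the
# top 2,000 lemmas of the British National Corpus.
frequency_list = {'the', 'of', 'and', 'a', 'in', 'to', 'is', 'language', 'learn'}
text = 'the value of a corpus is in the language data it offers to the learner'
tokens = text.split()

print(round(coverage(tokens, frequency_list), 2))
```

Run over a real learner text with successive 2,000-word bands, the same calculation makes McCarthy’s diminishing-returns point visible: each additional band adds a much smaller slice of coverage than the one before.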
I caught Michael McCarthy on his way out of the presentation theatre, and he kindly agreed to walk and talk while rushing to catch his train out of Liverpool. Would the Cambridge English Corpus be made available anytime soon for non-commercial educational research and materials development purposes, I asked? I hastened to add the possibilities, and the real-world need, for promoting corpus-based resources and practices in open and distance online education as well as in traditional classroom-based language education. He agreed that the technology had become a lot better for finally realising DDL within mainstream language teaching, learning and materials development. Taking concordance line printouts into ELT classrooms had never really taken off in his estimation, and I would have to agree with him on that point. He indicated that it would be unlikely for the corpus to become openly available anytime in the foreseeable future, however, due to the large amount of private investment in its development, with access restricted to the participating stakeholders on the project.
But what would the real risk be in opening up this corpus to further educational research and development for non-commercial purposes, with derivative resources made freely available online? Wouldn’t this give the corpus added sustainability, with new lives and further opportunities for exploitation that could advance our shared understanding of how English works across different contexts, using current, high-quality examples of language in context? More importantly, wouldn’t this give more software developers the chance to build more interfaces using the latest technology, and more ELT materials developers, including language teachers, the chance to show different derivative resource possibilities for effectively using the corpus in language teaching and learning?
A ‘non-commercial educational purposes only’ stipulation could be used in all of the above resource development scenarios. Indeed, these could all be linked back to the Cambridge English Corpus project website as evidence of the wider social and educational impact of the initial investment. This is what will be happening with most publicly funded research projects in the UK following recommendations from the Finch report, which come into effect in April 2014. It follows that Open Educational Resources (OER) and Open Educational Practices (OEP) will make expertise readily available once Open Access research publishing is compulsory for all RCUK and EPSRC funding grants, enabling the development of research-driven open teaching and learning derivatives. Privately funded research projects like this one from CUP could also be leading in this area of open access.
Corpora such as the British National Corpus (BNC), the British Academic Written English (BAWE) corpus, Wikipedia and Google linguistic data have all been developed into language learning and teaching resources that are openly available on the web. In the following sections, I will refer to applied corpus linguistics research and development outputs from leading researchers who have been making their wares freely available, if not openly re-purposeable by other developers, as in the example of the FLAX language project’s Open Source Software (OSS). And, hopefully, these corpus-based resources are getting easier for the non-expert corpus user to access.
“For the time being” CUP are providing free access to the English Vocabulary Profile website of resources based on the Cambridge English Corpus (formerly known as the Cambridge International Corpus), “the British National Corpus and the Cambridge Learner Corpus, together with other sources, including the Cambridge ESOL vocabulary lists and classroom materials.” Below is a training video resource from CUP available on YouTube, which highlights some of the uses for these freely available resources in language learning, teaching and materials development. This is a very useful step for CUP to be taking in making corpus-based resources and practices more accessible to the mainstream ELT community.
Open practices in applied corpus linguistics
Enter those applied corpus linguistics researchers and developers who have made some if not all of their text analysis tools and Part-Of-Speech-tagged corpora freely accessible via the Web to anyone who is interested in exploring how to use them in their research, teaching or independent language learning. Well-known web-based projects include Tom Cobb’s resource-rich Lextutor site; Mark Davies’ BYU-BNC (Brigham Young University – British National Corpus) concordancer interface and the Corpus of Contemporary American English (COCA) with WordandPhrase (with WordandPhrase training video resources on YouTube) for general English and English for Academic Purposes (EAP); Laurence Anthony’s AntConc concordancing freeware for Do-It-Yourself (DIY) corpus building (with AntConc training video resources on YouTube); and the Sketch Engine by Lexical Computing, which offers some open resources for DDL. Open invitations from the Lextutor and AntConc project developers seeking input on the design, development and evaluation of existing and proposed project tools and resources are made by way of social networking sites, the Lextutor Facebook group and the AntConc Google groups discussion list. Responses usually come from a steady number of DDL ‘geeks’, however, namely those who have reached a level of competence and confidence with discussing the tools and resources therein. And, most of those actively participating in these social networking sites are also engaging in corpus-based research.
Data-Driven Learning for the masses?
My own presentation at IATEFL Liverpool was based on my most recent project with the University of Oxford IT Services, providing and promoting OSS interfaces from the FLAX language project that increase access to the BNC and BAWE corpora, both managed by Oxford. In addition, the same OSS developed by FLAX has been simplified with easy-to-use interfaces that enable language teachers to build their own open language collections for the web. Such collections, built with OER from Oxford lecture podcasts licensed as Creative Commons content, have also been demonstrated by the TOETOE International project (Fitzgerald, 2013).
The following two videos from the FLAX language collections show their OSS for using corpus-based resources in ELT that are accessible both in terms of simplicity and in terms of openness. The first training video demonstrates the Web as corpus and how this resource has been effectively mined and linked to the BNC for enhancement of both corpora for uses in DDL. The second training video demonstrates how to build your own Do-It-Yourself corpora using the FLAX OSS and Oxford OER. With open corpus-based resources the reality of DIY corpora is becoming increasingly possible in DDL research and teaching and learning practice (Charles, 2012; Fitzgerald, in press).
So, go ahead, and cut out the middleman in data-driven learning.
FLAX Web Collections (derived from Google linguistic data):
The Web Phrases and Web Collocations collections in FLAX are based on another extensive corpus of English derived from Google linguistic data. In particular, the Web Phrases collection helps you identify problematic phrasing in your writing by fine-tuning the words that precede and follow phrases you would like to use, drawing on this large database of English from Google. You can then substitute awkward phrasing with naturally occurring phrases from the collection to improve the structure and fluency of your writing.
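The intuition behind this kind of phrase-checking is that natural phrasings occur far more often in a large n-gram corpus than awkward ones. The sketch below illustrates that intuition only; the n-gram counts are invented toy figures, and the ranking function is my own illustration, not FLAX’s actual algorithm or data.

```python
# Toy n-gram counts standing in for a large web-derived corpus such as
# the Google n-gram data FLAX draws on; the figures are invented.
ngram_counts = {
    'play an important role': 9200,
    'play a big role': 3100,
    'make an important role': 12,
    'do an important role': 3,
}

def best_phrasing(candidates):
    """Rank candidate phrases by corpus frequency, most natural first.
    Unseen phrases count as zero and sink to the bottom."""
    return sorted(candidates, key=lambda p: ngram_counts.get(p, 0), reverse=True)

candidates = ['make an important role', 'play an important role', 'do an important role']
print(best_phrasing(candidates)[0])
```

A learner unsure whether a role is something you “make”, “do” or “play” gets an immediate, evidence-based answer: the corpus overwhelmingly prefers one of the three.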
FLAX Do-It-Yourself Podcast Corpora – Part One:
Learn how to build powerful open language collections through this training video demonstration, featuring audio and video podcast corpora built with the FLAX language tools and open educational resources (OER) from the OpenSpires project at the University of Oxford and from TED Talks.
Fitzgerald, A. (In Press). Openness in English for Academic Purposes. Open Educational Resources Case Study based at Durham University: Pedagogical development from OER practice. Commissioned by the Higher Education Academy (HEA) and the Joint Information Systems Committee (JISC), United Kingdom.
Johns, T. (1991). From printout to handout: grammar and vocabulary teaching in the context of data-driven learning. In: T. Johns & P. King (Eds.), Classroom Concordancing. English Language Research Journal, 4: 27-45.
Johns, T. (2002). ‘Data-driven learning: the perpetual challenge.’ In: B. Kettemann & G. Marko (Eds.), Teaching and Learning by Doing Corpus Analysis. Amsterdam: Rodopi. 107-117.
Hyland, K. (2006). English for Academic Purposes: An Advanced Handbook. London: Routledge.
McEnery, T. & A. Wilson. (1997). Teaching and language corpora. ReCALL, 9 (1): 5-14.
O’Keeffe, A., McCarthy, M., & Carter R. (2007). From Corpus to Classroom: language use and language teaching. Cambridge: Cambridge University Press.
Oxford Collocation Dictionary for Students of English (2nd Edition) (2009), Oxford University Press.
This is the seventh post in a blog series based on the TOETOE International project with the University of Oxford, the UK Higher Education Academy (HEA) and the Joint Information Systems Committee (JISC). I have also made this post in the Open Educational Practices (OEP) series available as a .pdf on Slideshare.
Before one could get started with building a corpus, what was generally needed was a command of standard industry tools in corpus linguistics for translation, summarisation, information extraction, and the formatting of data for analysis in linguistic software programs. It is safe to say that language teachers, and many researchers without a background in computer science, will never have the time or the interest for these processes. This is why simple interface designs like those in the FLAX language project, designed for the non-expert corpus user, namely language teachers and learners, are enabling teaching practitioners to be part of the language collections building process.
Stable open source software (OSS) has been designed to enable non-corpus specialists to build their own language collections, consisting of text and audio-visual content, that benefit from powerful text analysis tools and resources in FLAX. These collections can be hosted directly on the FLAX website under the registered users section, or the OSS can be hosted on the user's preferred website or content management system. A Moodle version of the FLAX tools has also been developed, and new tools and interactive games are currently in beta development for stable release later in 2013.
This post from the TOETOE International project includes links to two training videos for building do-it-yourself (DIY) podcast corpora as can be seen below. These demonstrate new OSS tools and interfaces from FLAX for developing interactive open language collections, based on creative commons resources from the Oxford OpenSpires project and a TED Talk given by Oxford academic, Ian Goldin. These training videos and others in the FLAX series from this project will be promoted via Russell Stannard’s Teacher Training Videos (TTV) site to reach wider international audiences including those who do not have access to YouTube. Further plans for the re-use of resource outputs from this project include the translation of the FLAX training videos into Chinese, Vietnamese, and Portuguese. And, later in 2013, the FLAX project will be releasing further OSS for enabling teachers to build more interaction into the development of DIY open language collections.
FLAX Do-It-Yourself (DIY) Podcast Corpora with Oxford OER part one
Learn how to build powerful open language collections through this training video demonstration. Featuring audio and video podcast corpora using the FLAX Language tools and open educational resources (OER) from the OpenSpires project at the University of Oxford and TED Talks.
FLAX Do-It-Yourself (DIY) Podcast Corpora with Oxford OER part two
Continue to learn how to make powerful open language collections and how to build interactivity into those collections with a wide variety of automated interactive language learning tasks through this demonstration training video. Featuring audio and video podcast corpora, using the FLAX Language tools and open educational resources (OER) from the OpenSpires project at the University of Oxford and TED Talks.
It is anticipated that these open tools and resources will provide simple and replicable pathways for other higher education institutions to develop language support collections around their own OER podcasts, for wider uptake and accessibility with international audiences. The training videos demonstrate the variety of activities built into the FLAX OSS that enable teachers to manipulate texts within the collections and create language-learning interaction with the open podcast content. The following slideshow from the 2013 eLearning Symposium with the Centre for Languages, Linguistics, and Area Studies (LLAS) at the University of Southampton shows the interactivity that can be built into DIY corpora with FLAX. It also highlights that corpus-based resources and Data-Driven Learning did not feature in the A-Z of Technology in EAP compiled by the organisers of the recent BALEAP Professional Issues Meeting on Blending EAP with Technology at Southampton. This points to a lack of awareness of corpus-based resources in EAP: no studies have been conducted on the usability of most concordancing software interfaces in mainstream language education, and comprehensive research on technology in EAP is likewise lacking.
TED (Ideas worth Spreading) encourages the re-use of its creative commons content for non-commercial educational purposes, and many stakeholders have re-used TED Talks and YouTube through the TED-Ed programme. However, adding value to an open resource can also lead ELT materials developers to place a paywall around the support resource, as can be seen below in the English Attack language learning software interface for TED Talks, free movie trailers and the like. Perhaps this says something about the ELT industry, which views OER as yet more resources to make money from: high-quality, accessible resources, no less, that have been expressly released for sharing and the promotion of understanding…
A lot of talk around defining current and trending practices in EAP can be tuned into via open as well as proprietary channels. In this section, I will refer to new-found open practices in EAP which are embracing Web 2.0 technologies against a backdrop of closed practices in EAP academic publishing and within subscription-only EAP memberships. I will open up discussion around these different practices to sketch out common ground for where EAP could be heading with respect to global outreach.
Toward open practices in EAP
Recent months have seen a steady opening up of practices for sharing expertise and resources in EAP. The new EAP teaching blog based at Nottingham University, a discussion-based side-shoot of their new Masters programme in EAP teaching, uses WordPress, the most widely used open-source blogging software. Thanks to our friends in Canada, EAP tweetchat sessions run on Twitter under the hashtag #EAPchat every first and third Monday of the month, bringing together EAP practitioners who wish to participate in global EAP discussions and suggest topics for upcoming sessions. An archived transcript page is made available at the end of each #EAPchat session.
Free webinars from Oxford University Press (OUP), the largest academic publishing house in the world, are also broadcasting talk on EAP to the world. Julie Moore, who has collaborated on the new Oxford EAP book series, has contributed free webinars with OUP attended by EAP practitioners from around the world. A review of one of Julie's webinars on academic grammar can be found on the OUP-sponsored ELT global blog. Wouldn't it be great if more EAP practitioners opened up their practice in this way, suggesting areas of expertise in EAP that they would like to contribute and broadcast via webinars with OUP's considerable market outreach?
The EAP community in the UK mainly gathers around BALEAP, with its Professional Issues Meetings (PIMs), accreditation scheme, biennial conference and lively email discussion list. There is a noticeable push-pull between open and closed EAP practices within BALEAP which I would like to bring into the open for discussion. Thanks to forward thinking from Steve Kirk, openness was built into the Durham PIM on the EAP Practitioner in June of this year, making it the first BALEAP event to have a Twitter hashtag. Since the PIM he has also been curating a useful EAP practitioner resources site with Scoop.it!
There does seem to be a willingness on the part of BALEAP members to explore new technologies so that their discussions around issues in EAP are openly available. However, the BALEAP email discussion list mentioned above is the only one of the half-dozen similarly JISC-hosted email discussion lists I belong to that is closed off behind the BALEAP membership subscription paywall. The others I subscribe to are all free and open, and transcripts of their members' discussions can be searched on the web through the JISC email archives. Keeping the list closed was a BALEAP executive committee decision, and I question whether it best reflects the current drive toward openness among BALEAP members who are interested in sharing their insights and expertise with those around the world for whom BALEAP membership is not an affordable option.
BALEAP recently added the strap-line the global forum for EAP practitioners to its website. Formerly the British Association of Lecturers in EAP (hence the continuity from the acronym to the name BALEAP), some of its event and research outputs can be found on its website, but others can only be accessed via the subscription-only Journal of English for Academic Purposes (JEAP). And you can probably guess where I'm going here: concerns around openness, or the lack thereof, with respect to being the global EAP practitioner forum…
Nonetheless, an invaluable EAP resource that BALEAP has put out onto the wild web is the EAP teacher competency framework. An EAP practitioner portfolio mentoring programme is currently being piloted, and there is talk of matching BALEAP's EAP teaching competencies with the UK Professional Standards Framework (UKPSF) at the HEA. Once again, though, for non-UK and freelance EAP practitioners who do not work for UK higher education institutions subscribing to the HEA, such an alignment of frameworks may be neither suitable nor relevant. That said, the essence of the UKPSF is useful, and perhaps with the current OER International programme at the HEA we will see ownership of the UKPSF go international? HEA accreditation as a UK body will remain a reality, however, so it will be interesting to see what the HEAL working party at BALEAP, which is collaborating with the HEA, comes up with in response to shaping the identity of BALEAP, which aspires to be known as the global forum for EAP practitioners.
Now that a Web Resources Sub Committee (WRSC) has been formed with other technologically and OER-oriented EAPers at BALEAP, we may yet see things open up. Below is the presentation Ylva Berglund Prytz and I (both on the WRSC at BALEAP) gave on Openness in English for Specific Academic Purposes (ESAP) at the PIM in Sheffield in November 2011.
Elsevier is the publisher of JEAP, and experience shows that open access in academic publishing has come about through the pressure tactics of academic communities of practice lobbying for green and gold standard open access publications in their fields. Open Access Week, with its slogan set the default to open, is coming up again on October 22nd.
Moving to open access research publications depends on the culture of the academic research community. It will take EAP practitioners and researchers working in privileged, well-resourced institutions, which can easily afford institutional subscriptions to memberships like BALEAP, seriously considering open access and the potential global reach of research into EAP. It will also take those EAP practitioners who are working off their institutional radars, so to speak, experimenting with Web 2.0 technologies to get their message and expertise out there for global interaction around issues in EAP practice and research. Something I picked up from Steve Kirk's Scoop.it! account is a recent book setting an open trend in EAP publishing, Writing Programs Worldwide: Profiles of Academic Writing in Many Places, which is published in a free digital online format as well as a pay-for print version. This echoes what publishers are doing with big names in more open fields, such as Bloomsbury Academic's publication of The Digital Scholar by Martin Weller. Exciting times and opportunities lie ahead for EAP publishing.
English for Specific Academic Purposes with data-driven learning resources
It seems no great coincidence that Tim Johns, who coined the term Data-Driven Learning (DDL) in 1994, had also come up with the term English for Academic Purposes (EAP) in 1974 (Hyland, 2006). According to Chris Tribble's preliminary results from the latest intake of his DDL survey (announced in the TaLC closing keynote address), EAP practitioners still make up a high percentage of respondents, indicating greater uptake of corpus-based resources and practices in EAP than in, for example, EFL / ESL.
Open corpus-based tools and resources have the potential to equip and enable EAP practitioners to develop relevant ESAP materials. Awareness of and training in these open corpus-based resources will need to be shared across the EAP community, however, to ensure that we are crowd-sourcing our expertise and our resources in this area. Clicking on the image below will take you to a talk I gave at the Open University in the UK on addressing academic literacies with corpus-based OER, inspired by the Tribble DDL survey and the lead-up to the TaLC10 conference. It was an added bonus to have a member of the BAWE corpus developer team in the audience that day and to receive positive feedback on how FLAX has opened up the BAWE in collaboration with TOETOE and the Learning Technologies Group Oxford.
Over the course of this academic year FLAX and TOETOE will continue to build onto work around opening up research corpora like the BAWE and the BNC managed by the Oxford Text Archive for developing resources for ESAP. We will also be engaging with various stakeholder groups through f2f workshops, online surveys and interviews for open corpus-based resources evaluation which I will be sharing insights from on this blog.
One final word on OER and where corpus-based resources might play a significant role in making higher education more accessible to the estimated 100 million learners worldwide who currently qualify to study at university level but do not have the means to do so (UNESCO, 2008). Because English is the educational lingua franca, open educationalists are going to source support resources for academic English from the approaches and materials that are currently popular and openly available to re-use under creative commons licences. This throws up interesting issues around specificity in EAP for supporting learners with discipline-specific English.
A parallel universe in EAP materials development
It would be an understatement to say that the academic publishing world is undergoing a radical transformation with the arrival of digital and open publishing formats which are democratising publishing as we know it. Niko Pfund, President of Oxford University Press (USA), discusses the ways in which technology affects reading, scholarship, publishing and even thinking in a presentation he gave at Oxford recently which you can access by clicking on the cartoon image above.
I learned a lot from this podcast, including OUP's commitment since 2003 to publishing all research monographs in both digital and print formats. I also learned of their admiration for what Wikipedians have done to open up knowledge and publishing through human crowd-sourcing that utilises open technologies and platforms. A parallel here to something brought up repeatedly at the EduWiki conference: academic publishing houses like OUP are well placed to open up the disciplines in the same way as Wikipedia, bringing the voices of the academy into the public sphere through means of communication more accessible than research publications, and linking that research to current world events to gain wider relevance and readership.
Pfund refers to messy experimental times in academic publishing, with lots of new business models currently being explored to spearhead changes in publishing. OUP heavily subsidises and gives away a lot of published resources, including ELT textbooks, to the developing world, but not yet under open licences (someone please correct me if I'm wrong here) that would let practitioners working in under-resourced communities re-mix and re-distribute these same resources.
OUCS and OUP are literally down the road from one another, a parallel universe as it were. The former is research, learning and teaching focused, with a strong commitment to public scholarship; the latter is focused on exploring new practices and business models for delivering the best in academic publishing. Arguably, there is a lot of overlap that can be tapped into here for the collaborative development of open corpus-based resources and practices for the global ELT market.
In-house EAP materials development
EAP teachers have been developing in-house EAP materials in response to the generic EAP teaching resources available on the mainstream market, as a means of meeting the real needs of their students going on to any number of degree programmes. However, as I mentioned in section 2 of this blog post, many of these in-house EAP materials make use of third-party copyrighted texts and therefore cannot be shared beyond the secret garden of the classroom or the institutional password-protected VLE. An enormous opportunity presents itself here to EAP practitioners and corpus linguists alike to push out resources in English for Specific Academic Purposes (ESAP) using open Data-Driven Learning (DDL) methods, texts, tools and platforms for sharing OER for ESAP. A significant cultural shift in practice will be required, however, to realise this vision of flexible and open ESAP resources that can be adapted for use in multiple educational contexts both off- and online. Once again, in subsequent blog posts, I will be presenting open educational practices and open research methods to open up discussion of ways forward with this particular global EAP vision.
Alexander, O., Bell, D., Cardew, S., King, J., Pallant, A., Scott, M., Thomas, D., & Ward Goodbody, M. (2008) Competency framework for teachers of English for Academic Purposes, BALEAP.
Hyland, K. (2006). English for Academic Purposes: An Advanced Handbook. London: Routledge.
Johns, T. (1994). From Printout to Handout: Grammar and Vocabulary Teaching in the Context of Data-driven Learning. In Odlin, T. (ed.), Perspectives on Pedagogical Grammar: 27-45. Cambridge: Cambridge University Press.
I confess that I spend most of my time listening to BBC Radio 3. The parallel that I will draw here is that I was never formally educated in classical music in the same way as I have never worked toward formal qualifications in corpus linguistics during any of my studies. Because I am working broadly across the areas of language resources development and enhancing teaching and learning practices through technology it was only a matter of time, however, before I started exploring and toying with corpus-based resources. I met Dr. Shaoqun Wu of the FLAX project while at a conference in Villach, Austria in 2006 and by 2007 I had begun to delve into the world of open-source digital library collections development with the University of Waikato’s Greenstone software, developed and distributed in cooperation with UNESCO, for realising the much broader vision of reaching under-resourced communities around the world with these open technologies and collections.
Bridging Teaching and Language Corpora (TaLC)
Let’s fast forward to the 2012 Teaching and Language Corpora Conference in Warsaw, Poland. Although I have participated in corpus linguistics conferences before, this was my first time to attend the biennial TaLC conference. TaLCers are very much researchers working in the area of corpus linguistics and DDL and this conference was themed around bridging the gap between DDL research and uses for corpus-based resources and practices in language teaching and learning.
One of the keynote addresses, Let's Marry by James Thomas, called for greater connectedness between those working in DDL research and those working in pedagogy and language acquisition. At one point he asked for a show of hands from those in the audience who knew of big names in the ELT world, including Scrivener, Harmer and Thornbury. Only a few raised their hands. He also made the point that these same ELT names don't make their way into citations for research on DDL. Interestingly, I was tweeting points from the sessions I attended to relevant EAP and ELT / EFL / ESL communities online, but without a TaLC conference hashtag. It would've been great to have the other TaLCers tweeting along with me, raising questions and noting key take-away points to engage interested parties who could not attend in person, and to catalogue a Twitter feed for TaLC that anyone could search via the Internet at a later point in time. It would've also been great to record keynote and presentation speakers as webcasts for later viewing. When approached about these issues afterwards, however, the conference organisers did express interest in amplifying their events by building such mechanisms for openness into their next conference.
Prising open corpus linguistics research in Data-Driven Learning (DDL)
Problems with accessing and successfully implementing corpus-based resources in language teaching and learning scenarios have been numerous. As I discussed in section 2 of this blog, many of the concordancing tools referred to in the research have been subscription-based proprietary resources (for example, the Wordsmith Tools), most of which have been designed with at least the intermediate-level concordance user in mind. These tools can easily overwhelm language teaching practitioners and their students with the complex processing of raw corpus data, presented via complex interfaces with too many options for refinement. Mike Scott, the main developer of the Wordsmith Tools, has also released a free version of his concordancing suite with less functionality, which would suffice for many language teaching and learning purposes. He attended my presentation on opening up research corpora with open-source text analysis tools and OER, and was very open-minded, as were the other TaLCers I met at the conference, regarding new and open approaches for engaging teachers and learners with corpus-based resources.
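For readers who have never used a concordancer, the core display these tools produce, a keyword-in-context (KWIC) listing, is conceptually simple, even if the full tools are not. A minimal sketch (my own illustration, not how Wordsmith or any other suite is actually implemented) might look like this:

```python
# A minimal keyword-in-context (KWIC) concordancer: every occurrence of the
# search word is shown centred, with a few words of context on either side.
def kwic(text, keyword, width=3):
    """Return one formatted line per occurrence of `keyword` in `text`,
    with up to `width` words of left and right context."""
    words = text.lower().split()
    lines = []
    for i, w in enumerate(words):
        if w == keyword.lower():
            left = " ".join(words[max(0, i - width):i])
            right = " ".join(words[i + 1:i + 1 + width])
            lines.append(f"{left:>30} | {w} | {right}")
    return lines

sample = "The results suggest that the results of the survey were mixed"
for line in kwic(sample, "results"):
    print(line)
```

Scanning the aligned column of contexts is what lets a learner spot recurring patterns around a word; the complexity of full concordancing suites lies in the corpus handling, sorting and statistics layered on top of this basic display.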
There are many freely available annotated bibliographies compiled by corpus linguists which you can access on the web for guidance on published research in corpus linguistics. Many researchers working in this area are also putting pre-print versions of their publications on the web for greater access and dissemination of their work; see Alex Boulton's online presence for an example. As hinted at in part 2 of this blog, however, much of this published research takes closed formats: articles, chapters and the few teaching resources available are often restricted to, and embedded within, subscription-only journals or pricey academic monographs. For example, Berglund Prytz's 'Text Analysis by Computer: Using Free Online Resources to Explore Academic Writing' (2009) is a great written resource for getting started with OER for EAP, but ironically the journal it is published in, Writing and Pedagogy, is not free. Lancaster University is home to the openly available BNCweb concordancing software, for which you need only register to install a free standard copy on your personal computer. A valuable companion resource on BNCweb was published by Peter Lang in 2008, but once again this is not openly accessible to interested readers who cannot afford to buy the book. The great news is that the main TaLC10 organiser, Agnieszka Lenko, has spearheaded openness with this most recent event by trying to secure an Open Access publication of the TaLC10 proceedings papers with Versita publishers in London.
DIY corpora with AntConc in English for Specific Academic Purposes (ESAP)
At TaLC10 I discovered a lot of overlap with Maggie Charles' work on building DIY corpora with EAP postgraduate students using the AntConc freeware by Laurence Anthony. We had also included workshops on AntConc for students in our OER for EAP cascade at Durham, so it was great to see another EAP practitioner working in this way who had gathered data from her ongoing work for presentation and discussion at the conference. Many of her students at the University of Oxford Language Centre are working toward dissertation or thesis writing, which raises interesting questions around enabling EAP students to become proficient in developing self-study resources for English for Specific Academic Purposes (ESAP). Her recent paper in the English for Specific Purposes Journal (2012) points to AntConc's flexibility for student use: it is freeware that can be installed on any personal computer, or on a flash drive for portable use. Laurence Anthony's website also offers a lot of great video training resources on how to use AntConc. Charles also notes the potential AntConc offers students pursuing inter-disciplinary studies in higher education for building select corpora. That said, for the few students in more obscure subject disciplines, for example Egyptology (ibid.), which had not yet embraced digital research cultures and still published research predominantly in print-based volumes or image-based .pdf files, building DIY corpora remained out of reach.
Beyond books and podcasts through linking and crowd-sourcing
While presenting on the power of linked resources within the FLAX collections, and pushing these outward to wider stakeholder communities through TOETOE, I came across another rapid-innovation JISC-funded OER project at the Beyond Books conference at Oxford. The SPINDLE project, also based at the Learning Technologies Group Oxford, has been exploring linguistic uses for Oxford's OpenSpires podcasts, with work based on open-source automatic transcription tools. Automatic transcription is often accompanied by a high rate of inaccuracy, so SPINDLE has been looking at crowd-sourcing web interfaces that would enable English language learners to listen to the podcasts and correct the automatic transcription errors as a language-learning crowd-sourcing task.
Automatic keyword generation was also carried out in the SPINDLE project on OpenSpires podcasts, yielding far more accurate results. These keyword lists, which can be assigned as metadata tags in digital repositories and channels like iTunesU, offer further enhancement by making the podcasts more discoverable. Automatically generated keyword lists can also be used for pedagogical purposes, for example in the pre-teaching of vocabulary. The TED500 corpus by Guy Aston, which I also came across at TaLC10, is based on the TED Talks (ideas worth spreading), which have likewise been released under creative commons licences and transcribed through crowd-sourcing.
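SPINDLE's own pipeline is not detailed here, but automatic keyword generation of this kind is commonly done by comparing word frequencies in a transcript against a large reference corpus: words that are far more frequent in the transcript than in general English surface as keywords. A minimal sketch, with invented reference frequencies standing in for a resource like the BNC, might look like this:

```python
from collections import Counter

# Invented per-million-word frequencies standing in for a general-English
# reference corpus; a real pipeline would draw these from something like the BNC.
REFERENCE_FREQ = {
    "the": 60000, "of": 30000, "and": 25000, "is": 10000,
    "talk": 400, "subject": 300, "drives": 200, "economy": 120,
    "reshapes": 100, "migration": 40, "globalisation": 15,
}

def keywords(transcript, top_n=3):
    """Rank words by how much more frequent they are in the transcript
    than in the reference corpus (a simple keyness ratio)."""
    words = transcript.lower().split()
    counts = Counter(words)
    total = len(words)
    def keyness(word):
        observed = counts[word] / total * 1_000_000  # scale to per-million
        expected = REFERENCE_FREQ.get(word, 1)       # crude smoothing for unseen words
        return observed / expected
    return sorted(counts, key=keyness, reverse=True)[:top_n]

talk = ("globalisation drives migration and migration reshapes the economy "
        "the economy of globalisation is the subject of the talk")
print(keywords(talk))
# → ['globalisation', 'migration', 'economy']
```

Content words like "globalisation" rise to the top while high-frequency function words sink, which is why keyword lists like these are useful both as metadata tags and for pre-teaching vocabulary; production systems refine this with proper statistical measures such as log-likelihood.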
The potential for open linguistic content to be reused, re-purposed and redistributed by third parties globally, provided that they are used in non-commercial ways and are attributed to their creators, offers new and exciting opportunities for corpus developers as well as educational practitioners interested in OER for language learning and teaching.
Anthony, L. (n.d.). Laurence Anthony’s Website: AntConc.
Berglund-Prytz, Y. (2009). Text Analysis by Computer: Using Free Online Resources to Explore Academic Writing. Writing and Pedagogy, 1(2): 279–302.
British National Corpus, version 3 (BNC XML Edition). 2007. Distributed by Oxford University Computing Services on behalf of the BNC Consortium.
Charles, M. (2012). ‘Proper vocabulary and juicy collocations’: EAP students evaluate do-it-yourself corpus-building. English for Specific Purposes, 31: 93-102.
Lexical Analysis Software & Oxford University Press (1996-2012). Wordsmith Tools.
Hoffmann, S., Evert, S., Smith, N., Lee, D. & Berglund Prytz, Y. (2008). Corpus Linguistics with BNCweb – a Practical Guide. Frankfurt am Main: Peter Lang.
Previously, I left off with reflections from the 2012 IATEFL conference and exhibition in Glasgow. Wandering through the exhibition hall, crammed with vendor-driven English language resources for sale from the usual suspects (big-brand publishers), the analogy of the greatest hits came to mind with respect to EFL / ESL and EAP materials development and publishing. But at this same IATEFL event there was also a lot of co-channel interference feeding in from the world of self-publishing, reflecting how open digital scholarship has become mainstream practice in Teaching English as a Foreign Language (TEFL), also known as Teaching English as a Second Language (TESL) in North America. The launch at IATEFL of the round initiative, bridging the gap between ELT blogging and book-making with its emphasis on teachers as publishers, is but one example.
Crosstalk in ELT materials development and publishing
Let’s take a closer look at the crosstalk happening within the world of ELT materials development and publishing, where messages are being transmitted simultaneously from radio 1 and radio 2 type stations. Across the wider ELT world, TEFL / TESL has embraced Web 2.0 far more readily than EAP (but there are interesting signs of open online life emerging from some EAP practitioners, which I will highlight in the last section of this blog).
Within TEFL, we can observe more in the way of collaboration between open and proprietary publishing practices. English360, also present at IATEFL 2012, combines proprietary content from Cambridge University Press with teachers' lesson plans, along with tools for creating custom-made pay-for online English language courses. Across the ELT resources landscape, open resources and practices proliferate, including: free ELT magazines and journals; blogs and commentary-led discussions; micro-blogging via Twitter feeds and tweetchat sessions; instructional and training videos via YouTube and iTunesU (both proprietary channels that hold a lot of OER); and online communities with lesson-plan resource banks. These and many more open educational practices (OEP) are the norm in TEFL / TESL. And let's not forget Russell Stannard's Teacher Training Videos website of free resources for navigating web-based language tools and projects, drawing on his service as the Web Watcher at English Teaching Professional for well over a decade now.
The broken record in ELT publishing
Yet, both the TEFL / TESL and EAP markets are still well and truly saturated with the glossy print-based textbook format, stretching to the CD-ROM and mostly password-protected online resource formats. The greatest hits get played over and over again and the needle continues to get stuck in many places.
Exactly why does the closed textbook format concern me so much? It's really an issue of granularity, or size, which leads to further issues of flexibility, specificity and currency. As we all know, there are only so many target language samples and task types that you can pack into a print-based textbook. Beyond the trendy conversation-based topics, what are sometimes useful and transferable are the approaches that make up the pedagogy contained therein. Unlocking these approaches and linking to wider, more relevant and authentic language resources is key. We can see this approach to linked resources development taken by the web-based FLAX and WordandPhrase corpus-based projects. Publishers are aware of the limitations of the textbook format, but they are also trying to reach a large consumer base to boost their sales, so it remains in their best interests to keep resources generic. Think of all the academic English writing books out there, many of which claim to be based on current research for meeting your teaching and learning needs for academic English writing across the disciplines, but turn out to be more of the same topic-based how-to skills books working within the same essayist writing tradition.
The open textbook movement brings a new type of textbook to the world of education. One that can be produced at a fraction of the cost and one that can be tailored, linked to external resources, changed and updated whenever the pedagogical needs arise.
The argument in favour of textbooks in ELT has always been one for providing structure to the teaching and learning sequence of a particular syllabus or course. Locked-down proprietary textbook, CD-ROM and online resource formats are not only expensive but inflexible, and they force teachers into problematic practices. Despite trying to point out the perils of plagiarism to our students, as language teachers we are supplementing textbooks with texts, images and audio-visual material from wherever we can beg, borrow and steal them. Of course we do this for principled pedagogical reasons, and if we don’t plan on sharing these teaching materials beyond classroom and password-protected VLE walls we’re probably OK, right?
I’ve seen many a lesson handout or in-house course pack for language teaching that includes many third party texts and images which are duly referenced. Whether the teacher/materials developer puts the small ‘c’ in the circle or not, marking this handout or course pack as copyrighted, the default license is one of copyright to the institution where that practitioner works. And, this is where the problem lies. The handout or course pack is potentially in breach of the copyright of any third party materials used therein, unless the teacher/materials developer has gained clearance from the copyright holders or unless those third party materials are openly licensed as OER for re-mixing. Good practice with materials development and licensing will ensure that valuable resources created by teachers can be legitimately shared across learning and teaching communities. You can do this through open publishing technologies and/or in collaboration with publishers.
A deficit in corpus-based resources training
Good corpus-derived textbooks from leading publishing houses do exist. Finally, the teaching of spoken grammar gets the nod with The Handbook of Spoken Grammar textbook by Delta Publishing. But, and this is a big but, do these textbooks go far enough to address the current deficit in teacher and learner training with corpus-based tools and resources? I expect the publishers would direct this question to the academic monographs, of which there are a fair few, on Data Driven Learning (DDL) and corpus linguistics. I have some on my bookshelf and there are many more in the library where I am a student/fellow, all cross-referenced to academic journal articles from research into corpus linguistics and DDL which I will be talking about more in the third section of this blog. But exactly how accessible are these resources – in terms of their cost, the academic language they are packaged in, the closed proprietary formats they are published in, and in relation to much of the subscription-only corpora and concordancing software their research is based on? It’s no wonder that training in corpus tools and resources is not part of mainstream English language teacher training. Of course, there are open exceptions that provide new models in corpus-based resources development and publishing practices and this is very much what the TOETOE project is trying to share with language education communities.
Corpus linguists are well aware that corpus-based resources and tools in language teaching and materials development haven’t taken off as a popular sport in mainstream language teaching and teacher training. This runs counter to the findings from the research, however, where the argument is that DDL has reached a level of maturity (Nesi & Gardner, 2012; Reppen, 2010; O’Keeffe et al., 2007; Biber, 2006). Similarly, many leading researchers in language and teaching corpora (too many to cite!) have been baffled by the chasm between the research into DDL and the majority of mainstream ELT materials on the market, which continue to ignore the evidence about actual language usage from corpus-based research studies. Once again, this comes back to the issue of specific versus generic language materials and the limitations of developing restricted resource formats.
Gangnam style corpus-based resources development
So what’s it going to take for corpus-based resources to take off Gangnam style in mainstream language teaching and teacher training? And, how are we going to make these resources cooler and more accessible so as to stop language teaching practitioners from giving them a bad rap? More and more corpus-based tools and resources are being built with or re-purposed with open source technologies and platforms. We are now presented with more and more web-based channels for the dissemination of educational resources, offering the potential for massification and exciting new possibilities for achieving what has always eluded the language education and language corpora research community, namely the wide-scale adoption of corpus-based resources in language education.
I’ve actually been asked to take the word ‘corpus’ out of a workshop title by a conference organiser so as to attract more participants. If you’re interested in expressing your own experiences with using corpora in language teaching and would like to make suggestions for where you think data-driven learning should be heading you can complete Chris Tribble’s on-going online survey on DDL here.
Radio, what’s new? Someone still loves you (corpus-based resources)…
Publishers constantly need ideas for and examples of good educational resources. No great surprises there. I would like to propose that OER and OEP are a great way to get noticed by publishers and to start working with them. At the steering committee meeting with the JISC-funded PublishOER project members at Newcastle University in the UK in early September, we also had representatives from Elsevier, RightsCom, the Royal Veterinary College (check out their exciting WikiVet OER project) and JISC Collections at the table. Elsevier, who have borne the brunt of the backlash against academic publishing from the Open Access movement, are trying to open up to the fast-changing landscape of open practices in publishing. PublishOER are creating new mechanisms, including a permissions request system, to allow teachers and academics to use copyrighted resources in OER. These OER will include links and recommendations leading back to the publishers’ copyrighted resources as a mechanism for promoting them. Publishers are also interested in using OER developed by teachers and academics that are well designed and well received by students. Re-mixable OER offer great business opportunities for publishers as well as great dissemination opportunities for DDL researchers and practitioners, enabling effective corpus-based ELT resources to reach broader audiences.
Sustainability is an important issue with any project, resource, event or community. How many times have we seen school textbook sets sit unused on shelves, or heard of government-funded project resources that go unused, perhaps due to a lack of discoverability? Building new and useful resources online does not necessarily mean that teachers and learners will come in droves to find and use them, even if they are free. David Duebelbeiss of EFL Classroom 2.0 is currently exploring new business models for sharing and selling ELT resources. One example is the sale of lesson plans in a can, which were once free and now sell for $19.95, a “once and forever payment”. Some teachers can even strike it rich, as reported in this Businessweek article about a kindergarten teacher who sold her popular lesson plans through the TeachersPayTeachers initiative.
Transaction costs in materials development don’t only include the cost of the tools and resources that enable materials development, they also include the cost in terms of time spent on developing resources and marketing them. Open education also points to the unnecessary cost in duplicating the same educational resources over and over again because they haven’t been designed and licensed openly for sharing and re-mixing. Putting your resources in the right places, in more than one, and working with those that understand new markets, new technologies and new business models, including open education practitioners and publishers, are all ways forward to ensure a return on investment with materials development.
Hopefully, by providing new frequencies for practitioners to tune into for how to create resources from both open and proprietary resources a new mixed economy (as the PublishOER crowd like to refer to it) will be realised.
A matter of scale in open and distance education
Let’s not forget those working in ELT around the world, many of whom are volunteers, who along with their students simply cannot afford the cost of proprietary and subscription-only educational resources, let alone the investment and infrastructure for physical classrooms and schools. Issues around technology and ELT resources and practices in developing countries did surface at IATEFL 2012, but awareness of the more pressing issues may not be filtering through effectively to well-resourced ELT practitioners and the institutions that employ them. ELT is still fixated on classroom-based teaching resources and practices.
The Hornby Educational Trust, in collaboration with the British Council (a registered charity), has been offering scholarships to English language teachers working in under-resourced communities since 1970. I attended a session given by the Hornby scholars at IATEFL 2012, and although I was impressed by the enthusiasm and range of expertise of those who had been selected for scholarships, reporting on ELT interventions they had devised in their local contexts, I couldn’t help but wonder about the scale of the challenges we currently face in education globally. How are we going to provide education opportunities for the additional 100 million learners currently seeking access to the formal post-secondary sector (UNESCO, 2008)? In Sub-Saharan Africa, more than half of all children will not have the privilege of a senior high school education (ibid.). What open and distance education teaches us is that there are just not enough teachers/educators out there. Nor will the conventional industrial model of educational delivery be able to meet this demand.
As DDL researchers and resource developers who are looking for ways to make our research and practice more widely adopted in language teaching and learning globally, wouldn’t we also want to be thinking about where the real educational needs are and how we might be reaching under-resourced communities with open corpus-based educational resources for uses in EFL / ESL and EAP among other target languages? First of all, we would need to devote more attention to unpacking corpus-based resources so that they are more accessible to the non-expert user, and we would need to find more ways of making these resources more discoverable.
In interviews released as OER on YouTube by DigitaLang with leading TEFLers at IATEFL 2012, I was able to catch up on opinions around the use of technology in ELT. Nik Peachey corrected the widely held misconception about the digital divide for uses of technology in developing countries, pointing to the adoption of mobile and distance education rather than the importation of costly print-based published materials with first-world content and concerns that are often inappropriate for developing-world contexts. You can view his interview here:
Thinking beyond classroom-based practice
Scott Thornbury, writer of the A-Z of ELT blog – another influential and popular discussion site covering the classic hits in ELT for those both new and old to the field – also praised the Hornby scholars and gave his views on technology in ELT in a further IATEFL 2012 DigitaLang interview. He talks about the ‘human factor’ as something that occurs in classroom-based language teaching. In order to nurture this human factor, he recommends that technology be kept for uses outside the classroom, or at most for online teacher education. Open and distance education practitioners and researchers would agree that well-resourced face-to-face instruction yields high educational returns, as in the case of the Hornby scholarships, but they would also argue that this is not a scalable model for meeting the needs of the many who still lack access to formal post-secondary education. What is more, the human factor as evidenced in online collaborative learning is as well documented in the research from open and distance education as it is in traditional technology-enhanced classroom-based teaching.
“Access to reliable and affordable internet connectivity poses unique challenges in the developing world. That said, I believe it possible to design open courses which use a mix of conventional print-based materials for “high-bandwidth” data and mobile telephony for “low-bandwidth” peer-to-peer interactions. So for example, the OERu delivery model will be able to produce print-based study materials and it would be possible to automatically generate CD-ROM images of the rich media (videos / audio) contained in the course for offline viewing. We already have the capability to generate collections of OERu course materials authored in WikiEducator to produce print-based equivalents which could be reproduced and distributed locally. The printed document provides footnotes for all the web-links in the materials which OERu learners could investigate when visiting an Internet access point. OERu courses integrate microblogging for peer-to-peer interactions and we produce a timeline of all contributions via discussion forums, blogs etc. The bandwidth requirements for these kind of interactions are relatively low which address to some extent the cost of connectivity.”
Biber, D. (2006). University language: a corpus-based study of spoken and written registers. Amsterdam: John Benjamins.
Nesi, H., Gardner, S., Thompson, P. & Wickens, P. (2007). The British Academic Written English (BAWE) corpus. Developed at the Universities of Warwick, Reading and Oxford Brookes under the directorship of Hilary Nesi and Sheena Gardner (Centre for Applied Linguistics, Warwick), Paul Thompson (Department of Applied Linguistics, Reading) and Paul Wickens (Westminster Institute of Education, Oxford Brookes), with funding from the ESRC (RES-000-23-0800).
Nesi, H. and Gardner, S. (2012). Genres across the Disciplines: Student writing in higher education. Cambridge: Cambridge University Press.
O’Keeffe, A., McCarthy, M., & Carter R. (2007). From Corpus to Classroom: language use and language teaching. Cambridge: Cambridge University Press.
Reppen, R. (2010). Using Corpora in the Language Classroom. Cambridge: Cambridge University Press.
Original, in-house and live, this station brings us what’s new in the world of OER for corpus-based language resources.
Kicking things off in late March with Clare Carr from Durham, we co-presented an OER for EAP corpus-based teacher and learner training cascade project at the EuroCALL CMC & Teacher Education Annual Workshop in Bologna, Italy. This was very much a flipped conference, whereby draft presentation papers were sent out to be read by participants in advance, and the focus at the physical event was on discussion rather than presentation. Russell Stannard of Teacher Training Videos (TTV) was the keynote speaker at this conference, and I have been developing some training resources for the FLAX open-source corpus collections which will go live on TTV soon. New collections in FLAX have opened up the BAWE corpus and have linked it to the BNC, a Google-derived n-gram corpus, and Wikimedia resources, namely Wikipedia and Wiktionary. These collections in FLAX show what’s cutting edge in the developer world of open corpus-based resources for language learning and teaching.
Focusing on linked resources: which academic vocabulary list?
In a later post, I will be looking at Mark Davies’ new work with Academic Vocabulary Lists, based on a 110-million-word academic sub-corpus of the Corpus of Contemporary American English (COCA) – moving away from the Academic Word List (AWL) by Coxhead (2000), based on a 3.5-million-word corpus – and his innovative web tools and collections built on COCA. Once again, Davies’ Word and Phrase project website at Brigham Young University contains a bundle of powerfully linked resources, including a collocational thesaurus which links to other leading research resources such as the on-going lexical database project at Princeton, WordNet.
The open approach to developing non-commercial learning and teaching corpus-based resources in FLAX also shows the commitment to OER at OUCS (including the Oxford Text Archive), where the BAWE and the BNC research corpora are both managed. Click on the image below to visit the BAWE collections in FLAX.
Open eBooks for language learning and teaching
Learning Through Sharing: Open Resources, Open Practices, Open Communication, was the theme of the EuroCALL conference and to follow things up the organisers have released a call for OER in languages for the creation of an open eBook on the same theme. The book will be “a collection of case studies providing practical suggestions for the incorporation of Open Educational Resources (OER) and Practices (OEP), and Open Communication principles to the language classroom and to the initial and continuing development of language teachers.” This open-access e-Book, aimed at practitioners in secondary and tertiary education, will be freely available for download. If you’re interested in submitting a proposal to contribute to this electronic volume, please send in a case study proposal (maximum 500 words) by 15 October 2012 to the co-editors of the publication, Ana Beaven (University of Bologna, Italy), Anna Comas-Quinn (Open University, UK) and Barbara Sawhill (Oberlin College, USA).
MOOC on Open Translation tools and practices
Another learning event which I’ve just picked up from EuroCALL is a pilot Massive Open Online Course in open translation practices being run by the Open University in the UK from 15 October to 7 December 2012 (8 weeks), with the accompanying course website opening on 10 October 2012. Visit the “Get involved” tab on the following site: http://www.ot12.org/. “Open translation practices rely on crowd sourcing, and are used for translating open resources such as TED talks and Wikipedia articles, and also in global blogging and citizen media projects such as Global Voices. There are many tools to support Open Translation practices, from Google translation tools to online dictionaries like Wordreference, or translation workflow tools like Transifex.” Some of these tools and practices will be explored in the OT12 MOOC.
Bringing open corpus-based projects to the Open Education community
On the back of the Cambridge 2012 conference, Innovation and Impact – Openly Collaborating to Enhance Education, held in April, I’ve been working on another eBook chapter on open corpus-based resources which will be launched very soon at the Open Education conference in Vancouver. The Cambridge 2012 event was jointly hosted in Cambridge, England by the OpenCourseWare Consortium (OCWC) and SCORE. Presenting with Terri Edwards from Durham, we covered EAP student and teacher perceptions of training with open corpus-based resources from three projects: FLAX, the Lextutor and AntConc. These three projects vary in terms of openness and the type of resources they offer. In future posts I will be looking at their work and the communities that form around their resources in more depth. The following video from the conference captures our presentation and the ensuing discussion with a non-specialist audience curious to know how open corpus-based resources can help with the open education vision. Embedding these tools and resources into online and distance education, to support the growing number of learners worldwide who wish to access higher education – where the OER and most published research are in English – opens a whole new world of possibilities for open corpus-based resources and the EAP practitioners working in this area.
A further video from a panel discussion which I contributed to – an OER kaleidoscope for languages – looks at three further open language resources projects that are currently underway and building momentum here in the UK: OpenLives, LORO and the CommunityCafe. References to other established OER projects for languages and the humanities, including LanguageBox and the HumBox, are also made in this talk.
A world declaration for OER
The World OER congress in June at the UNESCO headquarters in Paris marked ten years since the coining of the term OER in 2002 along with the formal adoption of an OER declaration (click on the image to see the declaration). I’ve included the following quotation from the OER declaration to provide a backdrop to this growing open education movement as it applies to language teaching and learning, highlighting that attribution for original work is commonplace with creative commons licensing.
Emphasizing that the term Open Educational Resources (OER) was coined at UNESCO’s 2002 Forum on OpenCourseWare and designates “teaching, learning and research materials in any medium, digital or otherwise, that reside in the public domain or have been released under an open license that permits no-cost access, use, adaptation and redistribution by others with no or limited restrictions. Open licensing is built within the existing framework of intellectual property rights as defined by relevant international conventions and respects the authorship of the work”.
Wikimedia – why not?
Earlier in September, I volunteered to present at the EduWiki conference in Leicester, hosted by the Wikimedia UK chapter. Most people are familiar with Wikipedia, the sixth most visited website in the world. It is, however, just one of many projects managed by the Wikimedia Foundation, alongside sister projects such as Wikiversity and Wiktionary.
I will also be blogging soon about widely held misconceptions around uses of Wikipedia in EAP and EFL / ESL, while exploring its potential in writing instruction with reference to some very exciting education projects using Wikipedia around the world. The types of texts that make up Wikipedia, alongside many academics’ realisation that they need to reach wider audiences with their work through more accessible modes of writing, are all issues I will be commenting on in this blog in the very near future.
Presenting the work the FLAX team have done with text mining, incorporating David Milne’s Wikipedia mining tool, the potential of Wikipedia as an open corpus resource in language learning and teaching is evident. I was demonstrating how this Wikipedia corpus has been linked to other research corpora in FLAX, namely the BNC and the BAWE, for the development of corpus-based OER for EFL / ESL and EAP. And, let’s not forget that it’s all for free!
The open approach to corpus resources development
There is no reason why the open approach taken by FLAX cannot be extended to build open corpus-based collections for learning and teaching other modern languages, linking different language versions of Wikipedia to relevant research corpora and resources in the target language. In particular, the functionality in the FLAX collections that enables you to compare how language is used differently across a range of corpora, further supported by additional resources such as Wiktionary and Roget’s Thesaurus, makes for a very powerful language resource. Crowd-sourcing corpus resources through open research and education practices, and through the development of open infrastructure for managing and making these resources available, is not as far off in the future as we might think. The Common Language Resources and Technology Infrastructure (CLARIN) mission in Europe is a leading success story in the direction currently being taken with corpus-based resources (read more about the recent workshop for CLARIN-D held in Leipzig, Germany).
These past few months I’ve been tuning into a lot of different practitioner events and discussions across a range of educational communities which I feel are of relevance to English language education where uses for corpus-based resources are concerned. There’s something very distinct about the way these different communities are coming together and in the way they are sharing their ideas and outputs. In this post, I will liken their behaviour to different types of radio station broadcast, highlighting differences in communication style and the types of audience (and audience participation) they tend to attract.
I’ve also been re-setting my residential as well as my work stations. No longer at Durham University’s English Language Centre, I’m now London-based and have just set off on a whirlwind adventure for further open educational resources (OER) development and dissemination work with collaborators and stakeholders in a variety of locations around the world. TOETOE is going international and is now being hosted by Oxford University Computing Services (OUCS) in conjunction with the Higher Education Academy (HEA) and the Joint Information Systems Committee (JISC) as part of the UK government-funded OER International programme.
I will also be spreading the word about the newly formed Open Education Special Interest Group (OESIG), the Flexible Language Acquisition (FLAX) open corpus-based language resources project at the University of Waikato, and select research corpora, including the British National Corpus (BNC) and the British Academic Written English (BAWE) corpus, both managed by OUCS, which have been prised open by FLAX and TOETOE for uses in English as a Foreign Language (EFL) – also referred to as English as a Second Language (ESL) in North America – and English for Academic Purposes (EAP). Stay tuned to this blog in the coming months for more insights into open corpus-based English language resources and their uses in different teaching and learning contexts.
This post is what those in the blogging business refer to as a ‘cornerstone’ post, as it includes many insights into the past few months of my teaching fellowship in OER with the Support Centre in Open Educational Resources (SCORE) at the Open University in the UK. Many posts within one, as it were. This post also provides a road map for taking my project work forward while identifying shorter blogging themes for the posts that will follow this one. This particular post will also act as the mother-ship TOETOE post from which subsequent satellite posts will be linked. Please use the red menu hyperlinks in the section below to dip in and out of the four main sections of this blog post series. I have chosen this more reflective style of writing through blogging so that my growing understandings in this area are more accessible to unanticipated readers who may stumble upon this blog and, hopefully, make comments to help me refine my work. Two more formal case studies on my TOETOE project to date will be coming out soon via the HEA and the JISC.
I have also made this hyperlinked post (in five sections) available as a .pdf on Slideshare.
Which station(s) are you listening to?
BBC Radio has been going since 1927. With audiences in the UK, four stations in particular are firm favourites: youth-oriented BBC Radio 1, featuring new and contemporary music; BBC Radio 2, with middle-of-the-road music for the more mature audience; high-culture and arts-oriented BBC Radio 3; and news and current affairs-oriented BBC Radio 4. Of course there are many more stations, but these four are very typical of those found around the world. What is more, I’ve selected these four very distinct stations as the basis for a metaphor about the way four very distinct educational practitioner communities are intersecting with corpus-based language teaching resources. This metaphor will draw on thought waves from the different practitioner communities and events discussed throughout this post.