The doge meme teaches us so much about language learning and how challenging it can be to accurately combine words and patterns when using another language. The FLAX language system teaches us so much about how we can avoid using dodgy language by employing powerful open-source language analysis tools and authentic language resources.
The FLAX (Flexible Language Acquisition) project has won the LinkedUp Vici Competition for tools and demos that use open or linked data for educational purposes. This is the post I wrote to accompany our project submission to the LinkedUp challenge.
FLAX is an open-source software system designed to automate the production and delivery of interactive digital language collections. Exercise material comes from digital libraries (language corpora, web data, open access publications, open educational resources) for a virtually endless supply of authentic language learning in context. With simple interface designs, FLAX has been designed so that non-expert users — language teachers, language learners, subject specialists, instructional design and e-learning support teams — can build their own language collections.
The FLAX software can be freely downloaded to build language collections with any text-based content and supporting audio-visual material, for both online and classroom use. FLAX uses the Greenstone suite of open-source multilingual software for building and distributing digital library collections, which can be published on the Internet or on CD-ROM. Issued under the terms of the GNU General Public License, Greenstone is produced by the New Zealand Digital Library Project at the University of Waikato, and developed and distributed in cooperation with UNESCO and the Human Info NGO.
REMIX WITH FLAX
At FLAX we understand that content and data vary in terms of licensing restrictions, depending on the publishing strategies adopted by institutions for the usage of their content and data. FLAX has, therefore, been designed to offer a flexible open-source suite of linguistic support options for enhancing such content and data across both open and closed platforms.
Featuring the Latest in Artificial Intelligence & Natural Language Processing Software Designs
Within the FLAX bag of tricks, we have the open-source Wikipedia Miner Toolkit, which links in related words, topics and definitions from Wikipedia and Wiktionary, as can be seen below in the Learning Collocations collection (click on the image to expand it and see the toolkit in action).
Wikipedia Mining Tool in FLAX Learning Collocations Collection – click on the image to expand and visit the collection
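To give a feel for the kind of glossing this linking provides, here is a minimal sketch that pulls back a short definition for a key term. It uses the public Wikipedia REST summary endpoint rather than the Wikipedia Miner Toolkit itself, and the term queried is only an example.

```python
# Fetch a one-paragraph gloss for a term from Wikipedia.
# Illustrative only: FLAX's Wikipedia Miner Toolkit works over a reformatted
# copy of Wikipedia, not this live REST endpoint.

import requests

def wikipedia_gloss(term, lang="en"):
    """Return a short summary of `term` from Wikipedia, or None if not found."""
    url = f"https://{lang}.wikipedia.org/api/rest_v1/page/summary/{term.replace(' ', '_')}"
    response = requests.get(url, timeout=10)
    if response.status_code != 200:
        return None
    return response.json().get("extract")


print(wikipedia_gloss("Collocation"))
```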
Featuring Open Data
The FLAX website hosts both completed collections and collections under on-going development with registered users. Current research and development on the FLAX Law Collections is based entirely on open resources selected by language teachers and legal English researchers, as shown in the table below. These collections demonstrate how users can build collections in FLAX according to their interests and needs.
Law Collections in FLAX
Type of Resource | Number and Source of Collection Resources
Open Access Law research articles | 40 articles (DOAJ – Directory of Open Access Journals, with Creative Commons licences that permit derivatives)
MOOC lecture transcripts and videos (streamed via YouTube and Vimeo) | 15 lectures (Oxford Law Faculty, Centre for Socio-Legal Studies and Department of Continuing Education)
PhD Law thesis writing | 50–70 EThOS theses (sections: abstracts, introductions, conclusions) held at the British Library (Open Access but not licensed as Creative Commons – permission for reuse granted by participating Higher Education Institutions)
Linking in a reformatted version of Wikipedia (English version), providing key terms and concepts as a powerful gloss resource for the Law Collections.
Linking in lexico-grammatical phrases from the British National Corpus (BNC) of 100 million words, the British Academic Written English corpus (BAWE) of 2500 pieces of assessed university student writing from across the disciplines, and the re-formatted Wikipedia corpus in English.
Linking in a reformatted Google n-gram corpus (English version) containing 380 million five-word sequences drawn from a vocabulary of 145,000 words.
FLAX Training Videos
Featuring Game-based Activities
Click on the image below to explore the different activities that can be applied to language collections in FLAX.
FLAX Apps for Android
We also have a suite of free game-based FLAX apps for Android devices. Now you can interact with the types of activities listed above while you’re learning on the move. Click on the FLAX app icon to the right to access and download the apps and enjoy!
FLAX Research & Development
To date, we have distributed the English Common Law and the Age of Globalization MOOC collections in FLAX to thousands of registered learners in over 100 countries – wow!
A collaborative investigation is underway between FLAX and the Open Educational Resources Research Hub (OERRH), whereby a cluster of revised OER research hypotheses is currently being employed to evaluate the impact of developing and using open language collections in FLAX with informal MOOC learners as well as with formal English language and translation students.
“…the attempt to cut out the middleman as far as possible and to give the learner direct access to the data” (Johns, 1991, p.30)
Importance is placed on empirical data when taking a corpus-informed, data-driven approach to language learning and teaching. Rather than relying on subjective conclusions about language based on an individual’s internalized perception of it, or on the influence of generic language education resources, empirical data enable language teachers and learners to reach objective conclusions about specific language usage based on corpus analyses. Tim Johns coined the term Data-Driven Learning (DDL) in 1991 with reference to the use of corpus data and the application of corpus-based practices in language learning and teaching (Johns, 1991). The practice of DDL in language education was appropriated from computer science, where language is treated as empirical data and where “every student is Sherlock Holmes”, investigating the uses of language to assist with their acquisition of the target language (Johns, 2002, p.108).
A review of the literature indicates that the practice of using corpora in language teaching and learning pre-dates the term DDL, with work carried out by Peter Roe at Aston University in 1969 (McEnery & Wilson, 1997, p.12). Johns is also credited with having come up with the term English for Academic Purposes (Hyland, 2006). Johns’ oft-quoted words about cutting out the middleman tell us more about his DDL vision for language learning: teacher intuitions about language were put aside in favor of powerful text analysis tools that would give learners direct access to some of the most extensive language corpora available – the same corpora that lexicographers draw on for making dictionaries – so that learners could discover for themselves how the target language is used across a variety of authentic communication contexts. As with many brilliant visions for impactful educational change, however, his also appears to have come before its time.
This post will argue that the original middleman in Johns’ DDL metaphor took on new forms beyond that of teachers getting in the way of learners having direct access to language as data. An argument will be put forward that the applied corpus linguistics research and development community introduced new and additional barriers to the widespread adoption of DDL in mainstream language education. Albeit well intentioned, and no doubt shaped by restrictions in research and development practices along the way, new middlemen were paradoxically perpetuated by the proponents of DDL, making theirs an exclusive rather than a popular sport with language learners and teachers (Tribble, 2012). And, with each new wave of research and development in applied corpus linguistics, new and puzzling restrictions confronted the language teaching and learning community.

The Middleman comic – first issue cover via Wikipedia
The middleman in DDL has presented himself as a sophisticated corpus authority in the form of research and development outputs, including text analysis software designed by, and for, the expert corpus user, with complex options for search refinement that befuddle the non-expert corpus user, namely language teachers and learners. Replicating these research methods to obtain the same or similar results for language teaching and learning has often depended on securing access to exactly the same software, and on the know-how for manipulating and querying linguistic data successfully.
Which language are you speaking?
He has been known to speak in programming languages, with his interfaces often requiring specialist trainers to communicate even his simplest functions. Even his most widely known interface, the KWIC (Key Word In Context) display of linguistic data, with strings of search terms embedded in truncated snippets of language context, remains foreign-looking to the mostly uninitiated in language teaching and learning. In many cases, he has not come cheap either, and costly subscriptions to, and upgrades of, his proprietary software have been the norm, especially in the earlier days.
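For readers who have never met one, a KWIC display simply centres every occurrence of a search term within a fixed window of surrounding context. The minimal Python sketch below illustrates the idea only; it is not the implementation behind any particular concordancer, and the sample sentences are invented.

```python
# Minimal KWIC (Key Word In Context) display: each hit for the search term
# is shown centred between a few words of left and right context.

def kwic(text, keyword, window=4):
    """Return simple concordance lines for `keyword` with `window` words of context."""
    tokens = text.split()
    lines = []
    for i, token in enumerate(tokens):
        if token.lower().strip(".,;:!?") == keyword.lower():
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append(f"{left:>35}  [{token}]  {right}")
    return lines


sample = ("The court held that the defendant owed a duty of care. "
          "A duty to warn arises where the risk is foreseeable, and the "
          "duty is discharged by taking reasonable steps.")

for line in kwic(sample, "duty"):
    print(line)
```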
In particular, with reference to English Language Teaching (ELT), he has criticized many widely used ELT course book publications and their language offerings for ignoring his research findings, based on evidence for how the English language is actually used across different contexts. In response, a few ELT course book publishers have clamored around him to help get his words out, for a price, but in so doing have rendered his corpus analyses invisible, in turn creating even more of a dependency on course books rather than stimulating autonomy among language teachers and learners in the use of corpora and text analysis tools for DDL. And, because publishers were primarily confining him to the course book and sometimes CD-ROM format, only so many language examples from the target corpora could fit between the covers of a book, and only the most frequent language items made it onto the compact disc.
The Oxford Collocations Dictionary for Students of English (2nd edition, 2009, Oxford University Press), based on the British National Corpus (BNC), is one example, where high-frequency collocations for very basic words like any and new predominate and where licensing restrictions permit only one computer installation per CD-ROM. Further restrictions compound the openness issue with the use of closed corpora in leading corpus-derived ELT books such as the Cambridge University Press (CUP) publication From Corpus to Classroom (O’Keeffe, McCarthy & Carter, 2007), which might have been more aptly entitled From Corpus to Book: it draws heavily on the closed Cambridge and Nottingham Corpus of Discourse in English (CANCODE) from Cambridge University Press and Nottingham University, and it recommends the proprietary concordancing programs WordSmith Tools and MonoConc Pro, thereby rendering any replication of its corpus analyses inaccessible to its readers.
Mainstream language teacher training bodies continue to sidestep the DDL middleman in the development of their core training curricula (for example, the Cambridge ESOL exams) due to the problems he poses with accessibility in terms of cost and complexity. Instead, English language teacher training remains steadily focused on how to select and exploit corpus-derived dictionaries, training learners to identify, for example, definitions, derivatives, parts of speech, frequency, collocations and sample sentences. In the same way that corpus-derived course books do not render corpus analyses transparent to their users, training in dictionary use does not bring teachers and their learners any closer to the corpora those dictionaries are derived from.
Cambridge English Corpus
Michael McCarthy presented ‘Corpora and the advanced level: problems and prospects’ at IATEFL Liverpool 2013. One of the key take-away messages from his talk was that learners of more advanced English receive little in the way of return on investment once the highest-frequency items of English vocabulary have been acquired (he referred to the top 2,000 words from the first wordlist of the British National Corpus, which make up about 80% of standard English use). Each subsequent wordlist of 2,000 words adds considerably less coverage, so the time and money you might spend on signing up for yet more general English language classes may no longer be affordable or worthwhile. This has particular implications for learning English for Specific Purposes (ESP), including English for Academic Purposes (EAP), which many would argue is always concerned with developing specific academic English language knowledge and usage within specific academic discourse communities.
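The diminishing-returns point comes down to simple arithmetic. The sketch below computes the cumulative text coverage bought by each successive frequency band; the tiny word list and band size are hypothetical stand-ins, purely to show the calculation, whereas a real run would use a full BNC-style frequency list and bands of 2,000 words.

```python
# Cumulative corpus coverage per frequency band, given a ranked (word, count) list.

def band_coverage(freq_list, band_size):
    """Yield (band number, cumulative proportion of the corpus covered)."""
    total = sum(count for _, count in freq_list)
    running = 0
    for start in range(0, len(freq_list), band_size):
        running += sum(count for _, count in freq_list[start:start + band_size])
        yield start // band_size + 1, running / total


# Hypothetical ranked list: (word, corpus frequency), highest frequency first.
freq_list = [("the", 6_000_000), ("of", 3_000_000), ("and", 2_600_000),
             ("to", 2_500_000), ("a", 2_200_000), ("in", 1_900_000)]

for band, coverage in band_coverage(freq_list, band_size=2):
    print(f"after band {band}: {coverage:.0%} of the running text accounted for")
```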
Catching Michael McCarthy on the way out of the presentation theatre, I found he kindly agreed to walk and talk while rushing to catch his train out of Liverpool. Would the Cambridge English Corpus be made available anytime soon for non-commercial educational research and materials development purposes, I asked? I hastened to add the possibilities, and the real-world need, for promoting corpus-based resources and practices in open and distance online education as well as in traditional classroom-based language education. He agreed that the technology had become a lot better for finally realising DDL within mainstream language teaching and learning and within materials development. Taking concordance line printouts into ELT classrooms had never really taken off, in his estimation, and I would have to agree with him on that point. He indicated, however, that it would be unlikely for the corpus to become openly available anytime in the foreseeable future, owing to the large amount of private investment in its development, with access restricted to the stakeholders participating in the project.
But what would the real risk be in opening up this corpus to further educational research and development for non-commercial purposes, with derivative resources made freely available online? Wouldn’t this give the corpus added sustainability, with new lives and further opportunities for exploitation that could advance our shared understanding of how English works across different contexts, using current and high-quality examples of language in context? More importantly, wouldn’t this give more software developers the chance to build more interfaces using the latest technology, and more ELT materials developers, including language teachers, the chance to show different derivative resource possibilities for effectively using the corpus in language teaching and learning?
A non-commercial, educational-purposes-only stipulation could be used in all of the above resource development scenarios. Indeed, these could all be linked back to the Cambridge English Corpus project website as evidence of the wider social and educational impact resulting from the initial investment. This is what will be happening with most publicly funded research projects in the UK following recommendations from the Finch report, which come into effect in April 2014. It follows that Open Educational Resources (OER) and Open Educational Practices (OEP) will allow expertise to be readily available once Open Access research publishing is compulsory for all RCUK and EPSRC funding grants, for the development of research-driven open teaching and learning derivatives. Privately funded research projects like this one from CUP could also be leading in this area of open access.
Corpora such as the British National Corpus (BNC), the British Academic Written English (BAWE) corpus, Wikipedia and Google linguistic data as a corpus are some of the many valuable resources that have been developed into language learning and teaching resources that are openly available on the web. In the following sections, I will refer to research and development outputs from leading applied corpus linguistics researchers who have been making their wares freely available, if not openly re-purposeable by other developers, as in the example of the FLAX language project’s Open Source Software (OSS). And, hopefully, these corpus-based resources are getting easier to access for the non-expert corpus user.
“For the time being” CUP are providing free access to the English Vocabulary Profile website of resources based on the Cambridge English Corpus (formerly known as the Cambridge International Corpus), “the British National Corpus and the Cambridge Learner Corpus, together with other sources, including the Cambridge ESOL vocabulary lists and classroom materials.” Below is a training video resource from CUP available on YouTube, which highlights some of the uses for these freely available resources in language learning, teaching and materials development. This is a very useful step for CUP to be taking with making corpus-based resources and practices more accessible to the mainstream ELT community.
Open practices in applied corpus linguistics
Enter those applied corpus linguistics researchers and developers who have made some if not all of their text analysis tools and Part-Of-Speech-tagged corpora freely accessible via the Web to anyone who is interested in exploring how to use them in their research, teaching or independent language learning. Well-known web-based projects include Tom Cobb’s resource-rich Lextutor site, Mark Davies’ BYU-BNC (Brigham Young University – British National Corpus) concordancer interface and the Corpus of Contemporary American English (COCA) with WordandPhrase (with WordandPhrase training videos resources on YouTube) for general English and English for Academic Purposes (EAP), Laurence Anthony’s AntConc concordancing freeware for Do-It-Yourself (DIY) corpus building (with AntConc training video resources on YouTube), and the Sketch Engine by Lexical Computing which offers some open resources for DDL. Open invitations from the Lextutor and AntConc project developers seeking input on the design, development and evaluation of existing and proposed project tools and resources are made by way of social networking sites, the Lextutor Facebook group and the AntConc Google groups discussion list. Responses usually come from a steady number of DDL ‘geeks’, however, namely those who have reached a level of competence and confidence with discussing the tools and resources therein. And, most of those actively participating in these social networking sites are also engaging in corpus-based research.
Data-Driven Learning for the masses?
My own presentation at IATEFL Liverpool was based on my most recent project with the University of Oxford IT Services, providing and promoting OSS interfaces from the FLAX language project to increase access to the BNC and BAWE corpora, both managed by Oxford. In addition, the same OSS developed by FLAX has been simplified with easy-to-use interfaces that enable language teachers to build their own open language collections for the web. Such collections, using OER from Oxford lecture podcasts that have been licensed as Creative Commons content, have also been demonstrated by the TOETOE International project (Fitzgerald, 2013).
The following two videos from the FLAX language collections show their OSS for using corpus-based resources in ELT that are accessible both in terms of simplicity and in terms of openness. The first training video demonstrates the Web as corpus and how this resource has been effectively mined and linked to the BNC for enhancement of both corpora for uses in DDL. The second training video demonstrates how to build your own Do-It-Yourself corpora using the FLAX OSS and Oxford OER. With open corpus-based resources the reality of DIY corpora is becoming increasingly possible in DDL research and teaching and learning practice (Charles, 2012; Fitzgerald, in press).
So, go ahead, and cut out the middleman in data-driven learning.
FLAX Web Collections (derived from Google linguistic data):
The Web Phrases and Web Collocations collections in FLAX are based on another extensive corpus of English derived from Google linguistic data. In particular, the Web Phrases collection draws on this large database of English from Google to help you identify problematic phrasing in your writing by fine-tuning the words that precede and follow the phrases you would like to use. You can then substitute any awkward phrasing with naturally occurring phrases from the collection to improve the structure and fluency of your writing.
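The idea behind this kind of phrase checking can be sketched very simply: look up candidate phrasings in a large table of n-gram frequencies and prefer those that are actually attested. The phrases and counts below are hypothetical stand-ins; FLAX queries its Google-derived n-gram collection rather than an in-memory dictionary like this.

```python
# Rank candidate phrasings by how often they occur in a reference n-gram table.

ngram_counts = {
    "carry out research": 180_000,   # hypothetical counts for illustration
    "make research": 2_300,
    "conduct an experiment": 95_000,
    "do an experiment": 41_000,
}

def rank_phrasings(candidates, counts):
    """Order candidate phrases from most to least attested in the reference data."""
    return sorted(candidates, key=lambda phrase: counts.get(phrase, 0), reverse=True)


print(rank_phrasings(["make research", "carry out research"], ngram_counts))
```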
FLAX Do-It-Yourself Podcast Corpora – Part One:
Learn how to build powerful open language collections through this training video demonstration. Featuring audio and video podcast corpora using the FLAX Language tools and open educational resources (OER) from the OpenSpires project at the University of Oxford and TED Talks.
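The video walks through FLAX’s own collection-building interface; purely to illustrate the underlying task, here is a minimal sketch of DIY corpus building from a folder of podcast transcripts. The folder name is a placeholder, and FLAX’s real pipeline does far more (collocation extraction, linking to reference corpora, activity generation) than this simple frequency count.

```python
# Build a ranked word-frequency list from all .txt transcripts in a folder.

from collections import Counter
from pathlib import Path
import re

def build_frequency_list(corpus_dir):
    """Count word forms across every .txt file in `corpus_dir`."""
    counts = Counter()
    for path in Path(corpus_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8").lower()
        counts.update(re.findall(r"[a-z]+(?:'[a-z]+)?", text))
    return counts


if __name__ == "__main__":
    for word, count in build_frequency_list("podcast_transcripts").most_common(20):
        print(f"{word}\t{count}")
```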
References

Charles, M. (2012). ‘Proper vocabulary and juicy collocations’: EAP students evaluate do-it-yourself corpus-building. English for Specific Purposes, 31: 93-102.
Davies, M. (1991-present). The Corpus of Contemporary American English (COCA). Retrieved from http://corpus.byu.edu/coca/
Fitzgerald, A. (2013). TOETOE International: FLAX Weaving with Oxford Open Educational Resources. Open Educational Resources International Case Study. Commissioned by the Higher Education Academy (HEA), United Kingdom. Retrieved from http://www.heacademy.ac.uk/projects/detail/oer/OER_int_006_Ox%282%29
Fitzgerald, A. (In Press). Openness in English for Academic Purposes. Open Educational Resources Case Study based at Durham University: Pedagogical development from OER practice. Commissioned by the Higher Education Academy (HEA) and the Joint Information Systems Committee (JISC), United Kingdom.
FLAX. (n.d.). The “Flexible Language Acquisition Project”. Retrieved from http://flax.nzdl.org/
Johns, T. (1991). From printout to handout: grammar and vocabulary teaching in the context of data-driven learning. In: T. Johns & P. King (Eds.), Classroom Concordancing. English Language Research Journal, 4: 27-45.
Johns, T. (2002). ‘Data-driven learning: the perpetual challenge.’ In: B. Kettemann & G. Marko (Eds.), Teaching and Learning by Doing Corpus Analysis. Amsterdam: Rodopi. 107-117.
Hyland, K. (2006). English for Academic Purposes: An Advanced Handbook. London: Routledge.
McEnery, T. & A. Wilson. (1997). Teaching and language corpora. ReCALL, 9 (1): 5-14.
O’Keeffe, A., McCarthy, M., & Carter R. (2007). From Corpus to Classroom: language use and language teaching. Cambridge: Cambridge University Press.
Oxford Collocations Dictionary for Students of English (2nd Edition). (2009). Oxford: Oxford University Press.
A lot of talk around defining current and trending practices in EAP can be tuned into via open as well as proprietary channels. In this section, I will refer to new-found open practices in EAP which are embracing Web 2.0 technologies against a backdrop of closed practices in EAP academic publishing and subscription-only EAP memberships. I will open up discussion around these different practices within EAP to sketch out common ground for where EAP could be heading with respect to global outreach.
Toward open practices in EAP
Recent months have seen a steady opening up of practices for sharing expertise and resources in EAP. The new EAP teaching blog based at Nottingham University, a discussion-based side-shoot of their new Masters programme in EAP teaching, uses the most widely adopted open-source blogging software, WordPress. Thanks to our friends in Canada, EAP tweetchat sessions are run on Twitter with the hashtag #EAPchat every first and third Monday of the month, bringing together EAP practitioners who wish to participate in global EAP discussions as well as suggest topics for upcoming sessions. An archived transcript page is made available at the end of each #EAPchat Twitter session.
Free webinars from Oxford University Press (OUP), the largest academic publishing house in the world, are also broadcasting talk on EAP to the world. Julie Moore, who has collaborated on the new Oxford EAP book series, has also contributed free webinars with OUP attended by EAP practitioners from around the world. A review of one of Julie’s webinars on academic grammar can be found on the OUP-sponsored ELT global blog. Wouldn’t it be great if more EAP practitioners opened up their practice in this way, suggesting areas of expertise in EAP that they would like to contribute and broadcast via webinars with OUP’s considerable market outreach?
The EAP community in the UK mainly gathers around BALEAP, with its Professional Issues Meetings (PIMs), accreditation scheme, biennial conference and lively email discussion list. There is a noticeable push-pull between open and closed EAP practices within BALEAP which I would like to bring into the open for discussion. Openness was built into the Durham PIM on the EAP Practitioner in June of this year, making it the first BALEAP event to have a Twitter hashtag, thanks to forward thinking from Steve Kirk. Since this PIM he has also been curating a useful EAP practitioner resources site with Scoop.it!
There does seem to be a willingness on the part of BALEAP members to experiment with new technologies so that their discussions around issues in EAP are openly available. However, the BALEAP email discussion list mentioned above is the only one of the half-dozen similarly JISC-hosted email discussion lists I belong to that is closed off behind the BALEAP membership subscription pay-wall. The others I subscribe to for free are all open, and discussion transcripts from their contributing members can be searched on the web through the JISC email archives. Keeping the email discussion list closed was a BALEAP executive committee decision, and I question whether it best reflects the current drive toward openness among BALEAP members who are interested in sharing their insights and expertise with those around the world for whom BALEAP membership is not an affordable option.
BALEAP recently added the strap-line ‘the global forum for EAP practitioners’ to its website. Formerly the British Association of Lecturers in EAP (hence the continuity from the acronym to the name BALEAP), some of its event and research outputs can be found on its website, but others can only be accessed via the subscription-only Journal of English for Academic Purposes (JEAP). And you can probably guess where I’m going here, with concerns around openness, or the lack thereof, with respect to being the global EAP practitioner forum…
Nonetheless, an invaluable EAP resource that BALEAP have put out onto the wild web is the EAP teacher competency framework. An EAP practitioner portfolio mentoring programme is currently in the pilot stages, and there is talk of matching EAP teaching competencies in BALEAP with the UK Professional Standards Framework (UKPSF) at the HEA. Once again, though, for non-UK and freelance EAP practitioners who do not work for UK higher education institutions that subscribe to the HEA, such an alignment of frameworks may not be suitable or relevant. That said, the essence of the UKPSF is useful, and perhaps with the current OER International programme at the HEA we will see ownership of the UKPSF go international. HEA accreditation as a UK body will remain a reality, however, so it will be interesting to see what the HEAL working party at BALEAP, which is collaborating with the HEA, will come up with in response to shaping the identity of BALEAP as the aspiring global forum for EAP practitioners.
Having recently formed a Web Resources Sub-Committee (WRSC) with other technologically and OER-oriented EAPers at BALEAP, we may yet see things open up. Below is the presentation that Ylva Berglund Prytz and I (both on the WRSC at BALEAP) gave on Openness in English for Specific Academic Purposes (ESAP) at the PIM in Sheffield in November 2011.
Elsevier is the publisher of JEAP, and experience shows that open access in academic publishing has come about through the pressure tactics of academic communities of practice lobbying for green and gold standard open access publications in their respective fields. Open Access Week – ‘set the default to open’ – is coming up again on October 22nd.
Moving to open access research publications all depends on the culture of the academic research community. It will take those EAP practitioners and researchers working in privileged and well-resourced institutions that can easily afford institutional subscriptions to memberships like BALEAP to seriously consider open access and the potential for global reach of research into EAP. It will also take those EAP practitioners who are working off their institutional radars, so to speak, and who are experimenting with Web 2.0 technologies to get their message and expertise out there for global interaction around issues in EAP practice and research. Something I picked up from Steve Kirk’s Scoop.it! account is a recent book setting an open trend in EAP publishing, Writing Programs Worldwide: Profiles of Academic Writing in Many Places which is published in a free digital online format as well as a pay-for print version. This echoes what publishers are doing with big names in more open fields such as the Bloomsbury Academic publication of The Digital Scholar by Martin Weller. Exciting times and opportunities lie ahead for EAP publishing.
English for Specific Academic Purposes with data driven learning resources
It seems to be no great coincidence that Tim Johns, who coined the term Data-Driven Learning (DDL) in the early 1990s (Johns, 1994), had also come up with the term English for Academic Purposes (EAP) in 1974 (Hyland, 2006). According to Chris Tribble’s preliminary results from the latest intake of his survey on DDL (announced in the TaLC closing keynote address), EAP practitioners still make up a high percentage of those who took the survey, indicating greater uptake of corpus-based resources and practices in EAP than in EFL / ESL, for example.
Open corpus-based tools and resources have the potential to equip and enable EAP practitioners to develop relevant ESAP materials. Awareness of, and training in, these open corpus-based resources will need to be shared across the EAP community, however, to ensure that we are crowd-sourcing our expertise and our resources in this area. Clicking on the image below will take you to a talk I gave at the Open University in the UK on addressing academic literacies with corpus-based OER. This was inspired by the Tribble DDL survey and the lead-up to the TaLC10 conference. It was an added bonus to have one of the BAWE corpus developer team members in the audience that day, and to receive positive feedback on how FLAX have opened up the BAWE in collaboration with TOETOE and the Learning Technologies Group at Oxford.
OU video presentation on Addressing Academic Literacies with open corpus-based resources
Over the course of this academic year, FLAX and TOETOE will continue to build on work around opening up research corpora like the BAWE and the BNC, managed by the Oxford Text Archive, for developing ESAP resources. We will also be engaging with various stakeholder groups through face-to-face workshops, online surveys and interviews to evaluate open corpus-based resources, and I will be sharing insights from this work on this blog.
One final word on OER and where corpus-based resources might play a significant role in making higher education more accessible to the estimated 100 million learners worldwide who currently qualify to study at university level but do not have the means to do so (UNESCO, 2008). Because English is the educational lingua franca, open educationalists are going to source support resources for academic English from the approaches and materials that are currently popular and openly available to re-use under creative commons licences. This throws up interesting issues around specificity in EAP for supporting learners with discipline-specific English.
A parallel universe in EAP materials development
Cartoon image referred to by Niko Pfund, USA president of OUP in podcast on Ebooks, Reading and Scholarship in a Digital Age
It would be an understatement to say that the academic publishing world is undergoing a radical transformation with the arrival of digital and open publishing formats which are democratising publishing as we know it. Niko Pfund, President of Oxford University Press (USA), discusses the ways in which technology affects reading, scholarship, publishing and even thinking in a presentation he gave at Oxford recently which you can access by clicking on the cartoon image above.
I learned a lot from this podcast, including OUP’s commitment since 2003 to publishing all research monographs in both digital and print formats. I also learned of their admiration for what Wikipedians have done to open up knowledge and publishing through human crowd-sourcing that utilises open technologies and platforms. A parallel drawn here, to something that was brought up repeatedly at the EduWiki conference, is how academic publishing houses like OUP are well placed to open up the disciplines in the same way as Wikipedia, by bringing the voices of the academy into the public sphere through more accessible means of communication than research publications, and by effectively linking this research to current world events to gain wider relevance and readership.
Pfund refers to messy, experimental times in academic publishing, with lots of new business models currently being explored for spearheading changes in publishing. OUP heavily subsidise and give away a lot of published resources, including ELT textbooks, to the developing world, but not yet under open licences (someone please correct me if I’m wrong here) that would allow practitioners working in under-resourced communities to re-mix and re-distribute these same resources.
OUCS and OUP are literally down the road from one another, a parallel universe as it were. The former is research, learning and teaching focused, with a strong commitment to public scholarship, and the latter is focused on exploring new practices and business models for delivering the best in academic publishing. Arguably, there is a lot of overlap that can be tapped into here for the collaborative development of open corpus-based resources and practices for the global ELT market.
In-house EAP materials development
EAP teachers have been developing in-house EAP materials, in response to the generic EAP teaching resources available on the mainstream market, as a means of meeting the real needs of their students going on to any number of degree programmes. However, as I mentioned in section 2 of this blog post, many of these in-house EAP materials make use of third-party copyrighted texts and therefore cannot be shared beyond the secret garden of the classroom or the institutional password-protected VLE. An enormous opportunity presents itself here to EAP practitioners and corpus linguists alike to push out resources in English for Specific Academic Purposes (ESAP) using open Data-Driven Learning (DDL) methods, texts, tools and platforms for sharing OER for ESAP. A significant cultural shift in practice will be required, however, to realise this vision for developing flexible and open ESAP resources that can be adapted for use in multiple educational contexts both off- and on-line. Once again, in subsequent blog posts, I will be presenting open educational practices and open research methods to open up discussion of ways forward with this particular global EAP vision.
References
Alexander, O., Bell, D., Cardew, S., King, J., Pallant, A., Scott, M., Thomas, D., & Ward Goodbody, M. (2008) Competency framework for teachers of English for Academic Purposes, BALEAP.
Hyland, K. (2006). English for Academic Purposes: An Advanced Handbook. London: Routledge.
Johns, T. (1994). From Printout to Handout: Grammar and Vocabulary Teaching in the Context of Data-driven Learning. In Odlin, T. (ed.), Perspectives on Pedagogical Grammar: 27-45. Cambridge: Cambridge University Press.
Previously, I left off with reflections from the 2012 IATEFL conference and exhibition in Glasgow. Wandering through the exhibition hall crammed with vendor-driven English language resources for sale from the usual suspects (big brand publishers), the analogy of the greatest hits came to mind with respect to EFL / ESL and EAP materials development and publishing. But at this same IATEFL event there was also a lot of co-channel interference feeding in from the world of self-publishing, reflecting how open digital scholarship has become mainstream practice in Teaching English as a Foreign Language (TEFL), also known as Teaching English as a Second Language (TESL) in North America. The launch of the round initiative at IATEFL, bridging the gap between ELT blogging and book-making, where the emphasis is on teachers as publishers, is but one example.
Crosstalk in ELT materials development and publishing
Let’s take a closer look at the crosstalk happening within the world of ELT materials development and publishing, where messages are being transmitted simultaneously from radio 1 and radio 2 type stations. Across the wider ELT world, TEFL / TESL has embraced Web 2.0 far more readily than EAP (but there are interesting signs of open online life emerging from some EAP practitioners, which I will highlight in the last section of this blog).
Within TEFL, we can observe more in the way of collaboration between open and proprietary publishing practices. English360, also present at IATEFL 2012, combines proprietary content from Cambridge University Press with teachers’ lesson plans, along with tools for creating custom-made, pay-for online English language courses. Across the ELT resources landscape, open resources and practices proliferate, including: free ELT magazines and journals; blogs and commentary-led discussions; micro-blogging via Twitter feeds and tweetchat sessions; instructional and training videos via YouTube and iTunesU (both proprietary channels that hold a lot of OER); and online communities with lesson plan resource banks. These and many more open educational practices (OEP) are the norm in TEFL / TESL. And let’s not forget Russell Stannard’s Teacher Training Videos website of free resources for navigating web-based language tools and projects, drawing on his service as the Web Watcher at English Teaching Professional for well over a decade now.
The broken record in ELT publishing
Broken record of “I believe in miracles” by Ian Crowther via Flickr
Yet, both the TEFL / TESL and EAP markets are still well and truly saturated with the glossy print-based textbook format, stretching to the CD-ROM and mostly password-protected online resource formats. The greatest hits get played over and over again and the needle continues to get stuck in many places.
Exactly why does the closed textbook format concern me so much? It’s really an issue of granularity, or size, which leads to further issues with flexibility, specificity and currency. As we all know, there are only so many target language samples and task types that you can pack into a print-based textbook. Beyond the trendy conversation-based topics, what are sometimes useful and transferable are the approaches that make up the pedagogy contained therein. Unlocking these approaches and linking to wider, more relevant and authentic language resources is key. We can see this approach to linked resources development taken by the web-based FLAX and WordandPhrase corpus-based projects. Publishers are aware of the limitations of the textbook format, but they are also trying to reach a large consumer base to boost their sales, so it remains in their best interests to keep resources generic. Think of all the academic English writing books out there, many of which claim to be based on current research for meeting your teaching and learning needs for academic English writing across the disciplines, but turn out to be more of the same topic-based how-to skills books working within the same essayist writing tradition.
Open textbooks
The open textbook movement brings a new type of textbook to the world of education. One that can be produced at a fraction of the cost and one that can be tailored, linked to external resources, changed and updated whenever the pedagogical needs arise.
The argument in favour of textbooks in ELT has always been one of providing structure to the teaching and learning sequence of a particular syllabus or course. Locked-down proprietary textbook, CD-ROM and online resource formats are not only expensive but inflexible. And they force teachers into problematic practices. Despite trying to point out the perils of plagiarism to our students, as language teachers we supplement textbooks with texts, images and audio-visual material from wherever we can beg, borrow and steal them. Of course we do this for principled pedagogical reasons, and if we don’t plan on sharing these teaching materials beyond classroom and password-protected VLE walls we’re probably OK, right?
I’ve seen many a lesson handout or in-house course pack for language teaching that includes many third party texts and images which are duly referenced. Whether the teacher/materials developer puts the small ‘c’ in the circle or not, marking this handout or course pack as copyrighted, the default license is one of copyright to the institution where that practitioner works. And, this is where the problem lies. The handout or course pack is potentially in breach of the copyright of any third party materials used therein, unless the teacher/materials developer has gained clearance from the copyright holders or unless those third party materials are openly licensed as OER for re-mixing. Good practice with materials development and licensing will ensure that valuable resources created by teachers can be legitimately shared across learning and teaching communities. You can do this through open publishing technologies and/or in collaboration with publishers.
A deficit in corpus-based resources training
Good corpus-derived textbooks from leading publishing houses do exist. Finally, the teaching of spoken grammar gets the nod with The Handbook of Spoken Grammar textbook by Delta Publishing. But, and this is a big but, do these textbooks go far enough to address the current deficit in teacher and learner training with corpus-based tools and resources? I expect the publishers would direct this question to the academic monographs, of which there are a fair few, on Data Driven Learning (DDL) and corpus linguistics. I have some on my bookshelf and there are many more in the library where I am a student/fellow, all cross-referenced to academic journal articles from research into corpus linguistics and DDL which I will be talking about more in the third section of this blog. But exactly how accessible are these resources – in terms of their cost, the academic language they are packaged in, the closed proprietary formats they are published in, and in relation to much of the subscription-only corpora and concordancing software their research is based on? It’s no wonder that training in corpus tools and resources is not part of mainstream English language teacher training. Of course, there are open exceptions that provide new models in corpus-based resources development and publishing practices and this is very much what the TOETOE project is trying to share with language education communities.
Corpus linguists are well aware that corpus-based resources and tools in language teaching and materials development haven’t taken off as a popular sport in mainstream language teaching and teacher training. This runs counter to the findings from the research, however, where the argument is that DDL has reached a level of maturity (Nesi & Gardner, 2012; Reppen, 2010; O’Keeffe et al., 2007; Biber, 2006). Similarly, many leading researchers (too many to cite!) in teaching and language corpora have been baffled by the chasm between the research into DDL and the majority of mainstream ELT materials on the market, which continue to ignore the evidence about actual language usage from corpus-based research studies. Once again, this comes back to the issue of specific versus generic language materials and the issues raised around the limitations of developing restricted resource formats.
Gangnam style corpus-based resources development
Gangnam Style by PSY 싸이 강남스타일 via Flickr
So what’s it going to take for corpus-based resources to take off Gangnam style in mainstream language teaching and teacher training? And how are we going to make these resources cooler and more accessible, so as to stop language teaching practitioners from giving them a bad rap? More and more corpus-based tools and resources are being built with, or re-purposed with, open-source technologies and platforms. We are also presented with ever more web-based channels for the dissemination of educational resources, offering the potential for massification and exciting new possibilities for achieving what has always eluded the language education and language corpora research community, namely the wide-scale adoption of corpus-based resources in language education.
I’ve actually been asked to take the word ‘corpus’ out of a workshop title by a conference organiser so as to attract more participants. If you’re interested in expressing your own experiences with using corpora in language teaching and would like to make suggestions for where you think data-driven learning should be heading you can complete Chris Tribble’s on-going online survey on DDL here.
Radio, what’s new? Someone still loves you (corpus-based resources)…
PublishOER
Publishers constantly need ideas for, and examples of, good educational resources. No great surprises there. I would like to propose that OER and OEP are a great way to get noticed by publishers and to start working with them. Sitting in on the steering committee meeting with the JISC-funded PublishOER project members at Newcastle University in the UK in early September, we also had representatives from Elsevier, RightsCom, the Royal Veterinary College (check out their exciting WikiVet OER project) and JISC Collections at the table. Elsevier, who have borne the brunt of the backlash in academic publishing from the Open Access movement, are trying to open up to the fast-changing landscape of open practices in publishing. PublishOER are creating new mechanisms, including a permissions request system, for allowing teachers and academics to use copyrighted resources in OER. These OER will include links and recommendations leading back to the publishers’ copyrighted resources as a mechanism for promoting them. Publishers are also interested in using OER developed by teachers and academics that are well designed and well received by students. Re-mixable OER offer great business opportunities for publishers as well as great dissemination opportunities for DDL researchers and practitioners, enabling effective corpus-based ELT resources to reach broader audiences.
Sustainability is an important issue with any project, resource, event or community. How many times have we seen school textbook sets sit unused on shelves, or heard of government-funded project resources that go unused, perhaps due to a lack of discoverability? Building new and useful resources online does not necessarily mean that teachers and learners will come in droves to find and use them, even if they are free. David Duebelbeiss of EFL Classroom 2.0 is currently exploring new business models for sharing and selling ELT resources. One example is the sale of ‘lesson plans in a can’, which were once free and now sell for $19.95, a “once and forever payment”. Some teachers can even strike it rich, as reported in this Businessweek article about a kindergarten teacher who sold her popular lesson plans through the TeachersPayTeachers initiative.
Transaction costs in materials development don’t only include the cost of the tools and resources that enable materials development; they also include the time spent on developing resources and marketing them. Open education also points to the unnecessary cost of duplicating the same educational resources over and over again because they haven’t been designed and licensed openly for sharing and re-mixing. Putting your resources in the right places (more than one), and working with those who understand new markets, new technologies and new business models, including open education practitioners and publishers, are all ways forward to ensure a return on investment with materials development.
Hopefully, by providing new frequencies for practitioners to tune into for creating resources from both open and proprietary content, a new mixed economy (as the PublishOER crowd like to call it) will be realised.
A matter of scale in open and distance education
Let’s not forget those working in ELT around the world, many of whom are volunteers, who along with their students simply cannot afford the cost of proprietary and subscription-only educational resources, let alone the investment and infrastructure for physical classrooms and schools. Issues around technology and ELT resources and practices in developing countries did surface at IATEFL 2012, but awareness of the more pressing issues may not be filtering through effectively to well-resourced ELT practitioners and the institutions that employ them. ELT is still fixated on classroom-based teaching resources and practices.
The Hornby Educational Trust, in collaboration with the British Council, which is a registered charity, has been offering scholarships to English language teachers working in under-resourced communities since 1970. I attended a session given by the Hornby scholars at IATEFL 2012 and, although I was impressed by the enthusiasm and range of expertise of those who had been selected for scholarships, reporting on ELT interventions they had devised in their local contexts, I couldn’t help but wonder about the scale of the challenges we currently face in education globally. How are we going to provide education opportunities for the additional 100 million learners currently seeking access to the formal post-secondary sector (UNESCO, 2008)? In Sub-Saharan Africa, more than half of all children will not have the privilege of a senior high school education (ibid.). What open and distance education teaches us is that there are just not enough teachers and educators out there. Nor will the conventional industrial model of educational delivery be able to meet this demand.
As DDL researchers and resource developers who are looking for ways to make our research and practice more widely adopted in language teaching and learning globally, wouldn’t we also want to be thinking about where the real educational needs are and how we might be reaching under-resourced communities with open corpus-based educational resources for uses in EFL / ESL and EAP among other target languages? First of all, we would need to devote more attention to unpacking corpus-based resources so that they are more accessible to the non-expert user, and we would need to find more ways of making these resources more discoverable.
In interviews with leading TEFLers at IATEFL 2012, released as OER on YouTube by DigitaLang, I was able to catch up on opinions around the use of technology in ELT. Nik Peachey corrected the widely held misconception about the digital divide and the use of technology in developing countries, pointing to the adoption of mobile and distance education rather than the importation of costly print-based published materials with first-world content and concerns that are often inappropriate for developing-world contexts. You can view his interview here:
Thinking beyond classroom-based practice
Scott Thornbury, writer of the A-Z of ELT blog – another influential and popular discussion site for the classic hits in ELT for those who are both new and old to the field – also praised the Hornby scholars and gave his views on technology in ELT in a further IATEFL 2012 DigitaLang interview. He talks about the ‘human factor’ as something that occurs in classroom-based language teaching. In order to nurture this human factor, he recommends that technology be kept for uses outside the classroom or at best for uses in online teacher education. Open and distance education practitioners and researchers would also agree that well-resourced face-2-face instruction yields high educational returns as in the case of the Hornby scholarships, but they would also argue that this is not a scalable business model for meeting the needs of the many who still lack access to formal post-secondary education. What is more, the human factor as evidenced in online collaborative learning is well documented in the research from open and distance education as it is from traditional technology-enhanced classroom-based teaching.
For a view into how open and distance education practitioners and researchers are trying to scale these learning and accreditation opportunities for the developing world, the following open discussion thread from Wayne Mackintosh on MOOCs for developing countries – discussion from the OERuniversity Google Groups provides an entry point:
“Access to reliable and affordable internet connectivity poses unique challenges in the developing world. That said, I believe it possible to design open courses which use a mix of conventional print-based materials for “high-bandwidth” data and mobile telephony for “low-bandwidth” peer-to-peer interactions. So for example, the OERu delivery model will be able to produce print-based study materials and it would be possible to automatically generate CD-ROM images of the rich media (videos / audio) contained in the course for offline viewing. We already have the capability to generate collections of OERu course materials authored in WikiEducator to produce print-based equivalents which could be reproduced and distributed locally. The printed document provides footnotes for all the web-links in the materials which OERu learners could investigate when visiting an Internet access point. OERu courses integrate microblogging for peer-to-peer interactions and we produce a timeline of all contributions via discussion forums, blogs etc. The bandwidth requirements for these kind of interactions are relatively low which address to some extent the cost of connectivity.”
References:
Altbach, P. G., Reisberg, L., & Rumbley, L. E. (2009). Trends in Global Higher Education: Tracking an Academic Revolution. A Report Prepared for the UNESCO 2009 World Conference on Higher Education. Retrieved from http://unesdoc.unesco.org/images/0018/001832/183219e.pdf
Biber, D. (2006). University language: a corpus-based study of spoken and written registers. Amsterdam: John Benjamins.
Nesi, H, Gardner, S., Thompson, P. & Wickens, P. (2007). The British Academic Written English (BAWE) corpus, developed at the Universities of Warwick, Reading and Oxford Brookes under the directorship of Hilary Nesi and Sheena Gardner (formerly of the Centre for Applied Linguistics [previously called CELTE], Warwick), Paul Thompson (Department of Applied Linguistics, Reading) and Paul Wickens (Westminster Institute of Education, Oxford Brookes), with funding from the ESRC (RES-000-23-0800)
Nesi, H. and Gardner, S. (2012). Genres across the Disciplines: Student writing in higher education. Cambridge: Cambridge University Press.
O’Keeffe, A., McCarthy, M., & Carter R. (2007). From Corpus to Classroom: language use and language teaching. Cambridge: Cambridge University Press.
Reppen, R. (2010). Using Corpora in the Language Classroom. Cambridge: Cambridge University Press.
Original, in-house and live, this station brings us what’s new in the world of OER for corpus-based language resources.
Flipped conferencing
Kicking things off in late March with Clare Carr from Durham, we co-presented an OER for EAP corpus-based teacher and learner training cascade project at the EuroCALL CMC & Teacher Education Annual Workshop in Bologna, Italy. This was very much a flipped conference, whereby draft presentation papers were sent to participants to read in advance and the focus at the physical event was on discussion rather than presentation. Russell Stannard of Teacher Training Videos (TTV) was the keynote speaker at this conference, and I have been developing some training resources for the FLAX open-source corpus collections which will go live on TTV soon. New collections in FLAX have opened up the BAWE corpus and have linked it to the BNC, a Google-derived n-gram corpus, and Wikimedia resources, namely Wikipedia and Wiktionary. These collections in FLAX show what’s cutting edge in the developer world of open corpus-based resources for language learning and teaching.
Focusing on linked resources: which academic vocabulary list?
In a later post, I will be looking at Mark Davies’ new work on Academic Vocabulary Lists, based on a 110 million-word academic sub-corpus of the Corpus of Contemporary American English (COCA), which moves away from Coxhead’s (2000) Academic Word List (AWL) and its 3.5 million-word corpus, as well as his innovative web tools and collections built on COCA. Once again, Davies’ Word and Phrase project website at Brigham Young University bundles together powerfully linked resources, including a collocational thesaurus that links to other leading research resources such as WordNet, the on-going lexical database project at Princeton.
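To give a flavour of what a linked lexical database like WordNet offers, here is a minimal illustrative sketch in Python using NLTK’s WordNet reader. This is not the Word and Phrase interface or any FLAX code; the example word “innovation” is an arbitrary choice, and the output simply shows the synonym sets and definitions that such linked resources expose.

```python
# Illustrative only: querying WordNet for synonym sets and definitions,
# the kind of linked lexical information discussed above.

import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)  # fetch the WordNet data if not already present

word = "innovation"  # arbitrary example word
for synset in wn.synsets(word):
    # Each synset groups synonymous senses and carries a gloss (definition).
    synonyms = sorted({lemma.name().replace("_", " ") for lemma in synset.lemmas()})
    print(f"{synset.name()}: {synset.definition()}")
    print(f"  synonyms: {', '.join(synonyms)}")
```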
The open approach to developing non-commercial corpus-based learning and teaching resources in FLAX also reflects the commitment to OER at OUCS (including the Oxford Text Archive), where the BAWE and BNC research corpora are both managed. Click on the image below to visit the BAWE collections in FLAX.
BAWE case study text from the Life Sciences collection in FLAX with Wikipedia resources
Open eBooks for language learning and teaching
Learning Through Sharing: Open Resources, Open Practices, Open Communication was the theme of the EuroCALL conference, and to follow it up the organisers have released a call for OER in languages for the creation of an open eBook on the same theme. The book will be “a collection of case studies providing practical suggestions for the incorporation of Open Educational Resources (OER) and Practices (OEP), and Open Communication principles to the language classroom and to the initial and continuing development of language teachers.” This open-access eBook, aimed at practitioners in secondary and tertiary education, will be freely available for download. If you are interested in contributing to this electronic volume, please send a case study proposal (maximum 500 words) by 15 October 2012 to the co-editors of the publication: Ana Beaven (University of Bologna, Italy), Anna Comas-Quinn (Open University, UK) and Barbara Sawhill (Oberlin College, USA).
MOOC on Open Translation tools and practices
Another learning event which I’ve just picked up from EuroCALL is a pilot Massive Open Online Course (MOOC) in open translation practices being run by the UK’s Open University from 15 October to 7 December 2012 (8 weeks), with the accompanying course website opening on 10 October 2012. Visit the “Get involved” tab on the following site: http://www.ot12.org/. “Open translation practices rely on crowd sourcing, and are used for translating open resources such as TED talks and Wikipedia articles, and also in global blogging and citizen media projects such as Global Voices. There are many tools to support Open Translation practices, from Google translation tools to online dictionaries like Wordreference, or translation workflow tools like Transifex.” Some of these tools and practices will be explored in the OT12 MOOC.
Bringing open corpus-based projects to the Open Education community
On the back of the Cambridge 2012 conference, Innovation and Impact – Openly Collaborating to Enhance Education, held in April, I’ve been working on another eBook chapter on open corpus-based resources, which will be launched very soon at the Open Education conference in Vancouver. The Cambridge 2012 event was jointly hosted in Cambridge, England by the OpenCourseWare Consortium (OCWC) and SCORE. Presenting with Terri Edwards from Durham, we covered EAP student and teacher perceptions of training with open corpus-based resources from three projects: FLAX, the Lextutor and AntConc. These three projects vary in terms of openness and the type of resources they offer, and in future posts I will look at their work and the communities that form around their resources in more depth. The following video from the conference captures our presentation and the ensuing discussion with a non-specialist audience curious to know how open corpus-based resources can support the open education vision. Embedding these tools and resources into online and distance education, to support the growing number of learners worldwide who wish to access higher education, where OER and most published research are in English, opens a whole new world of possibilities for open corpus-based resources and for the EAP practitioners working in this area.
A further video, from a panel discussion I contributed to on an OER kaleidoscope for languages, looks at three further open language resource projects that are currently underway and building momentum here in the UK: OpenLives, LORO and the CommunityCafe. References to other established OER projects for languages and the humanities, including LanguageBox and HumBox, are also made in this talk.
A world declaration for OER
The World OER Congress in June at the UNESCO headquarters in Paris marked ten years since the term OER was coined in 2002 and saw the formal adoption of an OER declaration (click on the image to see the declaration). I’ve included the following quotation from the declaration to provide a backdrop to this growing open education movement as it applies to language teaching and learning, highlighting that attribution for original work is standard practice with Creative Commons licensing.
Emphasizing that the term Open Educational Resources (OER) was coined at UNESCO’s 2002 Forum on OpenCourseWare and designates “teaching, learning and research materials in any medium, digital or otherwise, that reside in the public domain or have been released under an open license that permits no-cost access, use, adaptation and redistribution by others with no or limited restrictions. Open licensing is built within the existing framework of intellectual property rights as defined by relevant international conventions and respects the authorship of the work”.
Wikimedia – why not?
Wikimedia Foundation
Earlier in September, I volunteered to present at the EduWiki conference in Leicester, which was hosted by the Wikimedia UK chapter. Most people are familiar with Wikipedia, the sixth most visited website in the world; it is, however, just one of many sister projects managed by the Wikimedia Foundation, alongside others such as Wikiversity and Wiktionary.
I will also be blogging soon about widely held misconceptions about the use of Wikipedia in EAP and EFL / ESL, while exploring its potential in writing instruction with reference to some very exciting education projects using Wikipedia around the world. The types of texts that make up Wikipedia, alongside many academics’ realisation that they need to reach wider audiences through more accessible modes of writing, are issues I will be commenting on in this blog in the very near future.
In presenting the work the FLAX team have done with text mining, incorporating David Milne’s Wikipedia mining tool, I demonstrated the potential of Wikipedia as an open corpus resource for language learning and teaching, and showed how this Wikipedia corpus has been linked to other research corpora in FLAX, namely the BNC and the BAWE, for the development of corpus-based OER for EFL / ESL and EAP. And let’s not forget that it’s all free!
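For readers curious about what “linking text to Wikipedia” can look like in practice, here is a minimal, hypothetical sketch. It is not David Milne’s Wikipedia Miner toolkit and not FLAX code; it simply looks up a term against Wikipedia’s public REST summary endpoint with the `requests` library, returning the article’s lead extract if one exists. The term “Corpus linguistics” is an arbitrary example.

```python
# Illustrative only: fetch the lead summary of a Wikipedia article for a given term.

from typing import Optional

import requests

def wikipedia_summary(term: str, lang: str = "en") -> Optional[str]:
    """Return the lead extract of the Wikipedia article for `term`, or None if not found."""
    url = f"https://{lang}.wikipedia.org/api/rest_v1/page/summary/{term.replace(' ', '_')}"
    response = requests.get(url, headers={"User-Agent": "corpus-blog-example/0.1"}, timeout=10)
    if response.status_code != 200:
        return None
    return response.json().get("extract")

if __name__ == "__main__":
    print(wikipedia_summary("Corpus linguistics"))
```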
The open approach to corpus resources development
There is no reason why the open approach taken by FLAX cannot be extended to build open corpus-based collections for learning and teaching other modern languages, linking different language versions of Wikipedia to relevant research corpora and resources in the target language. In particular, the functionality in the FLAX collections that enables you to compare how language is used differently across a range of corpora, further supported by additional resources such as Wiktionary and Roget’s Thesaurus, makes for a very powerful language resource; a rough sketch of this kind of cross-corpus comparison appears below. Crowd-sourcing corpus resources through open research and education practices, and through the development of open infrastructure for managing and making these resources available, is not as far off in the future as we might think. The Common Language Resources and Technology Infrastructure (CLARIN) mission in Europe is a leading success story in the direction currently being taken with corpus-based resources (read more about the recent workshop for CLARIN-D held in Leipzig, Germany).
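The sketch below is a rough, assumption-laden illustration of cross-corpus comparison, not FLAX functionality: it contrasts the most frequent right-hand collocates of an arbitrary node word (“evidence”) in two freely downloadable NLTK corpora (Brown and Reuters), which stand in here for research corpora such as the BNC or BAWE.

```python
# Illustrative only: compare the commonest right-hand collocates of a word in two corpora.

from collections import Counter

import nltk
from nltk.corpus import brown, reuters

nltk.download("brown", quiet=True)
nltk.download("reuters", quiet=True)

def right_collocates(words, node, top_n=10):
    """Return the `top_n` words that most often immediately follow `node`."""
    counts = Counter(
        second.lower()
        for first, second in nltk.bigrams(words)
        if first.lower() == node.lower() and second.isalpha()
    )
    return counts.most_common(top_n)

if __name__ == "__main__":
    node_word = "evidence"  # arbitrary example word
    print("Brown corpus:  ", right_collocates(brown.words(), node_word))
    print("Reuters corpus:", right_collocates(reuters.words(), node_word))
```

Even this toy comparison makes the pedagogical point: the same word keeps very different company in general prose and in newswire, which is exactly the kind of contrast learners can explore across linked collections.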
These past few months I’ve been tuning into a lot of different practitioner events and discussions across a range of educational communities which I feel are of relevance to English language education where uses for corpus-based resources are concerned. There’s something very distinct about the way these different communities are coming together and in the way they are sharing their ideas and outputs. In this post, I will liken their behaviour to different types of radio station broadcast, highlighting differences in communication style and the types of audience (and audience participation) they tend to attract.
I’ve also been re-setting my residential as well as my work stations. No longer at Durham University’s English Language Centre, I’m now London-based and have just set off on a whirlwind adventure for further open educational resources (OER) development and dissemination work with collaborators and stakeholders in a variety of locations around the world. TOETOE is going international and is now being hosted by Oxford University Computing Services (OUCS) in conjunction with the Higher Education Academy (HEA) and the Joint Information Systems Committee (JISC) as part of the UK government-funded OER International programme.
I will also be spreading the word about the newly formed Open Education Special Interest Group (OESIG), the Flexible Language Acquisition (FLAX) open corpus-based language resources project at the University of Waikato, and select research corpora, including the British National Corpus (BNC) and the British Academic Written English (BAWE) corpus, both managed by OUCS, which have been prised open by FLAX and TOETOE for uses in English as a Foreign Language (EFL) – also referred to as English as a Second Language (ESL) in North America – and English for Academic Purposes (EAP). Stay tuned to this blog in the coming months for more insights into open corpus-based English language resources and their uses in different teaching and learning contexts.
This post is what those in the blogging business refer to as a ‘cornerstone’ post, as it includes many insights into the past few months of my teaching fellowship in OER with the Support Centre for Open Educational Resources (SCORE) at the Open University in the UK. Many posts within one, as it were. It also provides a road map for taking my project work forward while identifying shorter blogging themes for the posts that will follow. This particular post will act as the mother-ship TOETOE post from which subsequent satellite posts will be linked. Please use the red menu hyperlinks in the section below to dip in and out of the four main sections of this blog post series. I have chosen this more reflective style of writing through blogging so that my growing understanding in this area is more accessible to unanticipated readers who may stumble upon this blog and, I hope, leave comments that help me refine my work. Two more formal case studies on my TOETOE project to date will be coming out soon via the HEA and the JISC.
I have also made this hyperlinked post (in five sections) available as a .pdf on Slideshare.
Which station(s) are you listening to?
BBC Radio has been going since 1927. With audiences in the UK, four stations in particular are firm favourites: youth-oriented BBC Radio 1, featuring new and contemporary music; BBC Radio 2, with middle-of-the-road music for the more mature audience; high-culture and arts-oriented BBC Radio 3; and news and current affairs-oriented BBC Radio 4. Of course there are many more stations, but these four are very typical of those found around the world. What is more, I’ve selected these four very distinct stations as the basis for a metaphor about the way four very distinct educational practitioner communities are intersecting with corpus-based language teaching resources. This metaphor will draw on thought waves from the following: