Research-led teaching; research impact; and the politics of journal publishing, predation and prestige.

For the first time in my academic career of 6.5 years since PhD graduation, I am expected to teach both an undergraduate and a postgraduate course, within about a month of commencing my duties as a senior lecturer at my present institution. This is exciting yet a little anxiety-provoking, not because I do not believe in my capabilities, but because there are so many new things to grapple with, and also because I have been given a free hand in deciding my course content (as long as the institutionally approved learning outcomes remain intact) while figuring out how to deliver it to students whom I have not yet met, and who are very different from the students I had been surrounded by previously.

The exciting thing about the courses is that I get to learn alongside the students (I mean, I have never taught classes specific to drama/performance arts!) and, at the same time, use this time to question what I understand about art (because this is the first time in my entire academic career that I am actually in an arts school; not just the arts you theorize about, but the arts you practice). This allows me to explore the other side, given how much of my earlier career experience had been focused on a scientific endeavor that informed a large portion of my post-doctoral life. Even if the courses I am teaching appear at this point to have nothing to do with my ongoing research, preparing for classes by reading through the texts I will assign students has actually been instructive in giving me a different view on the research I do (even though the research stems from different disciplines and departments of knowledge), and will, perhaps, make me better at being the artscientist I am working towards becoming.

To think about research impact from a less personal perspective: the Malaysian academic sector has been rocked by a furore over the revelation that its academics have been participants in predatory publishing. It might seem strange to most that this revelation should be anything new, given that the practice has been ongoing globally, involving not only academics occupying positions of marginal resources and networks within the larger complex (and often problematic) structure of academic capital, but also academics working in marginal fields (who might appear, seemingly, to have more capital) who may have had difficulty publishing their work in the usual venues because such work is considered fringe. There is a fine line between willfully participating in the predatory culture to score points that fulfill the provincial goals of your institution, with no understanding of the consequences for your larger standing, and working with publishers that sit in the grey area between predatory and not-really-predatory by offering certain academics a platform for their work without necessarily compromising the usual vetting process (I have seen this happen in certain areas of the sciences, where otherwise respectable researchers would put together special issues in journals that thread this fine line).

There is also a need for empathy in understanding this issue. Those of us trained in elite institutions, surrounded by standards and examples we take for granted as part of our training, may fail to realize that such opportunities and experiences are not equally distributed. Moreover, access to discussion groups, study groups, and extracurricular workshops and summer schools, where we were exposed to a variety of articles published across a range of journals, allowed some of us to form a more discerning understanding of what constitutes respectability and prestige within our fields, and therefore the capital not to succumb to publication 'scams', although this is obviously not always the case. Further, publishing in prestige journals is a very onerous and often confidence-breaking process (having been on both sides of it myself, as the person who submits and as the person who decides whether the article I have been asked to review is worthy of publication), and it is difficult for inexperienced and under-resourced junior scholars to navigate with limited support and limited knowledge of its politics. If scholars from more prestigious institutions with ample resources face this much challenge in getting published in such venues in order to advance their careers, what more the academic who has almost no standing or support of any kind on this unequal playing field?

At the same time, we have to be aware that even the (usually commercial) publishers of respectable journals can take advantage of such grey-area practices by creating different classes of journals (usually of the OA kind, although I recognize that not all OA journals are of the same class) that may or may not fall into the predatory scheme of things, but which create different classes of scholars: scholars who are able to publish in the prestigious journals held by these publishing houses (some with acceptance rates well below 10% in any given year), and scholars whose work 'did not make the cut' and who are then 'persuaded' to try alternative journals that charge OA fees upon acceptance. These journals are not the same as journals that insist you pay upon submission, or ones whose review period is a day or less, since they still adhere to some standards. Some new 'interdisciplinary' journals tend to fall under this grey-area category (and you will find far more scholars from developing economies than from developed economies published in them), although these journals may not in themselves be illegitimate, since a lack of prestige does not translate to illegitimacy, in the same sense that journals with very high acceptance rates or articles of questionable quality are not necessarily predatory. But how much a scholar would benefit from publishing in journals less read or known by their peers is something they have to consider, though it may not even be something they care about. Many highly ranked, generalized, prestigious journals (particularly in the humanities and social sciences) often do not accept work in areas considered too provincial in interest, insufficiently 'scholarly', or not fitting within the prescribed standard of rigour (what that means is another debate); therefore, what you may consider acceptable within your local scholarly culture does not translate into similar acceptance elsewhere. Also, if you are covering a topic of minority interest in the scholarly world, you find yourself having to do more work than scholars working on well-represented topics to justify how useful your contribution is to the rest of the world. In the humanities and social sciences, general journals are usually the most highly ranked, but generality also irons over the recognition of difference and under-representation.

In the same vein, an increasing number of academic collectives in the humanities and social sciences have also started no-fee OA journals that tend not to be indexed in the databases that metrics-driven universities care about, largely because many of these collectives resist succumbing to the exploitative nature of those databases (the databases charge a handsome sum for journals to be included; it is not merely a matter of the aspiring journal being approved by the editors of journals already indexed). For certain, one often gets a much better experience, and even professional recognition of one's work, by submitting to these no-fee OA journals than to a seemingly high-quartile journal that respected scholars in your field do not even care about, although such OA journals are scarcely recognized in the more metrics-obsessed academic cultures. That said, the world's obsession with metrics-based ranking has its beginnings in the major (almost monopolistic) publishing houses of Europe, which saw good money to be made through the creation of such a system; this was then amplified by other organizations, not entirely disinterested in the matter, that saw profit in creating a system of 'world-class' rankings. Technically, this had its beginnings in the more developed 'Commonwealth' countries of the UK and Australia. Interestingly enough, Canada never quite succumbed to it, probably because it tends to compare itself much more with the US system, which was never big on metrics outside of its own national sphere. Then, when some of the more 'developed' Asian countries decided in 2007 that they wanted to be part of the 'world-class' game, the business of metrical assessment grew. I remember first being inducted into this matter, without quite understanding what it meant, during a discussion about scientific 'internationalism' with a professor in Singapore. I won't go into too much detail, as I too am still in the process of studying the matter, and at some point I would like to take on a minor side project that looks at the meaning of research impact and its assessment. Perhaps together with another collaborator!

That said, I would like to share something I first wrote on my social media page when responding to the aforementioned issue of predatory publishing, on what research impact means to me, although some of these descriptions of impact may not be measurable by quant-centric metrics:


1. Research impact is when someone, upon reading your work (in whatever form it takes) in detail (not just as a gesture to cite), decides to invite you to speak, visit, or join in on a collaborative work, and then engages with you intelligently on your work.
2. Research impact is when a graduate student (from anywhere in the world) reaches out to you after having read your work, wanting to discuss the work you have produced.
3. Research impact is when your work is recommended by one person to another as an important contribution to the field.
4. Research impact is when the society you share your work with tells you how important it is for such research to be carried out because of the import it has on their lives; if it does not benefit them immediately, it will benefit the generations after them.
5. Research impact can sometimes mean pioneering new ways of thinking, doing and being, meaning your work may be uncitable for a long time because nobody is doing the thing you are doing and therefore nobody is in direct conversation with it. But if it is truly excellent work, even if not perfect, it will have its time of glory (whether you live to see it is another story).
6. Research impact is when someone other than yourself assigns your work as class reading. Or even when assigning your own work (when relevant) adds value to the learning process (although that comes with a caveat).
