Thursday, September 24, 2009

The BBC responds.

A week had gone by and there was no response from BBC Education News to my email informing them of my blog post "Tech addiction 'harms learning' .....really??? $24.99 and I am no wiser". I probably would have left it at that.

The post was about an alarmingly titled BBC online news story on a study set in an English secondary school; the full report could only be accessed by paying $24.99. I did buy it, and I used this blog to inform others who were interested but hadn't purchased it. My finding was that this was a piece of poor research, done by people without backgrounds in education, and presented in a way that suggested that any peer-review process it had been near was 'light touch' in approach. In short, this Sigel Press "Special Report" didn't live up to the publisher's claim that it would contain "groundbreaking information", be "written by global experts", and be an "indispensible resource[s] to keep you up to speed in your field." For the record, my field is not secondary school education. I am a doctor and a university teacher and researcher. I maintain this blog as a way of connecting with others in the wider education community. Several of my past posts have criticised the methodology of peer-reviewed research on the use of new media in medicine; research which has generally been reported positively. I make it clear on this blog that I don't support the use of technology "for the sake of it", to the extent that I have on occasion gained the moniker "web 2.0 skeptic". And if there were evidence that the use of the internet or other technology really did harm learning, I would want to know about it. I'm not a pushover.



I emailed BBC Education News because I thought that anyone who had the report in their hands would have reached the same conclusions as me. I emailed Cranfield University PR department as well, and they thanked me and said they would pass my comments to the authors immediately. I didn't really expect to get any responses.



But Paul Bradshaw wasn't happy. He is a senior lecturer in journalism at Birmingham City University. He had emailed the BBC Education department as well, and today he started chasing for a response. On the off chance, I emailed the BBC again, and 30 minutes later there was a reply to the email sent a week earlier. This is from Gary Eason, the BBC News website education editor:

"Hi Anne Marie
Thank you for your thoughts. The author of the article did have the whole report in front of her and interviewed one of the authors. I do not agree that our headline is "sensationalist".
best wishes
GE"

OK, we can agree to disagree, I suppose. But then I saw Paul's blog post about the matter. His interaction with Mr. Eason was considerably longer and contains the following quote:

"It seems to me the results don’t fit her world view so she sets about rubbishing them. Is she seriously arguing that ‘cut-and-paste plagiarism’ is not a problem?”

Spot the logical fallacies. This study was not good science and should not have been reported by the BBC. My worldview has nothing to do with it and is simply a red herring. In any case as I have pointed out above, I am not dogmatic about the place of technology in education. I look for evidence to inform me about what we should be doing.

Next, we have the straw-man attempt to rubbish my blog post. I made no comment at all on whether plagiarism is a problem. All of us working in education know that this can be an issue if assessments are designed badly. But my argument was that this research told us nothing about the relationship between learning and 'addiction to technology'. It possibly could have done, as the researchers had data which could have been analysed to tell us something about this. But they didn't. Yes, it was a small study with a dubious response rate, but they failed to make the best of the data they had.

Tom Morris comments on Paul Bradshaw's blog that this is a "perfect example of a glaring editorial problem". I think I agree. What do you think?

Thursday, September 17, 2009

Email to BBC News Education Re: Tech Addiction "Harms Learning"

Dear BBC

I was disappointed when I read
this article as I
could immediately see that the research was likely not to be of good quality.
But I was more concerned that you had managed to construct a sensationalist
title to go along with it. A cross-sectional study could never establish the
kind of causative relationship that your title implies.

I paid $24.99 to download the full report and my suspicions of poor
standards in research were supported. Did the author of this article actually
read the report or simply base their story on a press release from Cranfield
University?

Since this report is not freely available to the public, I think that the
BBC, a publicly funded body, has an even greater onus to ensure high
quality reporting of such 'research'.

Here is my blog response.

Yours faithfully,

Anne Marie Cunningham

.......................................................

So how do we go about starting a campaign for decent science journalism on the BBC?


Wednesday, September 16, 2009

Tech addiction 'harms learning' .....really??? $24.99 and I am no wiser




EDIT 11/12/09 This post has been nominated for an Edublog Award for "Most Influential Blog Post". You can vote here. Thank you to Sarah Stewart for her nomination.




Last night, I started noticing tweets about this BBC News Education story in my twitter stream. Researchers at Cranfield University had published a report "Techno Addicts: Young Person Addiction to Technology" about a study they had conducted where 267 secondary school pupils completed a written questionnaire about their mobile phone and internet use. Included in the BBC story is the statistic that 63% of respondents 'felt addicted' to the internet and 53% 'felt addicted' to their mobile phone. The BBC headline ("Tech addiction 'harms learning'") suggests that the researchers have established a relationship between this feeling of addiction and poor learning. In fact, the headline suggests a causal relationship which a cross-sectional study could not establish, but the body of the text doesn't really support any relationship between addiction and learning.


I wanted to know more, so I set out to find and read the report. Googling the full title pulled up a link to the Sigel Press site, where the report could be purchased for $24.99. A press release from Cranfield University confirmed that this was the only way to get my hands on it. It also was clear that none of the authors had an education background. The two main authors, Nadia and Andrew Kakabadse, have a blog showcasing their many interests, but education doesn't feature amongst them. They describe themselves as "experts in top team and board consulting, training and development". I bought the report.











I expected the report by university academics to follow a standard format but it doesn't. It is 24 pages long and contains no references and no appendices. The survey instrument is not included.


Mainly it consists of charts illustrating question responses. Unfortunately it contains some typos and poor grammar.


No response rate is given, although we are told that the single school contained 1277 students and that there were 267 respondents, so it may have been as low as 21%.
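For anyone who wants to check that figure, here it is in a couple of lines of Python. This is a worst-case bound only: it assumes all 1277 students at the school were actually given the questionnaire, which the report does not state.

```python
# Worst-case response rate, assuming all 1277 students at the school
# were invited to take part (the report does not say).
respondents = 267
students = 1277
print(f"{respondents / students:.1%}")  # prints 20.9%
```

If fewer students were surveyed, the true response rate would be higher, but the report gives us no way to know.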

With regards to 'tech addiction' this seems to have been a self-assessment based on response to the question: How addicted are you to the internet or your mobile phone? The proportions given in the BBC report are those who stated they were 'quite' or 'very' addicted. Of course, we don't know what the students meant by 'addicted'.
With regards to this addiction harming learning, there is no analysis relating the perception of being addicted to outcomes in learning. In fact very few of the questions are related in any way to learning.
It is hard to understand several sections of the report because of the lack of access to the questionnaire. For example, with regards to plagiarism the authors state that "A high proportion of students (84.3%) openly admitted that they inserted information from the Internet into their homework or projects on a number of occasions." The tone of this sentence reflects some of the bias found throughout the work. The authors don't seem to be aware that, if referenced, it is acceptable to insert information from the internet into work, so the students would have no reason to be ashamed and fear 'openly admitting' this. The finding that 59.2% of students have inserted information into work without reading it is more concerning. It is also reported that 28.5% of students "feel it acceptable to insert information from the Internet straight into schoolwork without editing or making adjustment, recognising that such behaviour is considered plagiarism." It would help a lot to see how that question was actually worded in the survey, as in the figure it is simply represented as "Ok to “insert” information from the Internet straight into schoolwork- Yes/no". That's not quite the same!

But there is no analysis relating the amount of time spent online (or the perception of addiction) to the likelihood of inserting internet content into work without reading it. It may be that those who spend less time online have weaker information literacy skills and are more likely to plagiarise.

In summary, this report tells us very little about internet addiction or learning. Do you think that someone writing for the BBC website actually read the report? Many of those who tweeted about the BBC article thought there were no surprises in the findings, and that perhaps it suggested that teaching methods needed to change.

This evening Ben Goldacre and Lord Drayson were debating the state of science journalism in the UK. I wonder why the BBC gives space to research which is so poor. How did they manage to concoct such an alarming headline? And why do people believe it? Is it because, as one person responded to me last night, there is the perception that "U may fault methodology, results true"?
And the quotes from the authors are not even results, just their thoughts, which may chime with readers. But it's definitely not science.



Image: "Playing with the new baby cell phone" http://www.flickr.com/photos/cwinters/2150107228/



EDIT: You can read the BBC response to this blog post here.


Apologies for quietness


It's 2 months to the day since my last post. First there were holidays and then I broke my wrist. I was whirling round in a dance tent at the Green Man festival and then suddenly I wasn't. I was rushing backwards towards the ground and put my left hand out to save myself.
I think a broken wrist is a pretty good excuse for a blogging hiatus, though my story is not quite as dramatic as Stephen Fry's.


Sunday, July 12, 2009

Web 2.0 tools and medical education - more sceptical comments

Last week, @drves described me as a web 2.0 skeptic. Those who know me in 'real life' would certainly agree that I ask many questions and may doubt received wisdom. Now it seems this facet of my personality is more apparent in the online world too!



Lemley, T., & Burnham, J. (2009). Web 2.0 tools in medical and nursing school curricula. Journal of the Medical Library Association: JMLA, 97(1), 50-52. DOI: 10.3163/1536-5050.97.1.010

The above paper was published in January 2009. It has been talked about a lot on twitter today because it was mentioned in a student BMJ article, which was then picked up in a blog post by Dr Ves. The finding that '45% of medical schools use Web 2.0 tools in their curricula' is the one most often cited on twitter and elsewhere. So what does this mean, and how did the authors draw their conclusions?

Method
A survey was conducted using Survey Monkey. Participants were identified by emailing a link to the survey to 3 different email lists:
  • DR-ED (for those involved in medical education): 1383 subscribers
  • AACN (for those involved in nursing education): 150 subscribers
  • AAHSL (for academic health librarians, who were asked to forward the survey to those responsible for curricula in their institution): 146 subscribers

The questionnaire is given in an appendix. Although the title and background to the article talk about web 2.0, the first question asks about use of the following 'web 2.0/social networking tools':
  • Blog
  • Del.icio.us or some type of social bookmarking resource
  • Flickr or some type of photo-sharing resource
  • Moodle
  • MySpace, Facebook, or some type of online community
  • Podcasts
  • Videocasts
  • Wiki
  • YouTube or some type of video-sharing resource

I don't know why Moodle, which is an open-source, flexible learning management system (LMS) or virtual learning environment (VLE), is included as a web 2.0 or social networking tool. Moodle does support the use of web 2.0 tools, but so can other VLEs, so it is unclear why it is listed here.

Results

There was no way of tracking how those who responded to the survey found it, i.e. did they find it on a list themselves, or was it passed to them by a librarian? In any case, there were responses from 36 individuals involved in medical school education and 19 individuals involved in nursing school education.

The response rate from the medical school list is no higher than 36/1383 or 2.6%, and possibly lower if some of the responses came via the librarians' list.
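These are upper bounds only, and easy to check. The sketch below assumes every response came directly from the corresponding list, which, as noted, the survey could not track; the AACN figure likewise assumes all 19 nursing responses came via that list.

```python
# Upper-bound response rates per mailing list, assuming every response
# came directly from that list (the survey could not track this).
for name, responses, subscribers in [("DR-ED", 36, 1383), ("AACN", 19, 150)]:
    print(f"{name}: at most {responses / subscribers:.1%}")
# DR-ED: at most 2.6%
# AACN: at most 12.7%
```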

Several responses may have come from individuals in the same medical or nursing school, as responses were anonymous.

Despite this, the authors go on to report results as the percentage of medical schools engaging in the use of web 2.0 tools, rather than the percentage of medical educator respondents. The individuals who responded that they did not use these tools may work in institutions where many others do, and the individuals who responded positively may be the sole educators, out of several hundred or more in their institution, to use the tools.

The questionnaire did contain a question ("Please briefly describe how these tools are incorporated into your instruction.") which allowed free-text response and could have provided some information for a qualitative data analysis, but no results are given.

My conclusion

Does this paper tell us anything about the use of web 2.0 tools in medical and nursing schools in the US? No.


Is the authors' justification of validity despite the low response rate, because the study is "to gain insight into an issue", appropriate? No, because exclusively quantitative results are published.

This paper is short. It is open-access. I think that with a cursory look, most people would have reached similar conclusions to me. So why were so many people referring to this paper today without any criticism of the severe weaknesses in methodology?

Thanks to @drcolinmitchell for drawing my attention to this research.
He has also published a great post about this paper.

Tuesday, July 7, 2009

Where do junior doctors look things up?

A short time after my post on where medical students look things up, @drcolinmitchell tweeted about a paper on where junior doctors look things up.
Hughes, B., Joshi, I., Lemonde, H., & Wareham, J. (2009). Junior physician's use of Web 2.0 for information seeking and medical education: A qualitative study. International Journal of Medical Informatics. DOI: 10.1016/j.ijmedinf.2009.04.008
I have to admit that when I first glanced at this paper I thought the methodology was good. There is talk of triangulation, inter-coder reliability, etc. But when it is read more deeply, much of it simply does not make sense because key concepts are so loosely defined. In the past few days I have seen this paper mentioned several times on twitter and in blogs, but there has been little or no mention of the poor quality of this study. Therefore I thought I should add my thoughts to the debate.

Method

The study took place in the NHS in England. The subjects were junior doctors. 55 were identified through a stratified sample (across 10 different specialties) from 300 graduates of a London medical school. 50 of these agreed to participate, but only 35 completed all three stages. More demographic data on the participants would have been useful.

Next, they were given a questionnaire, used in previous research on this topic, and asked to keep a diary over at least 5 days of every website they accessed for work. Finally each participant was interviewed although themes were saturated after 20 interviews.

Results

From the survey data, 32 of 35 said they used web 2.0 sites, and of these 28 used wikis (reading the content; only one doctor contributed to wikis). Next, looking at the diary data, confusingly, google.com is now referred to as web 2.0 content, whilst in the survey it was not. 80% (28/35) of physicians used google during the five days. 25/35 reported using wikipedia. Smaller percentages used yahoo.com, doctors.net.uk and Facebook. This data is presented in chart form with percentages of physicians accessing each site (e.g. google, wikipedia, NICE). Presentation in tabular form with absolute numbers of accesses would have given more information.

The participants were asked to state, for each of the 444 events where they accessed information online, whether this was a web 2.0 or user-generated content site, a hybrid, or a traditional content site. The doctors said that on 235 occasions they were accessing web 2.0 content. However, the authors have classed the 142 uses of google and 115 uses of wikipedia (a total of 257) as web 2.0 content. No absolute numbers for the access of yahoo, facebook and doctors.net are given, so the agreement between the authors and participants over what constitutes web 2.0 is not clear.
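The mismatch is plain from the figures the paper itself quotes (the per-site counts for yahoo, facebook and doctors.net are not given, so this is as far as the arithmetic can go):

```python
# Figures as quoted in the paper.
events_labelled_web20_by_doctors = 235
google_uses = 142
wikipedia_uses = 115

# The authors' classification counts google + wikipedia alone as web 2.0...
authors_web20_total = google_uses + wikipedia_uses
print(authors_web20_total)  # 257

# ...which already exceeds the doctors' own labelling by 22 events.
print(authors_web20_total - events_labelled_web20_by_doctors)  # 22
```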

The authors then present themes from the interview data. Here "using the internet" is confusingly equated with web 2.0 content. There is mention that doctors look things up online because it is easily accessible and up to date, but at times they are uncertain about the quality or usefulness of the information found. The authors introduce a taxonomy of information needs from the interviews, which they then use to analyse the information needs addressed in the diaries, categorising 237 out of 444 internet accesses/information needs:
  1. "to solve an immediate defined problem": "to advance an immediate task in the clinical context and forms a closed question with a specific answer"; 107 of the total information needs, of which 90 were addressed through use of "hybrid or best evidence tools" (these tools are not specified)
  2. "background reading on a subject": 130 information needs, of which 107 were addressed through the use of "web 2.0"
It is not clear why 207 web accesses were not classified, or which sites were accessed in those diary entries.
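That gap follows directly from the reported numbers:

```python
# Coverage of the authors' taxonomy over the diary data (figures from the paper).
total_accesses = 444
closed_questions = 107     # category 1: "immediate defined problem"
background_reading = 130   # category 2: "background reading"
classified = closed_questions + background_reading
print(classified)                   # 237
print(total_accesses - classified)  # 207 accesses left unclassified
```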

Further information is given on the way that doctors used google. 21 out of 35 mentioned using google as a way of navigating between trusted sites. It is not stated whether these trusted sites were named in the diary.

There is then some discussion of how these (web 2.0) sites could be better used in clinical contexts. Doctors mentioned:
  • patient education: comment is made on patient use of wikipedia and the need to educate patients about different sites
  • physician education: awareness of "web2.0 sites", as the difficulty is in finding out about sites (wikipedia and google? or were they referring to some other web 2.0 sites? or to trusted web 1.0 sites?); not much training is necessary as the sites are so easy to use
  • remove blocks to web2.0 sites - it is reported that google is blocked in

I am not commenting on the discussion section of the paper because I found the method and results sections quite perplexing. No clear definition of web 2.0 content is given. It is not clear why the use of google is considered use of web 2.0. As Mark Hawker has pointed out, google is a web 1.0 application. (Data is indexed by computers and pulled by humans. The content is not in any way user-generated or social.) Previous researchers such as Sandars and Schroter, whom this paper cites, did not consider google to be a web 2.0 application.

Because the authors did not use a clear definition of web 2.0 content, this work can tell us very little about doctors' use of web 2.0 content. It is possible that most doctors are using the same trusted websites that they have always used. They use wikipedia because it is easily accessible (free, with no passwords needed) and is equivalent to an online textbook. The user-generated content of Wikipedia is not a factor for most doctors. Credibility of user-generated content for physicians did not emerge as a theme in the qualitative work. Instead, they were concerned about how patients might use the same websites that they use.

Overall, I found the study very disappointing. We need debate and discussion on how best to address the informational needs of clinical staff. To me, the best description of these needs still seems to be Richard Smith's BMJ review in 1996. Now we should be asking: have the information needs of doctors changed in the last 13 years? How are these needs best addressed by current technologies, and what tools should we be trying to develop?

What do you think? Am I being too harsh? Why did you like this paper?