Potential new datasets
UKMED is open to recommendations about the research questions that the project should be seeking to answer in order to promote excellence in medical education and to increase understanding of training pathways.
The pilot phase was designed to prove the value that joined-up data adds to medical education research, and to ensure that robust practices are in place for data linking, handling and release. It included selection tests at entry to medical school linked to performance in the first years of practice. In parallel with the pilot phase, other potential data contributors have been working on their permissions statements to ensure that they are in a position to contribute. After 2016 the database will encompass future cohorts and more recent versions of medical school selection tests. However, there is also scope to include new datasets, guided by clear research priorities.
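UKMED's own linkage pipeline is not described here, but the general pattern of joining two extracts on a pseudonymised identifier can be sketched as below. This is a minimal illustration only: the field names (`ref`, `ukcat_score`, `f1_in_difficulty`), the salt and the salted-hash scheme are all hypothetical, not UKMED's actual practice (a production system would typically use a keyed HMAC held by a trusted third party).

```python
import hashlib

def pseudonymise(identifier: str, salt: str) -> str:
    """Replace a direct identifier with a salted hash so that the
    released, linked extract cannot be traced back to the doctor.
    (Illustrative only; a keyed HMAC would be preferable in practice.)"""
    return hashlib.sha256((salt + identifier).encode()).hexdigest()

def link(selection_records, outcome_records, salt="project-salt"):
    """Join two extracts on a pseudonymised common identifier,
    dropping the direct identifier from the released records."""
    outcomes = {pseudonymise(r["ref"], salt): r for r in outcome_records}
    linked = []
    for r in selection_records:
        key = pseudonymise(r["ref"], salt)
        if key in outcomes:
            merged = {**r, **outcomes[key]}
            merged.pop("ref")  # remove the direct identifier before release
            linked.append(merged)
    return linked

# Hypothetical extracts: a selection-test record and an early-practice outcome.
selection = [{"ref": "1234567", "ukcat_score": 2750}]
outcomes = [{"ref": "1234567", "f1_in_difficulty": False}]
print(link(selection, outcomes))
```

The salt ensures that the same identifier hashed for a different project yields a different pseudonym, so records cannot be linked across unrelated releases.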
We are inviting comments on relevant research questions and related datasets. Please contact us if you would like to:
- Help us identify issues for further exploration in the existing datasets
- Comment on the selection of additional datasets
The table below sets out potential new datasets identified by the UKMED Advisory Board and Research Subgroup members. Each of the datasets in the table includes commentary on expected benefits. Where the dataset would require further work before it could be considered for the current phase of the project, the status and additional work is described. Questions for consideration are as follows:
- Are there datasets that should be included, amended or disregarded?
- Are there additional considerations to explore for any of the datasets identified?
- Of the possible datasets and related benefits, which should UKMED prioritise?
Potential datasets for inclusion in UKMED 2017 onwards
| Dataset | Current status | Further work required | Benefits | Contacts |
|---|---|---|---|---|
| Common fields across test provider registration forms | To be discussed with all test providers in the review of their data sharing agreements. | | To allow consistent data capture of key demographic variables of interest that are not available in the HESA extract. | BMAT, GAMSAT and UKCAT |
| Individual medical school selection data – Multiple Mini Interviews (MMIs) (see Medical school multiple mini interview data collection (2017) – Briefing note) | Approximately 18 medical schools use MMIs in selection. The MMI data are held by the individual medical schools. | Review of privacy notices. | Ability to assess the predictive validity of MMIs, one of the most widely used selection tools. | One contact per medical school is required. |
| Individual medical school selection data – statements and references | Individual medical schools may hold data used in their selection processes, for example scoring of personal statements, work experience forms and/or references. Individual medical schools may flag applicants as being eligible for a contextual offer and may collect other data relevant to the widening participation agenda. | To clarify the range of selection tools used by different schools and the availability of data. | Ability to assess the predictive validity of other measures used in selection and for schools to demonstrate the validity of the tools they use. Ability to understand contextual admissions. | One contact per medical school is required. |
| E-portfolio data | Foundation trainees and trainees in each specialty use e-portfolios to record workplace-based assessments. The system supplier and the available data vary by specialty and may vary by year. | To ascertain how useful e-portfolio data might be and the work involved, it might be best to select one or two e-portfolios for inclusion on a pilot basis. The foundation e-portfolio will be used by all UKMED cases and is an obvious candidate for any pilot. | Ability to assess the predictive validity of workplace-based assessments used in national training programmes. | Deaneries for the foundation e-portfolio. Individual colleges for specialties. |
| Clinical outcomes for individual consultants/GPs | Three major groups of data could be used as criterion measures for predictive validity studies. | Significant work to assess the availability, range and quality of the individually identifiable datasets. We would need to explore whether there are particular procedures where it would be reasonable to attribute the event to the responsible consultant (instead of a team/service). HSCIC note that activity linked to the GMC number of the lead consultant responsible for the care of the patient will not be directly attributable to that consultant and can only be attributed to the 'consultant team', as it incorporates (although does not currently distinguish between) the work of the whole team, including junior doctors and anaesthetists. | Potential to link doctors' training outcomes to clinical practice; see, for example, Norcini et al (2014). | The Healthcare Quality Improvement Partnership |
| Full placement history | A full history of each trainee's training placements, as opposed to the annual NTS snapshot. | This may be contingent on developments to the LETB and deanery systems to allow transfer of this volume of data to the GMC. | An understanding of whether the posts a trainee rotates through are associated with performance on particular elements of an exam. An understanding of whether exposure to a specialty is associated with specialty choice. | LETB and deanery databases. |
| Electronic Staff Record data from each nation for primary and secondary care | These data are held by the GMC for mapping doctors to responsible officers for revalidation purposes. | Ensure the data sharing agreements would allow inclusion in UKMED. Ascertain how much preparation of these data would be required to make them useful for UKMED purposes. | The ability to look at employment post-CCT, for example who goes on to become a consultant. To improve workforce planning, which the Public Accounts Committee has recently indicated requires improvement. | Departments of Health |
| Revalidation | The GMC holds revalidation data. The following statuses are available for each doctor: recommendation to revalidate, approved to defer (insufficient evidence to support a recommendation to revalidate), recommendation to defer (participating in an ongoing process) or non-engagement. | | Understanding of which factors predict revalidation status. | GMC Registration |