Leveraging Library Expertise for University Rankings

September 7, 2019 | By Stanley Isaacs


Hi, everyone, I am Liz Bernal, I’m the Library
Assessment Officer at Case Western Reserve University and with me is Lauren Di Monte,
Director of Research Initiatives at University of Rochester. And today we’re going to be
talking about leveraging library expertise for university rankings. So why do rankings matter? International and
national rankings help students, parents, researchers, universities, and governments
worldwide determine which universities are the top ones in the world. These lists focus
on a variety of different criteria and methodology. The big three international rankers that we
all should know are QS, THE or Times Higher Ed, and ARWU, which is also known as Shanghai
Rankings. As national and international rankings help students, faculty, researchers, and governments determine which universities to work with, this means that if your
university or institution is not, say, in the top 100 or 150 schools, you might lose
out on prospective students from other countries, researchers may not want to collaborate with
you because you do not have a strong international academic reputation, or you might lose out
on potential grants or funding from other countries. So for example, if you’re falling
below the 100 or 150 mark, you may wind up losing out on revenue, your academic reputation
may not be as stellar, to the point where students from other countries are unwilling to come to your
institution. It’s very important for us to realize that international students do have
to pay out of pocket to come to the U.S., to come to school here, so if they’re unable
to get that funding or they have to try harder for it, your university may fall below that
mark and they would then pick another school that has a stronger academic reputation. And
then, always important, there is international collaboration: who your faculty are working with, which researchers are working with you. You want to make sure that when you’re looking
at rankings, that international collaboration is very strong. If that area is a weak point against the criteria and methodology that the international rankers are using, that is something you definitely want to be looking at as well. So we’re going
to go into more details on that, and then Lauren here is going to be talking about the
core expertise that the libraries can provide to you. [Lauren]: So when you hear about international
rankings in particular, you might wonder, what does the library have to do with this?
It feels like there’s not an obvious connection. But the thing that I think is really important
to remember is that bibliometric indicators are a core part of how rankings are determined.
Bibliometrics are used as a proxy for research productivity, and so because of that, there’s
a lot of opportunity for libraries to participate in rankings initiatives and grow your own
reputation on campus, ultimately. So a lot of the work that happens around rankings relates
back to very traditional library work – authority control, making sure that you have the right names for your authors and that they are affiliated correctly in various kinds of citation
and publication databases. This is super traditional, old-school library work, and we are the only
ones who know how to do this. Half the time – we’ll get into our collaborators later – when
I’m working with other folks on campus, they go, “Oh my god, how do we do this? This is
so hard, how can anyone know anything about this?” and we can always say, “Hey, the library
knows. We know how to do this.” Because it’s also based on bibliometric indicators and
we have that expertise, we’ve been doing this work for a long time, we can draw on lots
of different skills within the library to help us understand things about citation,
to help us understand things about, how do we actually measure productivity in a way
that makes sense? We just have those skills, and it’s not something that we necessarily
even have to hire for, you probably have folks in your collections departments or other kinds
of librarians out there who know how to do this work, so I think that’s a really powerful
thing for us to leverage again. Another piece of this puzzle is that a lot of the times
the databases that we use to do this work, we already have relationships with these vendors
or we are already paying for these things, so there’s this interesting collections opportunity
here, where we can, again, demonstrate that we are already ahead of the curve, we already
have our subscription to Scopus and SciVal, let’s say, or our subscription to Web of Science and InCites, and talk about that; but the other thing that’s really interesting is that
we can work at a higher level to coordinate resources and talk about the kinds of things
that we need to be purchasing. So in our case, for example, the university has a subscription
to Academic Analytics, which is very expensive, and our university went off and bought this
without any consultation with the library, and so we ended up talking about that in this
broader discussion around, “Should we purchase Scopus? Should we do this, should we do that?”
and we’re at the table now talking about these resources and these collections in a way that is much more holistic and, I think, really powerful. But on top of that, if we’re
doing all this work around bibliometrics and around understanding the research impact of our scholars, it actually helps us do better collection development as well.
The fringe benefit is a richer understanding of what is actually
being produced at our university, and so we can actually then tailor that collection development
in ways that make a lot of sense. And finally, with that deeper knowledge, it improves our
opportunities to do outreach to faculty, as well as outreach to our campus partners. So
there’s this nice set of skills that we already bring to the table and this nice set of benefits
that we can draw from participating in these kinds of initiatives. So I just wanted to
get that out on the table and say, you probably have the capacity to do this work already
at your institution. So for the next part of the presentation we’re
going to go a little deeper into what’s happening at our various institutions and talk about
some projects that have spun out related to rankings. So for the University of Rochester,
I wanted to start off by talking about the values that we bring to the table around rankings.
I think it’s really important to note that we do not want to change who we are to suit
rankings, we’re not trying to make changes to what we’re doing to rise in the rankings
explicitly. We want to continue to be the university that we are with the strengths
and strategic priorities that we are already pursuing. Having said that, we don’t want
our position in the global research environment to change because we’re not paying attention
to these things. So this is the push and pull that we’re always balancing, and I think it’s
important that we set these values from the very beginning, because when we’re having
these conversations, you can see it’s a slippery slope, you might be able to say, “Oh, if we
make this change we might be able to do this,” or, “This might be able to happen, we can
rise here, rise there.” We don’t want to get into that kind of thing, we don’t want to
get into this frame of mind where we’re telling people where to publish or we’re trying to
play around with any of these figures, we really just want to make sure that we’re representing
ourselves as best as possible. So just to get that on the table. When we’re doing this
work at the university, the library is involved, but we have a core team of collaborators that
are driving these projects, and so we work very, very closely with the Vice Provost for
Global Engagement. The international rankings piece is a strong driver, as Liz was saying,
for international students, for international faculty, for international graduate students,
and so there’s a strong interest from that office around understanding where we are in
terms of rankings and what we can do to improve how we are being represented. Another important
collaborator is the Office of Institutional Research. A lot of the data that gets submitted
to rankings agencies is institutional data, it’s coming from that office. We have another
key collaborator, our Assistant Dean for Data Analytics at our School of Arts, Science,
and Engineering. This is a really interesting role: one exclusively focused on data analytics that sits between institutional research and the Arts, Science, and Engineering school,
and then me, which is nice. So you can see that we’re part of this team that represents
a lot of different interests with a lot of different kinds of skills, so we meet, I would
say, weekly at this point trying to understand where we are and what we need to do next.
We do work that is, on the one hand, around cleaning data and understanding where
we sit with the data, but we also strategize around outreach. How do we reach out to various
academic departments? How do we reach out to academic leadership to make sure that they
understand what we’re doing and why it matters? And so we end up spinning out lots of projects
together to improve rankings but also to help with our own projects locally, and so for
that reason, beyond the actual core team, we’ve had multiple collaborations with lots
of different people around campus. So the library is a key one here, our collections
department, metadata, outreach, liaison librarians, they’re always involved in aspects of projects
involved with rankings. Our Chief Data Officer, whom we just hired – the full title, I believe, is Associate Vice Provost of Data Governance – is also involved, and the data governance piece
becomes really, really important, because one of the things – and I’ll talk about this
a little later – that was really challenging is just defining who is a faculty member.
If you can’t define who a faculty member is, how do you count how many faculty you have?
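A toy sketch of why this matters (all names, job codes, and numbers here are hypothetical, not Rochester's data): the same citation total yields very different citations-per-faculty figures depending on which job codes count as "faculty."

```python
# Hypothetical illustration: the "who counts as faculty?" decision sets the
# denominator of a citations-per-faculty style metric.

def faculty_headcount(people, faculty_codes):
    """Count people whose job code falls inside the agreed faculty definition."""
    return sum(1 for p in people if p["job_code"] in faculty_codes)

def citations_per_faculty(total_citations, people, faculty_codes):
    return total_citations / faculty_headcount(people, faculty_codes)

people = [
    {"name": "A", "job_code": "TT_PROF"},  # tenure-track professor
    {"name": "B", "job_code": "TT_PROF"},
    {"name": "C", "job_code": "RES_SCI"},  # research scientist
    {"name": "D", "job_code": "ADJUNCT"},
]
citations = 1200

# Narrow definition: tenure-track only
narrow = citations_per_faculty(citations, people, {"TT_PROF"})
# Broad definition: every teaching or research appointment
broad = citations_per_faculty(citations, people,
                              {"TT_PROF", "RES_SCI", "ADJUNCT"})
```

Until the definition is settled in a system of record, the denominator – and therefore the metric – is unstable.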
How do you know what to look at when you’re doing bibliometrics? So there’s this interlocking set of questions
and issues that begin to arise. Campus IT is another collaborator, we have this enterprise
application governance process, so if we’re buying systems, if we’re buying software,
if we’re buying tools to help us do this work, we have to involve them, which is interesting,
too. Another key collaborator that has emerged is actually our Associate Vice Provost for
Career Education Initiatives, and, again, that might not seem obvious, but there are
these reputational pieces around rankings, around employers and other kinds of … how
academics perceive you and how employers perceive you, and so gathering those names and understanding
who to talk to has become a collaborative effort. And then, of course, the Director
of Academic Affairs is really important when we’re thinking about faculty data. Making
sure you’re involving all the right people is really important because it streamlines
the process, but also it means that you get more buy-in for these projects. So we do try to follow a process when we’re
engaging in any rankings initiative, so really we always start with the data, we gather the
data, we try to understand, what are the current trends across the various ranking bodies?
And the ones that Liz mentioned, the international rankers, are the ones that we focus on the
most – QS, ARWU, and THE – and then we really do try to spend some time unpacking the methodologies.
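One small habit that helps with that unpacking: keep each year's indicator weights in machine-readable form and diff them. The indicator names and numbers below are illustrative placeholders, not actual QS figures.

```python
# Sketch: spot year-over-year changes in a ranking methodology's weights.
# (Weights below are made up for illustration.)

def weight_changes(old, new):
    """Return {indicator: (old_weight, new_weight)} for changed indicators."""
    keys = set(old) | set(new)
    return {k: (old.get(k, 0), new.get(k, 0))
            for k in keys if old.get(k, 0) != new.get(k, 0)}

weights_last_year = {"academic_reputation": 40, "citations_per_faculty": 20,
                     "faculty_student_ratio": 20, "employer_reputation": 10,
                     "international_mix": 10}
weights_this_year = {"academic_reputation": 40, "citations_per_faculty": 20,
                     "faculty_student_ratio": 15, "employer_reputation": 15,
                     "international_mix": 10}

changed = weight_changes(weights_last_year, weights_this_year)
```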
They change every year a little bit, so you have to pay attention, which is challenging,
and then we try to understand what has contributed to any kind of shift in our ranking year to
year. That can be tricky and a lot of it is best guess, but you try to work from the data
that you have to make educated assumptions about why you are where you are. And then
we do try to launch those collaborative projects, so with all the people involved on the previous
slide, we try to decide who needs to be involved with this and what we can do to make
these marginal gains over time? And then we try to operationalize, because it’s all fine
and good to do a big data cleaning project one year, but you can’t necessarily do that
every year, so how do we leverage what we’ve learned to turn this into something that can
be done in a sustainable way over time? So, for example, we’re doing this big data cleaning
project now in Scopus, and now looking at how we can use our – we call them outreach
librarians, so subject liaisons, basically – how can we use them and build that into
their work to look at affiliations, to look at how author names are written down
in these databases so that becomes a group effort rather than me and a student trying
to do our best to make the data cleaning happen? And for timeline, I will say that we consider
ourselves fairly new to this, but this work has been going on since 2016, so in 2016 is
when the Office of Global Engagement really began to pay attention to our rank, and I’ll
be honest, it’s because we were slipping in the rankings and it was becoming an issue.
So Jane Gatewood grabbed that and took it on herself; they began to look at what institutional data could be cleaned, involved the Office of Institutional Research, and then the library became involved around mid-2017, and so we are just getting going with
our core team now. So it does take time. I don’t know how long you guys have been doing
it for. [Liz]: 2017. [Lauren]: 2017, okay. So we consider ourselves
new, but Case is way ahead of us, I’m just going to say that. So we’ve recently been
focusing a lot on the QS rankings in particular, and we do this for a number of reasons. The
first is that it is the most popular ranking used by international students and their parents,
mostly. I think international faculty and international graduate students also do look
at this, but the other reason we focus on QS is because it’s one of the few rankings
that we can actually submit data to, which means we can intervene in what they’re looking
at and what’s actually going on here. And so from the library perspective, what we focus
on is this 20%, which is the citation per faculty metric. So we’ve spent a lot of time
over the past year and a half unpacking this QS methodology. So it looks like it should
be pretty simple, citations per faculty. What they do is they take a normalized citation
count and then divide that by the number of faculty. The trick here is how that normalization
takes place. So I’m not going to get into too many details,
but I want to show you what we had to look at, essentially. So when you look at the distribution
of citations in the Scopus database, which is the database that the QS rankers use, QS
divides the citations into these five faculty areas. You can see them here: arts and humanities,
natural sciences, so on and so forth. If you look at the distribution of citations in Scopus,
they look like this, so that 1% are arts and humanities, let’s say, and about 49% are life
science and medicine. What the QS methodology does for citations per faculty, though – they have this formula that makes it look like science, just thought I’d flash that up for you – is normalize those shares so that every faculty area is worth 20% of the score. So if arts and humanities holds about 1% of the citations in Scopus but counts for 20%, one citation in an arts and humanities journal is worth roughly 20 times more than a citation in a heavily cited faculty area. So this has implications for how your institution
is represented in these databases. So we spend a lot of time looking at how this methodology
affects how we are represented and found a lot of really interesting issues. So the first thing that really came to the
forefront is that we got some bad data in there, really bad data. I’ve been working
with a computer science student who’s developed an algorithm to help us begin to check and
understand where we are in terms of author data and author affiliations, and 60% of our
faculty have bad data of some kind. That’s a lot, that’s a huge amount. So we have incorrect
affiliation information, we have name disambiguation issues, we have incomplete or missing author
profiles. So if we are trying to get at a really nice citation for a faculty member
but 60% of our people are not being represented well, that’s not really going to help us improve
that score. We also, interestingly, found some misalignment, so in terms of what QS
considers, let’s say, engineering and what we do. So we think of ourselves as a fairly
strong engineering school, but the kind of engineering that we do gets lumped into the
natural sciences bucket, not into the engineering bucket. So what does that mean for us if we’re
losing out on that whole bucket because we work a lot with lasers and nuclear stuff and
we think of that as our engineering? It’s kind of an interesting thing to look at, because
we were always wondering why our engineering numbers were so low, and this helped us understand
that. We also really surfaced a need for data governance: a need to be able to understand what we mean by a faculty member and what we mean by a student. We need
to also think about actually having systems of record, because if you need to get data,
you need to know where to get it. Right now we have to ask a lot of different people for
a lot of different datasets that we have to put together ourselves, so it really increases
the amount of time it takes for us to do this work. And then we also have to establish data
sharing processes. People weren’t used to the library coming and asking, “Hey, can you
give me a list of faculty members?” or, “Can you give me a list of students?” “You never
asked for this before, why are you asking me for this?” We had to establish these kinds
of relationships even to be able to do this work. So the strategies that we have at this
point are to obviously clean the data in Scopus, which is what we’re doing now, and so that’s
a mixture of algorithms that we’ve developed and manual data cleanup, which is laborious,
absolutely. We’ve also launched an Orcid project, this is one of those projects that spun out
from this rankings initiative. We’ve been able to do a huge outreach project through
the library, but we’ve embedded Orcid into the faculty annual reporting system, so we
have buy-in from the School of Arts, Science, and Engineering to get folks with Orcid IDs,
essentially, and that was because we could talk about the ways in which Scopus uses Orcid
as part of their disambiguation algorithms within the system. So we had to have that
technical knowledge to be able to make the case for what ended up being a really interesting
and robust outreach project for our librarians. For the misalignment piece, we’re not going
to do anything different, we’re not going to change how do we do our engineering, but
we can maybe talk to QS about some of the issues that we see around how they’re defining
what engineering means, for example. Whether this will be successful or not remains to be seen,
but this is one of those areas where our values come into play again, we’re not going to say,
“Everybody, stop with the lasers and do civil engineering,” or something, that’s not really
what we’re going to be doing here, so all we can do is talk about what we’re seeing
when we do our own analysis. From the data governance perspective, again, we’re collaborating
very closely with the Chief Data Officer. The library is actually involved in helping
her develop a conceptual data model for the university, so we have a sense of how we’re
beginning to define these things, and then how the systems of record will be able to
employ that model to do the work we need to do around faculty and student information.
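As a rough sketch of what such a model might hold (the field names are my own illustration, not the actual Rochester model), a faculty record pairs a canonical identity from the system of record with the variant forms found in external databases:

```python
# Illustrative only: a minimal "faculty" record for a conceptual data model.
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class FacultyRecord:
    person_id: str                # key in the institutional system of record
    canonical_name: str           # the name we want external databases to carry
    job_code: str                 # drives the "who counts as faculty?" definition
    orcid: Optional[str] = None   # persistent author identifier, if known
    name_variants: Set[str] = field(default_factory=set)
    affiliation_variants: Set[str] = field(default_factory=set)

# Example record (identifiers are made up)
rec = FacultyRecord(person_id="U0042", canonical_name="John Andrew Smith",
                    job_code="TT_PROF", orcid="0000-0000-0000-0000")
rec.name_variants.update({"J. Smith", "J. A. Smith", "Johnny Smith"})
```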
And then the library is really at the table now for lots of different kinds of projects
around the faculty information system, around faculty job codes. This is a huge project
that spun out of this, because, again, if you can’t define who a faculty member is,
where does that start from? It starts from the contracts that people sign, and so our
Head of Metadata is actually at the table developing this job code piece for the university.
So, again, the fringe benefits to the library for this work have been tremendous, and, again,
just drawing on really traditional expertise in many ways. So the results at this point. This is our
sad waterfall, this is where we are in terms of rankings, but the one I want to pay attention
to is the QS rank, which is the red line here. You can see that we’ve been able to stabilize
our position in QS because of the kinds of interventions we’ve been doing. And we have
a better understanding of the issues affecting rankings and we have increased collaboration
across units, which I think is really the good news story here. And then, finally, the
library is a key player in the strategy around this work, as well as in data analysis and
outreach. So I think from a reputational perspective for the library, the rankings piece has been
very, very helpful and effective. We are sitting at tables that we were not sitting at before
because of the expertise that we draw on to do this work. [Liz]: Great, thanks, Lauren. Lauren’s presentation
… we talked about who should go first and I said, “You have a lot of really good overview, and then I can take the next step and talk about the process that Case Western has been working through,” so you’ll be able to see some of the actual work that we’ve been doing. Case Western has been actively
working through the international rankings issue since March 2017. My university librarian
had been talking with the Vice President of International Affairs, and they determined
that the library would be a really important player in trying to correct some of the issues,
so he came to me and said, “So what are we going to do to fix our international rankings?”
and the first thing I said was, “What are international rankings?” and then from there
I had to figure out what I needed to do. So I began with trying to break down the components
that make up the citation impact – the bibliometrics – as related to international rankers, and determined that the variations of the institution and faculty names were absolutely critical,
so that’s where I started. I started looking at the institutional variations and affiliations.
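Part of that institutional-variant checking can be automated with nothing more than the standard library. This is a sketch under my own assumptions – the threshold and abbreviation rules would need tuning against a real variant list:

```python
# Sketch: flag strings in a citation database that look like variants of the
# canonical institution name. Variant examples are illustrative.
import difflib
import re

CANONICAL = "Case Western Reserve University"

def normalize(s):
    """Lowercase, expand a common abbreviation, strip punctuation."""
    s = s.lower()
    s = re.sub(r"\buniv\b\.?", "university", s)
    s = re.sub(r"[^a-z ]", " ", s)
    return " ".join(s.split())

def looks_like_variant(candidate, canonical=CANONICAL, threshold=0.85):
    ratio = difflib.SequenceMatcher(
        None, normalize(candidate), normalize(canonical)).ratio()
    return ratio >= threshold
```

This catches abbreviations and near-misses (including typo variants) while rejecting other institutions' names, but a human still has to confirm each hit before it goes on the official list.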
Does the university count the professional schools, the hospitals in our cases, the labs,
the institutes of specific subject areas as part of Case Western, or are they considered
their own institutions? So we have several professional schools on campus, which includes
medicine, dental, nursing, School of Social Work, and then the School of Law, so we need
to really determine, should they be separated out from us or should they be considered part
of Case Western? We also needed to determine how the faculty were naming themselves, either
with full name, nickname, initials, and that really started to spiral since everything
seemed to be impacting how we were being looked at as a university. Disambiguating naming variations and affiliation connections would affect not only the citation area of an international ranking but also the faculty-impact area of the methodology and criteria. That’s when I realized that, even though we may only directly impact that 20% for QS, the same underlying database feeds everything else that goes into it, which meant the project was going to be much, much larger than I anticipated. So I had to really keep track of everything,
and I certainly spiraled down into data and task overload, so I had to pull out of that
spiral. As an assessment officer, I needed to make a project plan. This project plan
has eight phases. It’s overwhelming but I’m going to go through it, and don’t worry, I’m
not really that good at getting all eight phases done. But each phase really touches
on each aspect of the faculty profile and aspect of what we needed to review, change,
and/or edit. Each phase includes a review of each school, starting with the ones that Kelvin Smith Library directly impacts, which are the School of Management, our engineering school, and
our College of Arts and Sciences. Then I followed that up with our professional schools, and
then we could target our institutional variations, faculty variations, and affiliations, which
would then include the review of the impact of the faculty before and after the changes.
I then added phases for integrating Orcid and for submitting the institutional name lists to the ranking organizations, to ensure that they had that information. That’s important, I
think a lot of schools don’t realize you should really be reaching out to these international
rankers or even national rankers to make sure that they have the appropriate institutional
lists if that’s what they’re reviewing. Even if they don’t review it, I send it off and
say, “Hey, just in case you need this, here’s our institutional affiliations.” From there,
I can tell you that we created even more offshoots to this project. So not all eight phases are
done; really, the first two – the institution and faculty affiliation variations – are the most important aspects of the project cycle. Last December we had five weeks to put together a list of all of our international collaborators – at least 1,000 of them over the past five years – to submit to QS. And this year we started in March 2018, reviewed another
600 collaborators in painstaking detail, and completed it over a six-month period instead of the five weeks, and we were able to report
that up to leadership in a much more timely fashion with minimal stress for us. So we did all this using Clarivate Analytics’
Web of Science and InCites, as well as Elsevier’s Scopus and SciVal. Those are the two systems
that feed into the ranking agencies, so it’s important not only to have these tools and
have access to these tools, but to understand how they work. So the first thing we did is
we started with institutional variations. I figured this was the low-hanging fruit;
we knew what our variations were – we couldn’t have more than a couple of hundred – and according to the first review of Web of Science’s Organization Enhanced list, we had about 215 variations.
I couldn’t imagine we had more than that, so I took this list, split it out into a spreadsheet,
sent it off to the professional schools, and emailed the respective library directors.
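The mechanics of maintaining that kind of running list can be sketched simply: merge what comes back from each school and report only what is newly seen, so the quarterly updates to Web of Science and Scopus stay small. (Some variant strings below come from the talk; others are made up.)

```python
# Sketch: keep a running list of institutional name variants current.

def merge_variants(running, *new_lists):
    """Case-insensitively add new variants; return (updated list, newly seen)."""
    seen = {v.strip().lower() for v in running}
    added = []
    for lst in new_lists:
        for v in lst:
            key = v.strip().lower()
            if key not in seen:
                seen.add(key)
                added.append(v.strip())
    return running + added, added

running = ["Case Western Reserve University", "CWRU"]
from_medicine = ["Case Western Reserve Univ School of Medicine", "CWRU"]
from_archives = ["Western Reserve University", "Case School of Applied Science"]

running, new = merge_variants(running, from_medicine, from_archives)
```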
After a few weeks, I received another 200 variations from the schools, due to the institutes,
clinics, affiliations to the university hospitals and foundations, and then I went to the University
Archives to see if there were other names, because I knew we were called other names
previously, because we used to be two schools prior to the 1960s. Between Western Reserve University and the Case School of Applied Science, we had another hundred years of variations to review. Almost two years later, I now have a running list
of 651 different variations, and it’s not just because of the name changes or the professional
schools. You also have to watch out for typos, not only in the name but in the location
of the school. I actually had to check and confirm that, no, there is no Cleveland, Sweden
out there, and confirm that at one point we were not Case Western Reserved or Berserved
University. These are the things that make this project a continuous journey, and over
the past year, with the adjustments that we made just in Web of Science, we were able
to see an increase of documents from 89,000 to over 110,000. And, yes, some of these were
due to additional publications, but that would mean over 21,000 publications were published
in a year, and I can tell you after doing a lot of assessment on this, the institution’s
never published that many documents ever in a year. So using the list I created, starting
in Web of Science, I am now able to send these updates to Scopus, and now I am able to update
this quarterly and send new updates of the variations and different versions of the school
names that I find, to both Web of Science and Scopus. It’s important to keep track of
this information because you are always going to find more versions of the name. One thing
that we are trying to implement now is to make sure that we curb this from happening
by making sure that the faculty and researchers are using a very set, common name of Case
Western Reserve University, not putting in their institute or putting in the Cleveland
Clinic first. We want them to be using their primary affiliation, which is Case Western
Reserve University, and not adding in additional information. So now that I had the school variations, I
decided, well it’s time for the faculty. What are we going to find here? It became even
more complicated. An author can be seen as J. Smith, J. A. Smith, John Andrew Smith,
just John Smith, Johnny Smith, and so on and so forth, and you need to make sure that this
is all correct. You have to ensure that the name also belongs to your faculty members,
so there are a lot of John Smiths out there and you want to make sure that you’re only
counting your John Smith. So for example, this is one of Case Western’s most prolific
authors, Liming Dai, and for the longest time he was labeled as such. However, there is the Scopus request to merge authors – which I don’t know if you can necessarily see, because it is white on white, pretty much. Scopus allows changes to faculty names and affiliations
without much oversight. Any institution can claim the faculty papers, so Liming Dai, whose
primary affiliation is Case Western but also works at other institutions, either in the
past or over the summer when he travels and he does summer sessions, those institutions
can now legitimately claim his papers, even though he is technically affiliated with Case
Western. The reason is that it’s not Scopus’s or Elsevier’s job to ensure that
the papers are affiliated to you, because all they see is, “Yeah, he did work for this
other school. How do we know?” It’s not their job to figure that out, it’s our job to make
sure that we’re accurate. So we need to make sure that we’re appropriately associating
those papers to our author. You also want to make sure that when you’re cleaning up
this profile, you’re only cleaning up your papers and not someone else’s. So I was able
to get Liming Dai’s profile corrected after several months of working with Elsevier, and
as of two weeks ago, he was correctly attributed back to Case Western and his other publications
were attributed to his other institutions. So you’re going to see that your Liming Dai
out there is going to have several different lines, because if he has a lot of papers being
published and he’s had a full, robust career at other institutions, he’s going to have
multiple lines coming out. So that’s something that you need to work for and make sure that
you’re correcting that. This is where I cannot stress enough Orcid IDs. Having that integration
for your faculty – and as you may or may not know, an Orcid ID follows a faculty member wherever they go; I’ve heard it called the social security number for their publications.
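The name-variant tangle described here can be sketched as follows; note that the generated forms are identical for every "John Smith," which is exactly why a persistent identifier such as an Orcid ID matters. (The name is an example, and this variant generator is my own simplification.)

```python
# Sketch: the written forms an author's name commonly takes in citation
# databases, to check which variants a profile should be claiming.

def name_forms(first, last, middle=None):
    forms = {
        f"{first} {last}",                             # John Smith
        f"{first[0]}. {last}",                         # J. Smith
    }
    if middle:
        forms.add(f"{first} {middle} {last}")          # John Andrew Smith
        forms.add(f"{first[0]}. {middle[0]}. {last}")  # J. A. Smith
    return forms

variants = name_forms("John", "Smith", middle="Andrew")
```

String forms alone cannot separate two authors who share them; the Orcid ID can.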
It sticks with them, it goes with them wherever they go, but they are the ones who have to
be updating it and cleaning that up wherever and whenever they go, which is another hurdle
to hit, and really I don’t have the time to go into all of that, maybe another day if
we want to really get into it. But the big thing is making sure that you’re cleaning
this information up as much as you can. So I’m going to go into a few of the updates
that we have seen so you can really see that the library can make impact doing cleanup
like this and making sure that you’re keeping up with it. So this is our QS rankings from 2014 to 2019.
As you can see, there was definitely a dip. I can tell you that, in the past year, with
just data cleanup we moved up 27 spots, which is actually pretty significant, at least for
the work that we’ve been doing in the past year. Next is THE, for the world rankings
we were at 158 last year and we are now 132, which means we moved up 26 spots. The next ranking, the Leiden Ranking, I’m especially proud of, for the simple fact that it is strictly bibliometrics.
We were sitting fairly low on this list at 143 last year. Now we are sitting at 57. We
moved up 86 spots. This is strictly bibliometric data, which means if you clean up your institutional
profiling, if you start cleaning up your author variations, you will see a major jump. This
is how we know that work we’ve been doing is actually fairly significant, because we
were able to move this much in one year, and that’s just bibliometric cleanup. It’s a lot
of work, it takes a lot of time to do it, but if you actually sit down and do it, even
for a couple hours a day or a couple hours a week, you’ll be able to see significant
changes within one year. And trust me, it was just me doing it for the full year of
bibliometric cleanup. Now I can say I actually have a team behind me, and our next steps
really have changed. We have more support on an institutional level, we have an international
ranking steering committee that my university librarian is a part of, I’m part of the working
group on the institutional level, where we now have a full team of support from International Affairs, Institutional Research, and a variety of other departments. And we’re really
working to make sure that we have a strategic plan moving forward for Case Western to ensure
that we are not only impacting on a small level of bibliometrics, but we’re actually
targeting this in a much more robust way. So it’s really important for the library to be involved – trust me, Web of Science and Scopus are really expensive, especially to have both, and the library cannot sustain that alone, especially since we saw a slight budget cut this past year. So it was really important for the library to be a part of
the steering committee and the working group to say, “The school needs to help invest into
this. If you want to see these changes continue, we need to be able to work in these tools,
we need to be able to have good relationship with these vendors so that we can make sure
that you are seeing these new changes and you’re able to see our rankings increase.”
And on the library level, I now have the support of the research services librarians who are
liaisons who actually work directly with the faculty members so that they can actually
start pushing the Orcid ID integration, as well as continuous process improvement. Now
I’m not saying that we were fantastic and everything moved up and changed; I mean, in ARWU we moved up one spot. And it moved up, that’s great, but we need to go back and look at the methodology and see what else we need to adjust. We actually saw our rankings drop slightly in NTU, as well as in U.S. News & World Report, but it’s just a matter
of figuring out what the methodology is, looking at what we need to be doing to make things
better. Like I said, we’re only about a year into this, and now that there’s a team behind me to move these efforts forward, we’re hoping to see more success in the future.
And the process is hard and it’s tiring, but anybody can do it, they really can. But we
hope to open the floor for discussion now. [Lauren]: Our emails are there, or you can
reach out to us right now and we can talk more about this. [Liz]: Thank you for your time. [Lauren]: Thank you.