4 Comments

"The vast majority of reviews are conducted to set the stage and motivate research contributions" is an unfortunate reality: Reviewing the literature never stops and has to be done at every level of the research with different goals: first the 'who' (purpose and authors' qualifications), the 'how' (method - reviewing how other people did it), the 'what' (reviewing what other people found out), and finally the 'why' (discussion - what do the results mean, if anything). The search techniques and places are of secondary importance, and because of the repetitive nature of the review, AI is well suited to help with automating the trivial parts of the process. There is also a class of literature reviews that are research papers in their own right, often produced by and for those trying to enter a field for the first time. AI can and will augment all of these systematic search and find activities from here on out but it is unlikely to be very useful as a fully autonomous agent. In practice, my (undergraduate computer and data science) students have the greatest difficulty with literature reviews - they just don't like to invest the time, and I get that. In my struggle to get them used to literature reviews, your Litmaps tool has been a god-send: it has boosted my students' literature review capabilities and motivation, and it has made their research much, much better - so thank you for that!


Thanks for sharing! And it's great to hear Litmaps has been especially helpful.

It's interesting to hear your perspective that AI probably won't become a fully autonomous agent capable of automating the entire lit review process. At this point, I find it very hard to predict the future. I used to feel confident that wherever creativity is involved, humans will be involved as well. People often argue that AI can only do what it is trained on, whereas humans are genuinely creative. But this argument also seems flawed -- after all, our creativity is limited by our own individual selves and life experiences. Maybe human input will always be critical to creating meaningful meta-analyses or coming up with novel ways to look at lit reviews. But what if that were learnable by AI too?


The past is not necessarily a good predictor of the future. Before the invention of the steam engine, you could have predicted a stationary economy for the next thousand years based on two millennia of zero per-capita growth. That is not what happened.

AI has the potential to completely change our societies, rendering many jobs obsolete. Historically, new jobs have replaced old ones, but this is not an iron law; it's an empirical regularity drawn from just two centuries of industrialization. Especially as this evolution moves faster than any historical precedent, and replaces intelligence as well as strength, we might not be able to adapt fast enough.

Or at least, it's quite naïve to assert that there can be no first time in history.


Indeed, it's very hard to predict. But it also seems that many times in history, groundbreaking things have happened at unprecedented rates and society was universally worried about them. Growth certainly accelerates at the current exponential rate (for tech), but it's hard to have an unbiased opinion as to how "radical" it is, since none of us were present for the other breakthroughs throughout history. So I sometimes find it worth playing devil's advocate and undercutting the general "hype" around AI, to try to get a balanced idea of its real impact.
