I've been thinking about this. I think the missing link there may be that an LLM can tell you what's on the page, but not what you think is on the page. That's where this exercise is so helpful: it identifies the gaps between what you thought you were doing and what you've actually done. But I'm sure there will be many ways to use LLMs to outline...
Maybe it could prompt you first to explain what you think is there, then analyze the draft and tell you what is actually there, all with the goal of helping you see which pieces might be missing and how the piece could be made stronger. I’ve often found that using ChatGPT this way gives me reasonable ideas for strengthening my writing. Giving it a specific menu of things to suggest might also help constrain it to good advice.
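For what it's worth, here's a rough sketch of the flow I mean, in Python. The `ask_llm` helper is just a stand-in for whichever LLM API you'd actually wire up, and the menu of suggestion types is only an example:

```python
# Rough sketch of the "say what you think is there, then compare" flow.
# ask_llm is a placeholder for whatever LLM API you happen to use
# (OpenAI, Anthropic, a local model, etc.).

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to your LLM of choice")

# Constrain the feedback to a specific menu of suggestion types.
SUGGESTION_MENU = [
    "a claim that is promised but never supported",
    "a paragraph whose point is unclear",
    "a section that tries to do more than one job",
    "a transition that skips a step in the argument",
]

def reflect_then_analyze(draft: str) -> str:
    # Step 1: the writer states what they *think* the piece does.
    intent = input("In a few sentences, what do you think this piece says and does? ")

    # Step 2: the LLM describes what is actually on the page.
    actual = ask_llm(
        "Summarize, paragraph by paragraph, what this draft actually says. "
        "Do not evaluate it yet.\n\n" + draft
    )

    # Step 3: compare the two and flag gaps, limited to the menu above.
    menu = "\n".join(f"- {item}" for item in SUGGESTION_MENU)
    return ask_llm(
        "Here is what the writer believes the draft does:\n" + intent +
        "\n\nHere is what the draft actually says:\n" + actual +
        "\n\nPoint out gaps between the two. Only raise issues of these kinds:\n" + menu
    )
```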
A big part of good feedback, I think, is drawing our attention to our writing in a new way. Even if the LLM doesn’t suggest how to fix the gaps, having it prompt you to reflect on the piece could still be valuable. Maybe I’ll try to whip something up and see if it’s useful at all...
All this said, it’s hard to imagine that anything machine-generated will come close to the influence of Alan’s attention and feedback.
I imagine you could come up with something that does that. But I'm not sure automating it gives us anything we can't get just by going through the exercise ourselves...
Can’t help but wonder whether an LLM might be able to generate a reverse outline for you. Would be curious to see how a GPT would do...
Might also be something that could be integrated into a tool like Lex.