You can outsource the grunt work to an LLM, not expertise
The more I use LLMs for programming, the more it seems to me that they can only be used successfully if you ask them to do things that you could do yourself.
This seems to be the case because:
you know exactly what you want/need and thus can exactly describe it;
you know exactly if the LLM is actually delivering quality code or not;
you know exactly whether something the LLM suggests that you hadn’t thought of actually makes sense.
This reminds me of my consulting years, when it was quite easy to predict whether a consulting project would be successful. If the client could have done it themselves, had they had the time, the project would always be successful. They knew exactly what they needed and could describe it to us, and most importantly, there was a very tight feedback loop between our intermediary outputs and their review. But when we were brought in and clients didn’t even understand what their problem was (but thought they knew), that is where things got difficult.
It seems to me that as long as people cannot communicate their needs clearly, developers will keep their jobs.
Now, this doesn’t mean that you cannot do things outside of your expertise with LLMs, but then you must use the LLM to teach you enough (alongside more traditional methods), or the task must be so trivial, so well-trodden, and so low-stakes that you can blindly trust the output.
I’ve used an LLM recently to write code to parse JSON and XML files, something I’ve done in the past and am quite happy to likely never have to do myself again. The output was quite good and only required minor corrections before it worked. To help the LLM generate correct output, I gave it one XML file as context.
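A minimal sketch of the kind of parsing code this task involves, using the {jsonlite} and {xml2} packages (the inline data here is invented purely for illustration; the actual files and fields from my project are not shown):

```r
# Parsing JSON and XML in R; {jsonlite} and {xml2} are the usual tools.
# The data below is made up for illustration only.
library(jsonlite)
library(xml2)

# JSON: fromJSON() turns a JSON string (or file path) into R objects.
json_txt <- '{"title": "An article", "year": 2024}'
record   <- fromJSON(json_txt)

# XML: read_xml() parses the document, xml_find_all() takes an XPath query.
xml_txt <- "<records>
  <record><title>An article</title><year>2024</year></record>
  <record><title>Another one</title><year>2023</year></record>
</records>"
doc    <- read_xml(xml_txt)
titles <- xml_text(xml_find_all(doc, ".//title"))
years  <- as.integer(xml_text(xml_find_all(doc, ".//year")))
```

Giving the LLM one real XML file as context matters because the XPath queries depend entirely on the document's actual structure.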
Another thing I asked the LLM to do was write code to get data from the OpenAlex API using the {openalexR} package. To help it, I gave it the package’s and the API’s documentation. Here again, the code worked flawlessly, and again, this is something I could have done myself, so my prompt was quite precise and I knew I had to give the LLM something to ensure it generated valid code.
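A hedged sketch of what such a query can look like with {openalexR}: `oa_query()` only builds the API request URL (no network call), while `oa_fetch()` with the same arguments would actually retrieve the records. The search term is just an example, not what I actually queried:

```r
# Building an OpenAlex query with {openalexR}. oa_query() constructs the
# API URL offline; oa_fetch() would download and parse the records.
# The search term is illustrative.
library(openalexR)

query_url <- oa_query(entity = "works", search = "bibliometrics")

# To actually fetch the data (requires a network connection):
# works <- oa_fetch(entity = "works", search = "bibliometrics")
```

Passing the package and API documentation as context is what keeps the LLM from inventing plausible-looking but non-existent arguments for these functions.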
Btw, I’ve been using Claude Sonnet 4 and it works quite well for R. But I also like Gemini because of its very large context window.