
[Paper Reading]: Self-Improving Alignment with LLM-as-a-Meta-Judge

    wjMcat · 106 days ago · 743 views

    Personal GitHub blog post: https://wj-mcat.github.io/agent-handbook/docs/paper-reading/2024/08/meta-reward-language-models-self-improving-alignment-with-llm-as-a-meta-judge

    Now that the LLM-as-a-Judge concept is so popular, the Judge's ability certainly can't be weak, so the authors propose a new method to improve the model's judgement capability.

    Method overview:

    1. Have the model generate responses through inference
    2. Have the same model evaluate the content of those answers
    3. Use the evaluation results to tune the model during training
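    The loop above can be sketched as follows. This is a minimal illustration only: the `generate`, `judge`, and `train_on` methods are hypothetical stand-ins for the model's actor, judge, and training interfaces, not an API from the paper.

```python
# Minimal sketch of the three-step self-improvement loop.
# `generate`, `judge`, and `train_on` are hypothetical stand-ins.

def self_improve_step(model, prompts, n_samples=4):
    training_data = []
    for prompt in prompts:
        # 1. Act: sample several candidate responses for the prompt
        responses = [model.generate(prompt) for _ in range(n_samples)]
        # 2. Judge: the same model scores each of its own responses
        scored = [(r, model.judge(prompt, r)) for r in responses]
        # 3. Collect (best, worst) pairs to adjust the model with
        scored.sort(key=lambda pair: pair[1], reverse=True)
        training_data.append((prompt, scored[0][0], scored[-1][0]))
    model.train_on(training_data)  # e.g. a DPO-style preference update
    return training_data
```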

    Result: it improves both the model's judgement and instruction-following abilities.

    Detailed method

    The method starts from a seed model (already SFT-ed, and capable of instruction following), then runs the following pipeline:

    • As-a-Actor: generate responses for the given inputs.
    • As-a-Judge: score each (input, response) pair (the Judgement), usually producing a CoT reasoning trace, which is an important basis for the score.
    • As-a-Meta-Judge: compare and score its own Judgements.
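    Since the Meta-Judge only ever compares two Judgements at a time, the pairwise wins need to be aggregated into an overall ranking. An Elo update is a standard way to do this for pairwise battles; the sketch below is an assumption for illustration, not necessarily the paper's exact aggregation scheme.

```python
def elo_update(rating_a, rating_b, a_wins, k=16):
    """One Elo update after a pairwise Meta-Judge battle.

    rating_a / rating_b: current ratings of Judgement A and B.
    a_wins: True if the Meta-Judge declared A the winner.
    k: step size controlling how much one battle moves the ratings.
    """
    # Expected score of A under the logistic Elo model
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    score_a = 1.0 if a_wins else 0.0
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1 - score_a) - (1 - expected_a))
    return new_a, new_b
```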

    The third stage is the core contribution; the corresponding prompt is shown below:

    Review the user’s question and the corresponding response, along with two judgments. Determine which judgment is more accurate according to the rubric provided below. The rubric used for the initial judgments is as follows:
    - Add 1 point if the response is relevant and provides some information related to the user’s inquiry, even if it is incomplete or contains some irrelevant content.
    - Add another point if the response addresses a substantial portion of the user’s question, but does not completely resolve the query or provide a direct answer.
    - Award a third point if the response answers the basic elements of the user’s question in a useful way, regardless of whether it seems to have been written by an AI Assistant or if it has elements typically found in blogs or search results.
    - Grant a fourth point if the response is clearly written from an AI Assistant’s perspective, addressing the user’s question directly and comprehensively, and is well-organized and helpful, even if there is slight room for improvement in clarity, conciseness or focus.
    - Bestow a fifth point for a response that is impeccably tailored to the user’s question by an AI Assistant, without extraneous information, reflecting expert knowledge, and demonstrating a high-quality, engaging, and insightful answer.
    
    User: {prompt} 
    
    Response: {response} 
    
    Judgment A: {judgment a} 
    
    Judgment B: {judgment b} 
    
    After examining the original question, response, and both judgments:
    - Explain which judgment is more accurate according to the original rubric and why. Consider factors such as adherence to the rubric, accuracy in evaluating the response, and consistency in applying the criteria.
    - Conclude with a clear statement of which judgment is better using the format: “Winner: [Judgement A | Judgement B]”
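    To use a template like this in practice, one would fill in the placeholders and then parse the required closing line. A hedged sketch: the abbreviated `META_JUDGE_TEMPLATE` below stands in for the full rubric prompt above, and the helper names are illustrative.

```python
import re

# Abbreviated stand-in for the full Meta-Judge rubric prompt above.
META_JUDGE_TEMPLATE = (
    "User: {prompt}\n\nResponse: {response}\n\n"
    "Judgment A: {judgment_a}\n\nJudgment B: {judgment_b}\n"
)

def build_meta_judge_prompt(prompt, response, judgment_a, judgment_b):
    """Fill the template's placeholders for one pairwise battle."""
    return META_JUDGE_TEMPLATE.format(
        prompt=prompt, response=response,
        judgment_a=judgment_a, judgment_b=judgment_b,
    )

def parse_winner(meta_judge_output):
    """Extract the verdict from the required closing line
    'Winner: [Judgement A | Judgement B]'; returns 'A', 'B', or None."""
    match = re.search(
        r"Winner:\s*\[?\s*Judg(?:e)?ment\s+([AB])\s*\]?",
        meta_judge_output, flags=re.IGNORECASE,
    )
    return match.group(1).upper() if match else None
```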
    

    Finally, the highest-quality Judgements are fed back into the model's training, which in turn improves its judgement ability.
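    One plausible way to turn the Meta-Judge's verdicts into preference data for that training step is sketched below; the function name and data layout are hypothetical, chosen only to illustrate the idea of keeping the winning Judgement as "chosen" and the loser as "rejected".

```python
def judgement_preference_pairs(battles):
    """Turn Meta-Judge verdicts into (chosen, rejected) judgement pairs.

    battles: iterable of (judgment_a, judgment_b, winner) tuples, where
    winner is "A" or "B" as emitted by the Meta-Judge stage.
    """
    pairs = []
    for judgment_a, judgment_b, winner in battles:
        if winner == "A":
            pairs.append((judgment_a, judgment_b))
        elif winner == "B":
            pairs.append((judgment_b, judgment_a))
        # ties or unparsable verdicts are simply dropped
    return pairs
```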
