• mmhmm@lemmy.ml
    12 days ago

    I agree that the way these conclusions were developed is trash; however, there is real value in understanding the impact alignment has on a model.

    There is a reason public LLMs don’t disclose how to make illegal or patented drugs, and why they shy away from difficult topics like genocide.

    It isn’t by accident; they were aligned by corporations to respect certain views of reality. All an LLM does is barf out a statistically viable response to a prompt. If those responses are weighted, you deserve to know how.
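
    To make the point concrete: a model just samples the next token from a probability distribution, and alignment tuning effectively shifts the scores behind that distribution. A minimal sketch (toy tokens and a hand-applied penalty standing in for alignment, not any real model’s internals):

    ```python
    import math
    import random

    def softmax(logits):
        # Convert raw scores into a probability distribution
        m = max(logits.values())
        exps = {tok: math.exp(v - m) for tok, v in logits.items()}
        total = sum(exps.values())
        return {tok: e / total for tok, e in exps.items()}

    # Hypothetical next-token scores a model might produce for some prompt
    logits = {"yes": 2.0, "no": 1.5, "refuse": 0.5}

    # Alignment tuning effectively re-weights these scores; here we
    # penalize one token by hand to mimic that weighting
    aligned = dict(logits)
    aligned["no"] -= 3.0

    probs = softmax(aligned)
    # Sample the "statistically viable" response from the shifted distribution
    choice = random.choices(list(probs), weights=list(probs.values()))[0]
    ```

    The output distribution still looks like an ordinary prediction to the user; the re-weighting that shaped it is invisible unless it’s disclosed.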