Pinned repositories
- llm-misinformation/llm-misinformation: The dataset and code for the ICLR 2024 paper "Can LLM-Generated Misinformation Be Detected?"
- llm-misinformation/llm-misinformation-survey: Paper list for the survey "Combating Misinformation in the Age of LLMs: Opportunities and Challenges" and the initiative "LLMs Meet Misinformation", accepted by AI Magazine 2024
- llm-editing/editing-attack: Code and dataset for the paper "Can Editing LLMs Inject Harm?"
- llm-authorship/survey: Paper list for the survey "Authorship Attribution in the Era of Large Language Models: Problems, Methodologies, and Challenges"
- camel-ai/agent-trust: The code for "Can Large Language Model Agents Simulate Human Trust Behaviors?"
- pygod-team/pygod: A Python library for graph outlier detection (anomaly detection)