Apple's artificial intelligence feature is generating fake words in notification summaries, creating confusion for users who rely on these alerts for important information.
The tech giant rolled out Apple Intelligence across its devices with promises of smarter, more helpful summaries. Instead, users report that the AI occasionally invents words that don't exist in the original notifications. This hallucination problem undermines the feature's core purpose. Apple designed notification summaries to save users time by condensing messages into brief overviews. When the AI fabricates terminology, users lose trust in the summaries and must verify information by opening full notifications anyway.
The issue highlights a broader challenge with generative AI systems: they sometimes confidently produce plausible-sounding content that doesn't reflect reality. Apple built the feature to run entirely on-device, a choice that protects user privacy and reduces latency but also means summaries come from a smaller model than cloud-based systems typically use. Whatever the cause, the problem persists.
Apple has not publicly addressed the hallucination issue or announced a fix. Users frustrated with unreliable summaries can disable the feature in the Settings app. The company faces pressure to close this reliability gap before the technology becomes a standard part of iPhone workflows. For consumers considering an upgrade specifically for Apple Intelligence, the current performance raises legitimate doubts about whether the feature delivers on its promises.
