Neural Networks Uncovering Fraud: A New Era in AI Rule Discovery

How a Neural Network Learned Its Own Fraud Rules: A Neuro-Symbolic AI Experiment

Most neuro-symbolic systems inject rules written by humans. But what if a neural network could discover those rules itself?

In this experiment, a hybrid neural network was extended with a differentiable rule-learning module that automatically extracts IF-THEN fraud rules during training. The model was tested on the Kaggle Credit Card Fraud dataset, which has a fraud rate of 0.17%. The model learned interpretable rules such as:

IF V14 < −1.5σ AND V4 > +0.5σ → Fraud

Here, σ denotes the feature standard deviation after normalization. The rule learner achieved a ROC-AUC of 0.933 ± 0.029, while maintaining 99.3% fidelity to the neural network’s predictions.
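Read literally on standardized features, a rule like this reduces to a pair of threshold checks. A minimal sketch (the thresholds come from the rule above; the sample transactions are hypothetical):

```python
def rule_fires(x):
    """Check the learned rule IF V14 < -1.5 AND V4 > +0.5 on a
    transaction whose features are z-scored (zero mean, unit
    variance), so thresholds are in units of sigma."""
    return bool(x["V14"] < -1.5 and x["V4"] > 0.5)

# Hypothetical z-scored transactions:
fraud_like = {"V14": -2.3, "V4": 1.1}
normal_like = {"V14": 0.2, "V4": -0.4}
print(rule_fires(fraud_like), rule_fires(normal_like))  # True False
```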

Most interestingly, the model independently rediscovered V14 — a feature long known by analysts to correlate strongly with fraud — without being instructed to look for it.

What the Model Discovered

After up to 80 epochs of training, the rule learner produced clean, readable rules in two of the seeds:

  • Seed 42 — Cleanest Rule (5 conditions, conf=0.95)
    Learned Fraud Rule: IF V14 < −1.5σ AND V4 > +0.5σ AND V12 < −0.9σ AND V11 > +0.5σ AND V10 < −0.8σ THEN FRAUD
  • Seed 7 — Complementary Rule (8 conditions, conf=0.74)
    Learned Fraud Rule: IF V14 < −1.6σ AND V12 < −1.3σ AND V4 > +0.3σ AND V11 > +0.5σ AND V10 < −1.0σ AND V3 < −0.8σ AND V17 < −1.5σ AND V16 < −1.0σ THEN FRAUD

In both cases, low values of V14 sit at the heart of the logic — a striking convergence given zero prior guidance.

From Injected Rules to Learned Rules — Why It Matters

Every fraud model has a decision boundary, but fraud teams operate on rules. The gap between the two (what the model learned versus what analysts can read, audit, and defend to a regulator) is critical for compliance.

Hand-coded rules encode existing knowledge, which works well when fraud patterns are stable. However, when fraud patterns shift or features are anonymized, it becomes essential for the model to surface signals that haven’t been previously identified.

The Architecture: Three Learnable Pieces

The architecture keeps a standard neural network intact while adding a second path that learns symbolic rules explaining the network’s decisions. The two paths run in parallel:

  • Learnable Discretizer: Converts continuous features into binary inputs using a soft sigmoid threshold.
  • Rule Learner Layer: Produces rules as weighted combinations of binarized features.
  • Temperature Annealing: Gradually sharpens the soft thresholds over training so that rules crystallize into hard, readable conditions.
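As a simplified illustration of the first two pieces, the soft discretizer and a soft AND over binarized features can be sketched in a few lines of NumPy. The thresholds, signs, weights, and temperature below are illustrative values, not the trained parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_discretize(x, thresholds, signs, temperature):
    """Soft binarization of continuous features.

    A sign of +1 encodes 'x > t', -1 encodes 'x < t'; as the
    temperature shrinks toward 0, the sigmoid approaches a hard step.
    """
    return sigmoid(signs * (x - thresholds) / temperature)

def soft_and(bits, weights):
    """Soft AND: a weighted geometric combination of the binarized
    features. Weights near 1 keep a condition, near 0 drop it."""
    return float(np.prod(bits ** weights))

# Hypothetical z-scored transaction, checked against the two-condition
# rule IF V14 < -1.5 AND V4 > +0.5:
x = np.array([-2.0, 0.9])   # V14, V4
t = np.array([-1.5, 0.5])   # learnable thresholds
s = np.array([-1.0, 1.0])   # '<' for V14, '>' for V4
bits = soft_discretize(x, t, s, temperature=0.1)
score = soft_and(bits, weights=np.ones(2))
```

Because every step is a smooth function, gradients flow through the thresholds and weights, which is what makes the rule conditions learnable by backpropagation.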

Three-Part Loss: Detection + Consistency + Sparsity

The full training objective includes:

  • L_BCE: Weighted Binary Cross-Entropy to focus on fraud samples.
  • L_consistency: Ensures rules agree with the MLP’s predictions where confidence is high.
  • L_sparsity: Encourages the model to keep rules simple by applying L1 penalties on raw weights.
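A minimal sketch of how the three terms might combine. All weighting constants here (pos_weight, the 0.5 and 0.01 multipliers, the 0.9 confidence threshold) are illustrative assumptions, not values taken from the experiment:

```python
import numpy as np

def bce(y_true, p, pos_weight=100.0):
    """Weighted binary cross-entropy; pos_weight up-weights the rare
    fraud class (hypothetical value, motivated by the ~0.17% fraud rate)."""
    eps = 1e-7
    p = np.clip(p, eps, 1 - eps)
    return float(np.mean(-pos_weight * y_true * np.log(p)
                         - (1 - y_true) * np.log(1 - p)))

def consistency(rule_p, mlp_p, conf_threshold=0.9):
    """Penalize disagreement with the MLP only where the MLP is
    confident (predicted probability near 0 or 1)."""
    confident = np.maximum(mlp_p, 1 - mlp_p) > conf_threshold
    if not confident.any():
        return 0.0
    return float(np.mean((rule_p[confident] - mlp_p[confident]) ** 2))

def sparsity(weights):
    """L1 penalty on raw rule weights keeps rules short."""
    return float(np.abs(weights).sum())

# Hypothetical mini-batch:
y = np.array([0.0, 1.0, 0.0])
rule_p = np.array([0.1, 0.8, 0.2])
mlp_p = np.array([0.05, 0.95, 0.5])
w = np.array([0.9, 0.0, 0.3])
loss = bce(y, rule_p) + 0.5 * consistency(rule_p, mlp_p) + 0.01 * sparsity(w)
```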

Results: Does Rule Learning Work — and What Did It Find?

The experimental setup used the Kaggle Credit Card Fraud dataset, comprising 284,807 transactions. The results showed that the rule learner performed slightly below the pure neural baseline but provided valuable explainability.

  • Rule Fidelity: 0.993 ± 0.001 — excellent
  • Rule Coverage: 0.811 ± 0.031 — good
  • Rule Simplicity: 1.7 ± 2.1 conditions on average — with the active seeds using 5 and 8 conditions, comfortably readable
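Fidelity and coverage are straightforward agreement statistics: fidelity is the fraction of samples where the rule path agrees with the MLP, and coverage is the fraction of samples on which at least one rule fires. A sketch of both (the toy batch is hypothetical):

```python
import numpy as np

def fidelity(rule_pred, mlp_pred):
    """Fraction of samples where the rule path and the MLP agree."""
    return float(np.mean(rule_pred == mlp_pred))

def coverage(rule_fires):
    """Fraction of samples on which at least one rule fires;
    rule_fires has shape (n_samples, n_rules)."""
    return float(np.mean(rule_fires.any(axis=1)))

rule_pred = np.array([0, 1, 1, 0, 0])
mlp_pred  = np.array([0, 1, 0, 0, 0])
fires = np.array([[1, 0], [0, 1], [1, 1], [0, 0], [1, 0]])
print(fidelity(rule_pred, mlp_pred), coverage(fires))  # 0.8 0.8
```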

The Extracted Rules — What the Gradient Found

Both rules produced by the model highlighted V14, confirming its significance without prior instruction. The model demonstrated the capacity to rediscover important features autonomously.

Four Things to Watch Before Deploying This

Considerations include:

  • Annealing speed: Annealing too fast freezes rules before they are right; too slow leaves them soft and unreadable.
  • n_rules: Controls capacity and interpretability; too few rules miss patterns, too many undermine readability.
  • Consistency threshold: Ensure the MLP is well-calibrated for effective rule extraction.
  • Rule auditing: Required after each retrain cycle to maintain compliance.
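On the annealing point, one common choice is an exponential schedule that decays the temperature from a soft starting value to a hard final one. The schedule shape and constants below are illustrative assumptions, not the ones used in the experiment:

```python
def anneal_temperature(epoch, n_epochs=80, t_start=1.0, t_end=0.05):
    """Exponential temperature decay from t_start to t_end over
    n_epochs. High temperature keeps thresholds soft and trainable;
    low temperature hardens them into readable IF-THEN conditions."""
    frac = epoch / max(n_epochs - 1, 1)
    return t_start * (t_end / t_start) ** frac

# Temperature falls monotonically from 1.0 toward 0.05 over 80 epochs:
schedule = [anneal_temperature(e) for e in range(80)]
```

Plotting (or auditing) this schedule alongside rule readability per epoch is a cheap way to check whether annealing is too fast or too slow for a given dataset.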

Rule Injection vs. Rule Learning — When to Use Which

The rule learner adds minimal code but requires careful validation before its extracted rules can be trusted. It is a tool, not a turnkey solution: any claim about learned-rule behavior should rest on multi-seed evaluation.

Overall, this experiment demonstrates the potential of neuro-symbolic AI to combine statistical learning with human-readable logic, paving the way for more transparent and adaptable fraud detection systems.
