# Event Message Detector
## Model Description
The Event Message Detector is a fine-tuned token classification model based on `xlm-roberta-base`. It is designed to process real-time message streams from chat applications (e.g., Slack, IRC) and detect conversations that can be converted into calendar events. The model identifies event-related messages within a sliding window of recent messages, so that scheduling-relevant interactions can be extracted.
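The exact serialization of a message window is not documented in this card, but the Usage example below suggests a `[MESSAGE] [user]: text` convention. A minimal sketch of one plausible windowing helper follows; the name `format_window` and the joining scheme are assumptions, not part of the released code:

```python
# Hypothetical helper: serialize the most recent messages into a single
# input string using the "[MESSAGE] [user]: text" convention from the
# Usage section. Not part of the released model code.
def format_window(messages, window_size=15):
    """Join the last `window_size` (user, text) pairs into one string."""
    recent = messages[-window_size:]
    return " ".join(f"[MESSAGE] [{user}]: {text}" for user, text in recent)

window = format_window([
    ("user1", "Are you free tomorrow?"),
    ("user2", "Yes, let's have a call at 10 AM."),
])
```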
## Intended Use
### Direct Use
This model is intended for real-time detection of event-related conversations in multi-user chat environments. It can be integrated into chat applications to automatically identify and extract discussions pertinent to scheduling events, such as meetings or calls.
### Downstream Use
Developers can fine-tune this model further for specific domains or integrate it into larger systems that manage event scheduling, automate calendar entries, or analyze communication patterns.
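As a rough illustration, further fine-tuning can follow the standard `transformers` token-classification recipe. Everything in this sketch is an assumption rather than the original training setup: `my_tokenized_dataset` is a placeholder for a dataset already tokenized with per-token `labels`, and the hyperparameters are illustrative.

```python
# Hypothetical fine-tuning sketch. `my_tokenized_dataset` is a placeholder
# for a tokenized dataset with per-token labels (0 = not event-related,
# 1 = event-related); hyperparameters are illustrative only.
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model = AutoModelForTokenClassification.from_pretrained(
    "oleksiydolgykh/event-message-detector", num_labels=2)
tokenizer = AutoTokenizer.from_pretrained("oleksiydolgykh/event-message-detector")

args = TrainingArguments(
    output_dir="event-detector-finetuned",
    per_device_train_batch_size=8,
    num_train_epochs=3,
)
trainer = Trainer(model=model, args=args, train_dataset=my_tokenized_dataset)
trainer.train()
```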
### Out-of-Scope Use
The model is not designed for general-purpose natural language understanding tasks unrelated to event detection. It should not be used for sentiment analysis, topic modeling, or other unrelated NLP tasks without appropriate fine-tuning.
## Model Details
- Model Type: Token Classification
- Base Model: `xlm-roberta-base` (multilingual, 277M parameters)
- Training Data: Labeled chat messages indicating event-related conversations
- Training Procedure: Fine-tuned with a sliding window of 15 messages, using weighted cross-entropy loss (see the sketch after this list)
- Evaluation Metrics: ROC-AUC, F1-score, precision, recall
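A short sketch of the weighted cross-entropy objective named above. The actual class weights used in training are not published; the values here are placeholders that up-weight the (presumably rarer) event class:

```python
# Weighted cross-entropy over per-token logits. The weights [1.0, 5.0]
# are placeholders, not the values used to train this model.
import torch
import torch.nn as nn

class_weights = torch.tensor([1.0, 5.0])  # placeholder class weights
loss_fn = nn.CrossEntropyLoss(weight=class_weights)

# logits: (num_tokens, num_labels), labels: (num_tokens,)
logits = torch.randn(4, 2)
labels = torch.tensor([0, 1, 0, 0])
loss = loss_fn(logits, labels)
```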
## Usage
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch

# Load model and tokenizer
model = AutoModelForTokenClassification.from_pretrained("oleksiydolgykh/event-message-detector")
tokenizer = AutoTokenizer.from_pretrained("oleksiydolgykh/event-message-detector")
tokenizer.truncation_side = "left"  # keep the most recent messages when the input is truncated

# Example message
message = "[MESSAGE] [user1]: Let's have a meeting tomorrow at 10 AM."

# Tokenize input
inputs = tokenizer(message, return_tensors="pt")

# Get model predictions
with torch.no_grad():
    outputs = model(**inputs)

# Average the per-token probability of the event class (label index 1)
logits = outputs.logits  # shape: (batch, seq_len, num_labels)
prediction = torch.softmax(logits, dim=-1)[..., 1].mean()
```
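Since the model was trained on windows of 15 messages and the tokenizer truncates from the left (dropping the oldest messages first), scoring a whole window might look like the sketch below, reusing the `model` and `tokenizer` loaded above. The `max_length` of 512 and the 0.5 threshold are assumptions, not published operating points:

```python
# Score a multi-message window. max_length=512 and the 0.5 threshold
# are illustrative assumptions, not documented settings.
window = (
    "[MESSAGE] [user1]: Are you free tomorrow? "
    "[MESSAGE] [user2]: Yes, let's have a call at 10 AM."
)
inputs = tokenizer(window, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits
score = torch.softmax(logits, dim=-1)[..., 1].mean()
is_event = score.item() > 0.5  # illustrative decision threshold
```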