How to Detect Hate Speech in Text using PHP
It’s difficult for content moderation teams to catch hate speech in reviews, comments, and other user-generated text on their own, especially when there’s a high volume of content to sift through.
Thankfully, the free-to-use API covered in this article lets you add a layer of programmatic content moderation to your applications that automatically detects hate speech in a text string. Each response includes a score between 0.0 and 1.0, where higher values indicate a higher likelihood that the text contains hate speech, helping you flag content for your human moderation team to assess as a top priority.
You can implement this API by following the simple instructions below.
First, run the following command to install the SDK:
composer require cloudmersive/cloudmersive_nlp_api_client
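If you prefer to declare the dependency in your project's composer.json rather than running the command, the equivalent entry would look roughly like this (the version constraint is illustrative; pin whichever release Composer actually installs for you):
{
    "require": {
        "cloudmersive/cloudmersive_nlp_api_client": "*"
    }
}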
After that, use the following copy-and-paste PHP code to structure your API call:
<?php
require_once(__DIR__ . '/vendor/autoload.php');
// Configure API key authorization: Apikey
$config = Swagger\Client\Configuration::getDefaultConfiguration()->setApiKey('Apikey', 'YOUR_API_KEY');
$apiInstance = new Swagger\Client\Api\AnalyticsApi(
    new GuzzleHttp\Client(),
    $config
);
$input = new \Swagger\Client\Model\HateSpeechAnalysisRequest(); // \Swagger\Client\Model\HateSpeechAnalysisRequest | Input hate speech analysis request

try {
    $result = $apiInstance->analyticsHateSpeech($input);
    print_r($result);
} catch (Exception $e) {
    echo 'Exception when calling AnalyticsApi->analyticsHateSpeech: ', $e->getMessage(), PHP_EOL;
}
?>
That’s all the code you’ll need. To authenticate your API calls for free, register a free account on our website and paste your API key into the appropriate parameter on the $config line above. A free account provides a limit of 800 API calls per month with no additional commitments; your call total resets at the start of each month.
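The example above sends an empty request, so in practice you’ll also want to populate the request with the text you’re screening and act on the score that comes back. Below is a minimal sketch of that pattern; the setTextToAnalyze setter, the getHateSpeechScoreResult getter, and the 0.8 review threshold are assumptions based on the Swagger-generated model, so check the model classes in your installed SDK for the exact names:
<?php
require_once(__DIR__ . '/vendor/autoload.php');

// NOTE: the setter/getter names used below are assumptions based on the
// Swagger-generated model classes; verify them against the Model directory
// in your installed SDK before using this in production.
$config = Swagger\Client\Configuration::getDefaultConfiguration()->setApiKey('Apikey', 'YOUR_API_KEY');
$apiInstance = new Swagger\Client\Api\AnalyticsApi(
    new GuzzleHttp\Client(),
    $config
);

$input = new \Swagger\Client\Model\HateSpeechAnalysisRequest();
$input->setTextToAnalyze('Example user comment to screen before it is published.');

try {
    $result = $apiInstance->analyticsHateSpeech($input);

    // The score is between 0.0 and 1.0; higher values mean the text is more
    // likely to contain hate speech.
    $score = $result->getHateSpeechScoreResult();

    // The 0.8 cutoff is an arbitrary example threshold; tune it to fit your
    // moderation workflow.
    if ($score >= 0.8) {
        echo 'Flag for priority human review (score: ' . $score . ')', PHP_EOL;
    } else {
        echo 'No action needed (score: ' . $score . ')', PHP_EOL;
    }
} catch (Exception $e) {
    echo 'Exception when calling AnalyticsApi->analyticsHateSpeech: ', $e->getMessage(), PHP_EOL;
}
?>
A pattern like this lets the API make the first pass over incoming content while your moderation team only reviews the items that exceed the threshold.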