Domain-Specific AI
S2W is dedicated to the research and development of domain-specific AI.

By leveraging proprietary and specialized data within organizations, S2W builds a unique AI data ecosystem tailored to each domain, creating new business value.

At the core of domain-specific AI lies the ability to define the data structure of a specific domain—whether industry or organization—and to systematically model the relationships between data points. This process enables the construction of a Knowledge Graph, which allows for deep and precise analysis of data interconnections.

The resulting domain-customized data operation system goes beyond simple search functions. It evaluates causal relationships and prioritizes data based on significance, providing advanced intelligence that directly supports strategic decision-making.

S2W’s domain-specific AI is powered by three core components: Domain-specialized language models, Knowledge Graphs, and Generative AI.

Domain-Specific LLM
Building Language Models That Understand Your Data and Environment

S2W develops domain-specialized Large Language Models (LLMs) with exceptional capabilities in processing and understanding data within specific industries and domains. Our LLMs are designed to reflect each organization’s unique data and operational environment, delivering highly customized and optimized data operation solutions.

The performance of an LLM is heavily influenced by the quality and quantity of its training data. Therefore, effective data analysis, preprocessing, and refinement are essential. Furthermore, each dataset varies in source, nature, and quality, requiring a flexible and adaptive approach to identify and collect the most relevant data for the target domain.

Building specialized LLMs for areas such as cybersecurity, healthcare, and finance often goes beyond the capabilities of publicly available internet data. S2W overcomes this limitation by leveraging domain-specific sources such as expert publications, research papers, technical reports, source code, and even data from dark web forums. We also incorporate proprietary internal organizational data into our training pipelines.

Our approach includes not only data collection but also a thorough data cleaning process—removing noise, filtering duplicates, and applying accurate labeling. In addition, we utilize data augmentation techniques to enrich underrepresented datasets. Through this combination of high-quality domain data acquisition and refinement, S2W builds LLMs that deliver high efficiency and high performance.
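As a minimal sketch of what such a refinement pass can look like, assuming raw documents arrive as plain strings: the snippet below normalizes whitespace, drops low-content pages, and removes exact duplicates by content hash. The threshold and hashing strategy are illustrative stand-ins, not S2W's actual pipeline.

```python
import hashlib
import re

def refine_corpus(raw_docs, min_chars=200):
    """Illustrative refinement pass over raw documents."""
    seen = set()
    refined = []
    for doc in raw_docs:
        text = re.sub(r"\s+", " ", doc).strip()   # collapse whitespace noise
        if len(text) < min_chars:                 # drop low-content pages
            continue
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in seen:                        # filter exact duplicates
            continue
        seen.add(digest)
        refined.append(text)
    return refined
```

In practice this skeleton would be extended with near-duplicate detection and model-assisted labeling, alongside the augmentation techniques described above.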

Domain-Specific Knowledge Graphs
S2W builds domain-specific Knowledge Graphs (KGs) that define and map the relationships between individual data elements.

These graphs are grounded in domain-specialized Large Language Models (LLMs), which provide the semantic foundation for accurately understanding complex domain data.

Our Knowledge Graphs integrate both structured and unstructured data from across internal and external sources. By doing so, they automatically identify and represent relationships between disparate data elements, enabling a unified understanding of organizational and industry-specific contexts.
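As a minimal illustration of this idea, assuming relation triples have already been extracted upstream (the triples below are invented), a directed graph makes the connections between disparate elements directly queryable; networkx is used here purely for demonstration.

```python
import networkx as nx

# Hypothetical triples an extraction model might emit from mixed sources.
triples = [
    ("ActorX", "operates", "ForumY"),
    ("ForumY", "hosts", "LeakZ"),
    ("LeakZ", "affects", "CompanyA"),
]

kg = nx.DiGraph()
for subject, relation, obj in triples:
    kg.add_edge(subject, obj, relation=relation)

# Connectivity queries unify otherwise siloed records, e.g. everything
# reachable from a given threat actor:
print(nx.descendants(kg, "ActorX"))  # {'ForumY', 'LeakZ', 'CompanyA'}
```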

With this foundation, S2W enables refined semantic analysis and automated data operations that reflect the nuances of each domain. These intelligently designed graphs break down traditional data silos, create consistent and connected data environments, and support advanced analytics and automation.

Use Case: DarkBERT

DarkBERT is the world's first Dark Web-specialized AI language model. A language model is an AI model that understands human language and carries extensive pre-trained knowledge, making it highly capable of solving a wide range of natural language processing tasks. DarkBERT in particular excels at processing and analyzing the unstructured data found on the Dark Web. Whereas similar encoder language models struggle with the diverse vocabulary and structural variety of Dark Web text, DarkBERT was trained specifically to understand the illicit content hosted there: it was built by further pre-training a RoBERTa model with Masked Language Modeling (MLM) on text collected from the Dark Web.
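A minimal sketch of that further pre-training step, using the HuggingFace transformers API; the placeholder corpus and the hyperparameters shown are illustrative, not DarkBERT's actual training configuration.

```python
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Start from the public RoBERTa checkpoint and continue pre-training it
# with the Masked Language Modeling objective.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# `darkweb_texts` stands in for the refined Dark Web corpus.
darkweb_texts = ["placeholder page text collected from an onion forum ..."]
train_set = [tokenizer(t, truncation=True, max_length=512) for t in darkweb_texts]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="darkbert-mlm", num_train_epochs=1),
    train_dataset=train_set,
    # The collator randomly masks 15% of tokens in each batch.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()
```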

Collecting the corpus is the fundamental challenge in training DarkBERT. S2W is renowned for its ability to collect and analyze data from the Dark Web, including mirrored "doppelganger" sites. By removing duplicates and low-density pages, it accumulated a Dark Web text corpus suitable for training that still measures a massive 5.83 GB after refinement.

DarkBERT starts from an existing large-scale language model and undergoes further post-training on domain-specific data. It excels at handling the unstructured data typical of the anonymous web, where information extraction is often challenging, and it is adept at inferring context. DarkBERT can also be used to detect and classify the various criminal activities occurring on the anonymous web and to extract crucial threat information.
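A sketch of how such an encoder can be queried once trained: the fill-mask pipeline below exercises the MLM capability directly. The model identifier is an assumption (the released checkpoint is access-gated, so a local path may need to be substituted), and the example sentence is invented.

```python
from transformers import pipeline

# Assumed identifier for the access-gated released checkpoint; replace
# with a local path if needed.
fill = pipeline("fill-mask", model="s2w-ai/DarkBERT")

# Because pre-training covered Dark Web text, the encoder can infer the
# domain context behind masked tokens in underground-forum language.
for pred in fill("The stolen credit card dumps were posted on the <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```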

Development Process of DarkBERT
Use Case: Others
Customer-Specific Fine-Tuning and Classification

DarkBERT can be customized and tuned to meet the specific needs of users. It can process a vast array of both internal and external unstructured data, filtering and refining only the desired information from extensive datasets according to user preferences.
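A hedged sketch of what such customer-specific tuning can look like: fine-tuning an encoder checkpoint for binary relevance classification. The texts, labels, and base checkpoint are placeholders; actual customer data and tuning recipes would differ.

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Placeholder labeled examples: 1 = relevant to the customer, 0 = noise.
texts = ["bid results for the harbor bridge project ...",
         "unrelated forum chatter ..."]
labels = [1, 0]

# A DarkBERT-style checkpoint would be used in practice; roberta-base
# keeps this sketch self-contained.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2)

train_set = [
    dict(tokenizer(t, truncation=True, padding="max_length", max_length=128),
         labels=label)
    for t, label in zip(texts, labels)
]

Trainer(
    model=model,
    args=TrainingArguments(output_dir="customer-filter", num_train_epochs=1),
    train_dataset=train_set,
).train()
```

The resulting classifier can then pre-select the meaningful subset of a large document stream before any downstream analysis.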

Customer A (Industry: Construction)

[Pain point]
There is a wealth of diverse language data available on the web that is crucial for corporate decision-making. However, many companies face challenges in directly scraping and analyzing this data due to insufficient internal infrastructure, especially a lack of expertise in processing unstructured language data. Even when a company possesses language processing expertise, handling domain-specific data can be highly challenging, requiring specialized tuning techniques.
(Example: Creating a tuned DarkBERT model for the dark web)

[Challenge]
The need arose to classify specific data or extract insights for decision-making from the vast amount of unstructured language data generated internally within the company. However, this data is highly domain-specific, making it exceptionally difficult to process effectively with general-purpose technologies.

[Result of Adoption]
Domain-specific language models significantly reduce the time spent on data refinement by automatically pre-selecting meaningful data when extracting insights from large datasets. Moreover, when specific statistics are extracted from the data, language models that have pre-refined it improve the reliability of those statistics. This classification and refinement of domain-specific data plays a crucial role in enabling companies to make effective, data-driven decisions.

Integration with Open LLM

DarkBERT plays a crucial role in the enterprise adoption of Large Language Models (LLMs) such as OpenAI's ChatGPT. Companies increasingly want to leverage internal and external datasets conversationally, with an LLM generating responses grounded in that data. To achieve this, "Retrieval-Augmented Generation" (RAG), which grounds answer generation in retrieved documents, has gained significant attention. However, the sheer volume of data, strong domain-specific characteristics (including domain-specific terminology), and the presence of irrelevant data all reduce search efficiency and accuracy.

DarkBERT, as a "domain-specific encoder model," can address these issues in two key aspects:

(1) Domain-Specific Data Refinement and Classification:
DarkBERT utilizes models tuned to match the characteristics of a company's data. This allows it to automatically classify essential data relevant to decision-making according to the specific data features of the enterprise. Consequently, it enhances search accuracy and improves the quality of LLM responses.

(2) Domain-Specific Embedding:
One critical element of RAG is performing meaning-based searches, which necessitates appropriately embedding documents. General language models often lack an adequate understanding of data with strong domain-specific characteristics, making it challenging to generate embeddings that reflect the correct meaning. Models like DarkBERT, which undergo domain-specific tuning, enable the creation of high-quality embeddings. This, in turn, significantly boosts search accuracy for user queries.
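A minimal sketch of such embedding-based semantic search, assuming mean pooling over the encoder's final hidden states; the checkpoint name is again a placeholder for a domain-tuned encoder.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")  # placeholder checkpoint
encoder = AutoModel.from_pretrained("roberta-base")

def embed(texts):
    """Mean-pool final hidden states into one vector per document."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state   # (batch, tokens, dim)
    mask = batch["attention_mask"].unsqueeze(-1)      # exclude padding tokens
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

docs = ["ransomware group claims a new victim", "quarterly earnings report"]
query_vec = embed(["who was targeted by ransomware?"])
scores = torch.nn.functional.cosine_similarity(query_vec, embed(docs))
print(docs[int(scores.argmax())])  # the semantically closest document
```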

Dark Web Specialized Generative AI

DarkCHAT is a specialized generative AI model embedded within XARVIS, a Dark Web monitoring solution. XARVIS required an effective system to refine and present the information users seek on the Dark Web. DarkCHAT enables users to obtain threat intelligence related to their areas of interest: leveraging the collected data, it derives new intelligence and gives users access to the data they want with a single command.

Unlike commercially available language models, which cannot directly access the Dark Web and rely on curated Dark Web news from surface web sources, DarkCHAT stands apart as a real-time generative AI specialized for the Dark Web. It provides vivid, up-to-the-minute Dark Web information based on data collected from the Dark Web.


* Generative Artificial Intelligence is a technology that generates new data based on given data or inputs. It is built on deep learning, and such systems are also referred to as generative models. Generative AI can create various types of data, including text, images, audio, and video.