Better Search Results Through Intelligent Chunking and Metadata Integration
Typically, the knowledge bases we build for LLM-based retrieval applications contain large amounts of data in a variety of formats. To give the LLM the most relevant context for answering questions about a specific part of the knowledge base, we rely on chunking the text in the knowledge base and storing the chunks where they can be retrieved efficiently.
Chunking
Chunking is the process of splitting text into meaningful units to improve information retrieval. By ensuring that each chunk represents a single focused idea or point, chunking helps preserve the contextual integrity of the content.
In this article, we will look at three aspects of chunking:
· How poor chunking leads to less relevant results
· How good chunking leads to better results
· How good chunking combined with metadata leads to results with proper context
To demonstrate the importance of chunking effectively, we will take the same passage of text, apply three different chunking approaches to it, and examine how information is retrieved for a given query.
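Before running the full experiment, the contrast is easy to see in a tiny, self-contained sketch (standard-library Python only; the sample string here is made up for illustration):

```python
import re

sample = "AI is transforming healthcare. It also raises ethical questions."

# Fixed-size chunks cut the text at arbitrary character positions
fixed_chunks = [sample[i:i + 20] for i in range(0, len(sample), 20)]

# Sentence-level chunks keep each complete thought intact
sentence_chunks = re.split(r'(?<=[.!?]) +', sample)

print(fixed_chunks)     # mid-word cuts such as 'AI is transforming h'
print(sentence_chunks)  # ['AI is transforming healthcare.', 'It also raises ethical questions.']
```

Each fixed-size piece strands half a word or half an idea, while each sentence-level piece embeds as a complete thought.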
分塊并存儲(chǔ)至 Qdrant
Let's look at the code below, which shows three different ways of chunking the same text.
Python
import qdrant_client
from qdrant_client.models import PointStruct, Distance, VectorParams
import openai
import re
import yaml

# Load configuration
with open('config.yaml', 'r') as file:
    config = yaml.safe_load(file)

# Initialize Qdrant client
client = qdrant_client.QdrantClient(config['qdrant']['url'], api_key=config['qdrant']['api_key'])

# Initialize OpenAI with the API key
openai.api_key = config['openai']['api_key']

def embed_text(text):
    print(f"Generating embedding for: '{text[:50]}'...")  # Show a snippet of the text being embedded
    response = openai.embeddings.create(
        input=[text],  # Input needs to be a list
        model=config['openai']['model_name']
    )
    embedding = response.data[0].embedding  # Access using the attribute, not as a dictionary
    print(f"Generated embedding of length {len(embedding)}.")  # Confirm embedding generation
    return embedding

# Function to create a collection if it doesn't exist
def create_collection_if_not_exists(collection_name, vector_size):
    collections = client.get_collections().collections
    if collection_name not in [collection.name for collection in collections]:
        client.create_collection(
            collection_name=collection_name,
            vectors_config=VectorParams(size=vector_size, distance=Distance.COSINE)
        )
        print(f"Created collection: {collection_name} with vector size: {vector_size}")
    else:
        print(f"Collection {collection_name} already exists.")

# Text to be chunked (may be flagged by AI/plagiarism detectors, but used purely for illustration)
text = """
Artificial intelligence is transforming industries across the globe. One of the key areas where AI is making a significant impact is healthcare. AI is being used to develop new drugs, personalize treatment plans, and even predict patient outcomes. Despite these advancements, there are challenges that must be addressed. The ethical implications of AI in healthcare, data privacy concerns, and the need for proper regulation are all critical issues. As AI continues to evolve, it is crucial that these challenges are not overlooked. By addressing these issues head-on, we can ensure that AI is used in a way that benefits everyone.
"""

# Poor chunking strategy: fixed 40-character chunks
def poor_chunking(text, chunk_size=40):
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    print(f"Poor Chunking produced {len(chunks)} chunks: {chunks}")
    return chunks

# Good chunking strategy: split on sentence boundaries
def good_chunking(text):
    sentences = re.split(r'(?<=[.!?]) +', text)
    print(f"Good Chunking produced {len(sentences)} chunks: {sentences}")
    return sentences

# Good chunking with metadata attached to each sentence-level chunk
def good_chunking_with_metadata(text):
    chunks = good_chunking(text)
    metadata_chunks = []
    for chunk in chunks:
        if "healthcare" in chunk:
            metadata_chunks.append({"text": chunk, "source": "Healthcare Section", "topic": "AI in Healthcare"})
        elif "ethical implications" in chunk or "data privacy" in chunk:
            metadata_chunks.append({"text": chunk, "source": "Challenges Section", "topic": "AI Challenges"})
        else:
            metadata_chunks.append({"text": chunk, "source": "General", "topic": "AI Overview"})
    print(f"Good Chunking with Metadata produced {len(metadata_chunks)} chunks: {metadata_chunks}")
    return metadata_chunks

# Store chunks in Qdrant
def store_chunks(chunks, collection_name):
    if len(chunks) == 0:
        print(f"No chunks were generated for the collection '{collection_name}'.")
        return
    # Generate embedding for the first chunk to determine vector size
    sample_text = chunks[0] if isinstance(chunks[0], str) else chunks[0]["text"]
    sample_embedding = embed_text(sample_text)
    vector_size = len(sample_embedding)
    create_collection_if_not_exists(collection_name, vector_size)
    for idx, chunk in enumerate(chunks):
        chunk_text = chunk if isinstance(chunk, str) else chunk["text"]
        embedding = embed_text(chunk_text)
        payload = chunk if isinstance(chunk, dict) else {"text": chunk_text}  # Always keep the text in the payload
        client.upsert(collection_name=collection_name, points=[
            PointStruct(id=idx, vector=embedding, payload=payload)
        ])
    print(f"Chunks successfully stored in the collection '{collection_name}'.")

# Execute chunking and storing separately for each strategy
print("Starting poor_chunking...")
store_chunks(poor_chunking(text), "poor_chunking")
print("Starting good_chunking...")
store_chunks(good_chunking(text), "good_chunking")
print("Starting good_chunking_with_metadata...")
store_chunks(good_chunking_with_metadata(text), "good_chunking_with_metadata")
The code above does the following:
· The embed_text function takes a piece of text, generates an embedding with the OpenAI embedding model, and returns it.
· Initializes the text string used for chunking and later retrieval.
· Poor chunking strategy: splits the text into fixed 40-character chunks.
· Good chunking strategy: splits the text on sentence boundaries for more meaningful context.
· Good chunking with metadata: attaches appropriate metadata to the sentence-level chunks.
· Once embeddings are generated for the chunks, they are stored in the corresponding collections in Qdrant Cloud.
Keep in mind that the poor chunks are created only to demonstrate how bad chunking affects retrieval.
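Between fixed-size and pure sentence splitting there is a common middle ground: grouping whole sentences until a size budget is reached. This helper is only a sketch of that idea and is not part of the experiment above; the budget value and demo string are made up:

```python
import re

def budget_chunking(text, max_chars=200):
    """Group whole sentences until adding another would exceed max_chars."""
    sentences = re.split(r'(?<=[.!?]) +', text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        # Start a new chunk when the budget would be exceeded
        if current and len(current) + 1 + len(sentence) > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

demo = ("AI is transforming healthcare. It can personalize treatment plans. "
        "It also raises ethical questions. Regulation is still evolving.")
for chunk in budget_chunking(demo, max_chars=70):
    print(chunk)
```

This keeps sentences intact while avoiding chunks that are either too tiny to embed meaningfully or too large to stay on a single topic.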
Below is a screenshot of the chunks from Qdrant Cloud, where you can see that metadata indicating the source and topic was added to the sentence-level chunks.
基于分塊策略的檢索結(jié)果
Now let's write some code to retrieve content from the Qdrant vector database for a given query.
Python
import qdrant_client
import openai
import yaml

# Load configuration
with open('config.yaml', 'r') as file:
    config = yaml.safe_load(file)

# Initialize Qdrant client
client = qdrant_client.QdrantClient(config['qdrant']['url'], api_key=config['qdrant']['api_key'])

# Initialize OpenAI with the API key
openai.api_key = config['openai']['api_key']

def embed_text(text):
    response = openai.embeddings.create(
        input=[text],  # Input needs to be a list
        model=config['openai']['model_name']
    )
    return response.data[0].embedding

# Search a collection and print the top matches for the query
def retrieve_and_print(collection_name, query, query_embedding, top_k=3):
    results = client.search(
        collection_name=collection_name,
        query_vector=query_embedding,
        limit=top_k
    )
    print(f"Results from '{collection_name}' collection for the query: '{query}':")
    for i, result in enumerate(results, start=1):
        print(f"Result {i}:")
        print(f"Text: {result.payload.get('text', 'N/A')}")
        print(f"Source: {result.payload.get('source', 'N/A')}")
        print(f"Topic: {result.payload.get('topic', 'N/A')}")

# Define the query and generate its embedding
query = "ethical implications of AI in healthcare"
query_embedding = embed_text(query)

# Run the same query against each collection
for collection_name in ["poor_chunking", "good_chunking", "good_chunking_with_metadata"]:
    retrieve_and_print(collection_name, query, query_embedding)
The code above does the following:
· Defines the query and generates an embedding for it
· The search query is set to "ethical implications of AI in healthcare".
· The retrieve_and_print function searches a specific Qdrant collection and retrieves the top 3 vectors closest to the query embedding.
Now let's look at the output:
python retrieval_test.py
Results from 'poor_chunking' collection for the query: 'ethical implications of AI in healthcare':
Result 1:
Text: . The ethical implications of AI in heal
Source: N/A
Topic: N/A
Result 2:
Text: ant impact is healthcare. AI is being us
Source: N/A
Topic: N/A
Result 3:
Text:
Artificial intelligence is transforming
Source: N/A
Topic: N/A
Results from 'good_chunking' collection for the query: 'ethical implications of AI in healthcare':
Result 1:
Text: The ethical implications of AI in healthcare, data privacy concerns, and the need for proper regulation are all critical issues.
Source: N/A
Topic: N/A
Result 2:
Text: One of the key areas where AI is making a significant impact is healthcare.
Source: N/A
Topic: N/A
Result 3:
Text: By addressing these issues head-on, we can ensure that AI is used in a way that benefits everyone.
Source: N/A
Topic: N/A
Results from 'good_chunking_with_metadata' collection for the query: 'ethical implications of AI in healthcare':
Result 1:
Text: The ethical implications of AI in healthcare, data privacy concerns, and the need for proper regulation are all critical issues.
Source: Healthcare Section
Topic: AI in Healthcare
Result 2:
Text: One of the key areas where AI is making a significant impact is healthcare.
Source: Healthcare Section
Topic: AI in Healthcare
Result 3:
Text: By addressing these issues head-on, we can ensure that AI is used in a way that benefits everyone.
Source: General
Topic: AI Overview
The output for the same search query differs depending on the chunking strategy used:
· Poor chunking strategy: Notice that the results here are less relevant, because the text was split into arbitrary 40-character chunks.
· Good chunking strategy: The results are more relevant, because the text was split into sentences, preserving the semantic meaning.
· Good chunking with metadata: The results here are the most useful, because the text was chunked thoughtfully and enriched with metadata.
Takeaways From the Experiment
· Chunking needs a deliberate strategy, and chunks should be neither too small nor too large.
· Examples of poor chunking are chunks so small that they cut sentences off at unnatural points, or chunks so large that multiple topics land in the same chunk, which makes retrieval very noisy.
· The whole idea of chunking revolves around providing better context to the LLM.
· Metadata greatly enhances properly structured chunks by providing an additional layer of context. For example, we added the source and topic as metadata elements to our chunks.
· Retrieval systems benefit from this additional information. For instance, if the metadata indicates that a chunk belongs to the "Healthcare Section", the system can prioritize those chunks for healthcare-related queries.
· With improved chunking, results can be structured and categorized. If a query matches multiple contexts within the same text, we can tell which context or section a piece of information belongs to by looking at the chunk's metadata.
Keep these strategies in mind, and happy chunking in your LLM-based search applications!