---
license: apache-2.0
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- vector search
- semantic search
- retrieval augmented generation
pretty_name: hackernoon_tech_news_with_embeddings
size_categories:
- 100K<n<1M
---
## Overview
[HackerNoon](https://huggingface.co/datasets/HackerNoon/tech-company-news-data-dump/tree/main) curated 7M+ of the internet's most cited tech company news articles and blog posts, covering the 3k+ most valuable tech companies in 2022 and 2023.
To further enhance the dataset's utility, a vector embedding field has been added to every data point, generated with the OpenAI `text-embedding-3-small` model at an embedding dimension of 256.
**Note that this extension with vector embeddings contains only a portion of the original dataset (1,576,528 data points), focusing on enriching a selected subset with advanced analytical capabilities.**
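To run similarity queries against these vectors (or to embed new text consistently), queries must be embedded with the same model and dimension. Below is a minimal sketch, assuming an `OPENAI_API_KEY` environment variable and the official `openai` Python client; the helper name and example text are illustrative only:

```python
import os
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment
openai_client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

def get_embedding(text: str) -> list[float]:
    # Use the same model and dimension as the dataset's embedding field
    response = openai_client.embeddings.create(
        model="text-embedding-3-small",
        input=text,
        dimensions=256,
    )
    return response.data[0].embedding

query_embedding = get_embedding("semiconductor supply chain news")
print(len(query_embedding))  # 256
```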
## Dataset Structure
Each record in the dataset represents a news article about technology companies and includes the following fields:
- _id: A unique identifier for the news article.
- companyName: The name of the company the news article is about.
- companyUrl: A URL to the HackerNoon company profile page for the company.
- published_at: The date and time when the news article was published.
- url: A URL to the original news article.
- title: The title of the news article.
- main_image: A URL to the main image of the news article.
- description: A brief summary of the news article's content.
- embedding: An array of numerical values representing the vector embedding for the article, generated using the OpenAI `text-embedding-3-small` model.
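A quick way to confirm these fields is to stream a single record from the Hub. A minimal sketch using the `datasets` library (streaming avoids downloading the full dataset; the printed fields are illustrative):

```python
from datasets import load_dataset

# Stream one record to inspect the schema without downloading everything
dataset = load_dataset("AIatMongoDB/tech-news-embeddings", split="train", streaming=True)
record = next(iter(dataset))

print(record["companyName"], record["title"])
print(len(record["embedding"]))  # expected: 256
```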
## Data Ingestion (Partitioned)
[Create a free MongoDB Atlas Account](https://www.mongodb.com/cloud/atlas/register?utm_campaign=devrel&utm_source=community&utm_medium=organic_social&utm_content=Hugging%20Face%20Dataset&utm_term=richmond.alake)
```python
import os
import requests
import pandas as pd
from io import BytesIO
from pymongo import MongoClient
# MongoDB Atlas URI and client setup
uri = os.environ.get('MONGODB_ATLAS_URI')
client = MongoClient(uri)
# Change to the appropriate database and collection names for the tech news embeddings
db_name = 'your_database_name' # Change this to your actual database name
collection_name = 'tech_news_embeddings' # Change this to your actual collection name
tech_news_embeddings_collection = client[db_name][collection_name]
hf_token = os.environ.get('HF_TOKEN')
headers = {
    "Authorization": f"Bearer {hf_token}"
}
# Downloads 228012 data points
parquet_files = [
"https://huggingface.co/api/datasets/AIatMongoDB/tech-news-embeddings/parquet/default/train/0000.parquet",
"https://huggingface.co/api/datasets/AIatMongoDB/tech-news-embeddings/parquet/default/train/0001.parquet",
"https://huggingface.co/api/datasets/AIatMongoDB/tech-news-embeddings/parquet/default/train/0002.parquet",
"https://huggingface.co/api/datasets/AIatMongoDB/tech-news-embeddings/parquet/default/train/0003.parquet",
"https://huggingface.co/api/datasets/AIatMongoDB/tech-news-embeddings/parquet/default/train/0004.parquet",
"https://huggingface.co/api/datasets/AIatMongoDB/tech-news-embeddings/parquet/default/train/0005.parquet",
]
all_dataframes = []
for parquet_file_url in parquet_files:
    response = requests.get(parquet_file_url, headers=headers)
    if response.status_code == 200:
        parquet_bytes = BytesIO(response.content)
        df = pd.read_parquet(parquet_bytes)
        all_dataframes.append(df)
    else:
        print(f"Failed to download Parquet file from {parquet_file_url}: {response.status_code}")
if all_dataframes:
    combined_df = pd.concat(all_dataframes, ignore_index=True)
    # Ingest the combined records into the database
    dataset_records = combined_df.to_dict('records')
    tech_news_embeddings_collection.insert_many(dataset_records)
else:
    print("No dataframes to concatenate.")
```
## Data Ingestion (All Records)
[Create a free MongoDB Atlas Account](https://www.mongodb.com/cloud/atlas/register?utm_campaign=devrel&utm_source=community&utm_medium=organic_social&utm_content=Hugging%20Face%20Dataset&utm_term=richmond.alake)
```python
import os
from pymongo import MongoClient
from datasets import load_dataset
from bson import json_util
# MongoDB Atlas URI and client setup
uri = os.environ.get('MONGODB_ATLAS_URI')
client = MongoClient(uri)
# Change to the appropriate database and collection names for the tech news embeddings
db_name = 'your_database_name' # Change this to your actual database name
collection_name = 'tech_news_embeddings' # Change this to your actual collection name
tech_news_embeddings_collection = client[db_name][collection_name]
# Load the "tech-news-embeddings" dataset from Hugging Face
dataset = load_dataset("AIatMongoDB/tech-news-embeddings")
insert_data = []
# Iterate through the dataset and prepare the documents for insertion
# The script below ingests 1000 records into the database at a time
for item in dataset['train']:
    # Convert the dataset item to MongoDB document format
    doc_item = json_util.loads(json_util.dumps(item))
    insert_data.append(doc_item)
    # Insert in batches of 1000 documents
    if len(insert_data) == 1000:
        tech_news_embeddings_collection.insert_many(insert_data)
        print("1000 records ingested")
        insert_data = []
# Insert any remaining documents
if len(insert_data) > 0:
    tech_news_embeddings_collection.insert_many(insert_data)
print("Data Ingested")
```
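Before the `embedding` field can be queried, the collection needs an Atlas Vector Search index. The index can be created in the Atlas UI, or programmatically as sketched below; the index name `vector_index`, the cosine similarity metric, and the use of `pymongo`'s `SearchIndexModel` (requires a recent `pymongo` release) are assumptions, not part of the dataset:

```python
from pymongo.operations import SearchIndexModel

# Assumed index name; it must match the name referenced in $vectorSearch queries
search_index_model = SearchIndexModel(
    definition={
        "fields": [
            {
                "type": "vector",
                "path": "embedding",      # field holding the 256-dimensional vectors
                "numDimensions": 256,
                "similarity": "cosine",   # assumed similarity metric
            }
        ]
    },
    name="vector_index",
    type="vectorSearch",
)
tech_news_embeddings_collection.create_search_index(model=search_index_model)
```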
## Usage
The dataset is suited for a range of applications, including:
- Tracking and analyzing trends in the tech industry.
- Enhancing search and recommendation systems for tech news content with vector embeddings (see the query sketch after this list).
- Conducting sentiment analysis and other natural language processing tasks to gauge public perception and impact of news on specific tech companies.
- Educational purposes in data science, journalism, and technology studies courses.
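As an illustration of the vector search use case, here is a minimal `$vectorSearch` aggregation sketch against the ingested collection. It assumes the `vector_index` index and the `get_embedding` helper sketched earlier, plus an illustrative query string; none of these are part of the dataset itself:

```python
# Embed the query text with the same model/dimension as the dataset
query_embedding = get_embedding("electric vehicle battery startups")

pipeline = [
    {
        "$vectorSearch": {
            "index": "vector_index",     # assumed Atlas Vector Search index name
            "path": "embedding",
            "queryVector": query_embedding,
            "numCandidates": 100,        # candidates considered before ranking
            "limit": 5,                  # results returned
        }
    },
    {
        "$project": {
            "_id": 0,
            "title": 1,
            "companyName": 1,
            "url": 1,
            "score": {"$meta": "vectorSearchScore"},
        }
    },
]

for doc in tech_news_embeddings_collection.aggregate(pipeline):
    print(round(doc["score"], 4), doc["title"])
```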
## Notes
### Sample Document
```json
{
"_id": {
"$oid": "65c63ea1f187c085a866f680"
},
"companyName": "01Synergy",
"companyUrl": "https://hackernoon.com/company/01synergy",
"published_at": "2023-05-16 02:09:00",
"url": "https://www.businesswire.com/news/home/20230515005855/en/onsemi-and-Sineng-Electric-Spearhead-the-Development-of-Sustainable-Energy-Applications/",
"title": "onsemi and Sineng Electric Spearhead the Development of Sustainable Energy Applications",
"main_image": "https://firebasestorage.googleapis.com/v0/b/hackernoon-app.appspot.com/o/images%2Fimageedit_25_7084755369.gif?alt=media&token=ca7527b0-a214-46d4-af72-1062b3df1458",
"description": "(Nasdaq: ON) a leader in intelligent power and sensing technologies today announced that Sineng Electric will integrate onsemi EliteSiC silic",
"embedding": [
{
"$numberDouble": "0.05243798345327377"
},
{
"$numberDouble": "-0.10347484797239304"
},
{
"$numberDouble": "-0.018149614334106445"
}
]
}
``` |