id (int64, 5-1.93M) | title (string, 0-128 chars) | description (string, 0-25.5k chars) | collection_id (int64, 0-28.1k) | published_timestamp (timestamp[s]) | canonical_url (string, 14-581 chars) | tag_list (string, 0-120 chars) | body_markdown (string, 0-716k chars) | user_username (string, 2-30 chars) |
---|---|---|---|---|---|---|---|---|
1,912,008 | 22 amazing 🤯 NPM packages you should Try | chalk Description: Style your terminal output with colors and styles. Example: const chalk =... | 0 | 2024-07-04T21:35:59 | https://dev.to/r4jv33r/22-amazing-npm-packages-you-should-know-20kl | webdev, javascript, beginners, npm |
1. **chalk**
- **Description**: Style your terminal output with colors and styles.
- **Example**:
```js
const chalk = require('chalk');
console.log(chalk.blue('Hello world!'));
```
2. **figlet**
- **Description**: Create ASCII art text in the terminal.
- **Example**:
```js
const figlet = require('figlet');
figlet('Hello World!', function(err, data) {
if (err) {
console.log('Something went wrong...');
console.dir(err);
return;
}
console.log(data);
});
```
3. **ora**
- **Description**: Elegant terminal spinner.
- **Example**:
```js
const ora = require('ora');
const spinner = ora('Loading unicorns').start();
setTimeout(() => {
spinner.color = 'yellow';
spinner.text = 'Loading rainbows';
}, 1000);
```
4. **inquirer**
- **Description**: Interactive command-line user interface.
- **Example**:
```js
const inquirer = require('inquirer');
inquirer.prompt([
{
type: 'input',
name: 'name',
message: "What's your name?"
}
]).then(answers => {
console.log(`Hello, ${answers.name}!`);
});
```
5. **randomcolor**
- **Description**: Generate attractive random colors.
- **Example**:
```js
const randomColor = require('randomcolor');
const color = randomColor();
console.log(color);
```
6. **faker**
- **Description**: Generate massive amounts of fake data.
- **Example**:
```js
const faker = require('faker');
console.log(faker.name.findName());
```
7. **axios**
- **Description**: Promise-based HTTP client for the browser and Node.js.
- **Example**:
```js
const axios = require('axios');
axios.get('https://jsonplaceholder.typicode.com/posts/1')
.then(response => console.log(response.data))
.catch(error => console.error(error));
```
8. **moment**
- **Description**: Parse, validate, manipulate, and display dates and times.
- **Example**:
```js
const moment = require('moment');
console.log(moment().format('MMMM Do YYYY, h:mm:ss a'));
```
9. **boxen**
- **Description**: Create boxes in the terminal.
- **Example**:
```js
const boxen = require('boxen');
console.log(boxen('Hello, Box!', { padding: 1 }));
```
10. **node-fetch**
- **Description**: A lightweight module that brings window.fetch to Node.js.
- **Example**:
```js
const fetch = require('node-fetch');
fetch('https://jsonplaceholder.typicode.com/posts/1')
.then(response => response.json())
.then(data => console.log(data))
.catch(error => console.error(error));
```
11. **lodash**
- **Description**: A modern JavaScript utility library delivering modularity, performance, & extras.
- **Example**:
```js
const _ = require('lodash');
const array = [1, 2, 3, 4, 5];
console.log(_.shuffle(array));
```
12. **node-notifier**
- **Description**: A Node.js module for sending notifications on native Mac, Windows (post and pre 8) and Linux (or Growl as fallback).
- **Example**:
```js
const notifier = require('node-notifier');
notifier.notify({
title: 'My Notification',
message: 'Hello, there!'
});
```
13. **dotenv**
- **Description**: Loads environment variables from a `.env` file into `process.env`.
- **Example**:
```js
require('dotenv').config();
console.log(process.env.DB_HOST);
```
14. **crypto-random-string**
- **Description**: Generate a cryptographically strong random string.
- **Example**:
```js
const cryptoRandomString = require('crypto-random-string');
console.log(cryptoRandomString({ length: 10 }));
```
15. **ascii-art**
- **Description**: Generate ASCII art from text.
- **Example**:
```js
const ascii = require('ascii-art');
ascii.font('Hello World!', 'Doom', function(rendered) {
console.log(rendered);
});
```
16. **node-emoji**
- **Description**: Simple emoji support for Node.js projects.
- **Example**:
```js
const emoji = require('node-emoji');
console.log(emoji.get('coffee'));
```
17. **is-online**
- **Description**: Check if the internet connection is available.
- **Example**:
```js
const isOnline = require('is-online');
isOnline().then(online => {
console.log(online ? 'Online' : 'Offline');
});
```
18. **number-to-words**
- **Description**: Convert numbers to words.
- **Example**:
```js
const numberToWords = require('number-to-words');
console.log(numberToWords.toWords(123));
```
19. **nodemailer**
- **Description**: It is a module for sending emails from Node.js applications.
- **Example**:
```js
const nodemailer = require('nodemailer');
const transporter = nodemailer.createTransport({
service: 'gmail',
auth: {
user: '[email protected]',
pass: 'your-password'
}
});
const mailOptions = {
from: '[email protected]',
to: '[email protected]',
subject: 'Sending Email using Node.js',
text: 'Hello from Node.js!'
};
transporter.sendMail(mailOptions, function(error, info) {
if (error) {
console.error(error);
} else {
console.log('Email sent: ' + info.response);
  }
});
```
20. **beeper**
- **Description**: makes your terminal beep.
- **Example**:
```js
const beeper = require('beeper');
beeper();
```
21. **funny-quotes**
- **Description**: fetches random funny quotes.
- **Example**:
```js
const funnyQuotes = require('funny-quotes');
console.log(funnyQuotes.getRandomQuote());
```
22. **random-puppy**
- **Description**: fetches random puppy pictures from Reddit.
- **Example**:
```js
const randomPuppy = require('random-puppy');
randomPuppy()
.then(url => console.log(url))
.catch(err => console.error(err));
```
| r4jv33r |
1,910,380 | Continued Learnings | After writing my original post Learnings in Raku and Pg Concurrency I learned a few more things to... | 0 | 2024-07-04T21:30:31 | https://dev.to/ssotka/continued-learnings-33mo | rakulang, postgres, programming, learning | After writing my original post [Learnings in Raku and Pg Concurrency](https://dev.to/ssotka/learnings-in-raku-and-pg-concurrency-35a0) I learned a few more things to make the script work better and become more _raku-ish_.
**Learning 1**. Try was not needed. In the previous post I showed this snippet of the code doing the main work.
```
my $q = @batch.join("\n");
try {
    my @res = $dbh.execute($q).values;
}
CATCH {
    default {
        say "Error: $_";
    }
}
```
In this snippet you see paired _try/catch_ blocks. In other languages I'm familiar with that support try and catch, the two almost always go together: if code in the _try_ block throws an exception, the exception will be caught in the _catch_ block. However, in Raku, when you have a _catch_ block in a given scope, any exceptions thrown in that scope can be caught and handled by that _catch_ block.
A _try_ can be used alone if you simply want to keep your code from dying because of an exception. In the documentation it is explained as "_a try block is a normal block which implicitly turns on the use fatal pragma and includes an implicit CATCH block that drops the exception, which means you can use it to contain them_".
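A tiny example of that standalone behavior (an illustration, not from the original script):

```
my $answer = try die "oops";   # the exception is swallowed, $answer is Nil
say $answer.defined ?? "got a value" !! "try returned Nil";
say "still running";           # the script did not die
```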
When I learned this I changed that block to simplify it.
```
my $q = @batch.join("\n");
$dbh.execute($q);
CATCH {
    default {
        say "Error: $_";
    }
}
```
**Learning 2**. DB Indexes. After refreshing the data in our non-prod environment I was working with fresh indexes on the tables I was updating. I mentioned in the previous article that I was able to update 4.5M rows within 30 minutes. This was from my testing on the non-refreshed DB and I had not run a fresh ANALYZE on the tables before the updates. Working with fresh indexes those 4.5M updates finished in 6 minutes.
So keep your indexes up to date. As we don't do many deletes I am assuming there were very few zombie rows (deleted but not expunged by the database) that had to be dealt with in the tests.
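For anyone wanting to reproduce this, a minimal sketch of the Postgres maintenance involved (`my_table` is a placeholder name):

```
-- Refresh planner statistics for the table being updated
ANALYZE my_table;

-- Reclaim dead ("zombie") rows and refresh statistics in one pass
VACUUM (ANALYZE) my_table;

-- Rebuild bloated indexes if needed
REINDEX TABLE my_table;
```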
**Learning 3**. Atomicints. The lesson I learned about using an atomicint when concurrent threads each need to update a value, in this case the number of rows that have been processed, still applies. However I found an alternative to this in my case.
I decided to set up a Supply to be used in the threads to emit the number of rows that thread had just processed. I also set up a tap for that Supply which would then add those values to the overall number of rows processed.
At first I worried that this would simply have the same problem and would also require an atomicint. However, the documentation for Supply indicated that "_A supply is a thread-safe, asynchronous data stream_". So I decided to give it a try. This probably means that the Supply is using locking behind the scenes; however, it's now hidden from me and doesn't slow down the update threads. I removed the _atomicint_ type from **$lines-done** and set up the _supplier_ and _tap_.
```
my $supplier = Supplier.new;
my $supply = $supplier.Supply;
$supply.tap: -> $batch {
    $lines-done += $batch;
}
```
Then in the main thread code I added an emit call.
```
$supplier.emit(@batch.elems);
```
Testing did not turn up any problems (which I realize is not proof that it won't at some time in the future).
Here is the cleaned up code:
```
#!/usr/bin/env raku
use v6;

use DB::Pg;
use Terminal::Print <T>;

multi sub MAIN ( :$host = 'host.docker.internal', :$port = 8888, :$script-path, :$batch-size, :$dbname, :$user, :$password ) {
    T.initialize-screen;
    my @columns = ^T.columns;
    my @rows = ^T.rows;

    my $conninfo = join " ",
        ('dbname=' ~ $dbname),
        ('host=' ~ $host),
        ('port=' ~ $port),
        ('user=' ~ $user),
        ('password=' ~ $password);

    # to get the total number of lines in the file shell out to wc -l
    my $lines-total = 0 + qqx{ wc -l $script-path }.words[0];
    my $lines-done = 0;

    T.print-string( 1, @rows.elems - 8, $*PROGRAM-NAME );
    T.print-string( 1, @rows.elems - 7, "Script-path: $script-path");
    T.print-string( 1, @rows.elems - 6, "Total lines: $lines-total");

    # every second print the progress
    my $start-time = now;

    sub format-elapsed-time($elapsed) {
        my $hours = $elapsed.Int div 3600;
        my $minutes = ($elapsed.Int mod 3600) div 60;
        my $seconds = $elapsed.Int mod 60;
        return $hours.fmt("%02d") ~ ':' ~ $minutes.fmt("%02d") ~ ':' ~ $seconds.fmt("%02d");
    }

    my $update-line = @rows.elems - 5;
    my $doneline = @rows.elems - 1;

    my $progress = start {
        loop {
            # show elapsed time
            my $elapsed = now - $start-time;
            my $local-lines-done = $lines-done;
            my $local-lines-total = $lines-total;
            my $pct = (($local-lines-done / $local-lines-total) * 100).fmt("%02.2f");
            T.print-string( 1, $update-line, "Progress: $local-lines-done/$local-lines-total {$pct}% - " ~ format-elapsed-time($elapsed) ~ " elapsed");
            sleep 1;
            last if $local-lines-done == $local-lines-total;
        }
        T.print-string( 1, $doneline, "All Queries Queued. Waiting on Database...");
    }

    my @batch;
    my $dbh = DB::Pg.new(:$conninfo);

    # check the connection
    my $res = $dbh.execute("SELECT 1");
    $dbh.finish;
    #say "Connection: $res";

    my $supplier = Supplier.new;
    my $supply = $supplier.Supply;
    $supply.tap: -> $batch {
        $lines-done += $batch;
    }

    $script-path.IO.lines.rotor($batch-size, :partial).race.map: -> @batch {
        my $q = @batch.join("\n");
        $dbh.execute($q);
        CATCH {
            default {
                say "Error: $_";
            }
        }
        $supplier.emit(@batch.elems);
    };

    T.shutdown-screen;
}
```
| ssotka |
1,912,006 | Cybrotronics Integration Protocols | Unlocking the Power of Cybrotronics Integration Protocols: A Revolutionary Approach to Harmonizing... | 0 | 2024-07-04T21:28:10 | https://dev.to/julian_272a30cb8de8a0/cybrotronics-integration-protocols-123k | Unlocking the Power of Cybrotronics Integration Protocols: A Revolutionary Approach to Harmonizing Human-Technology Interfaces
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q4952qzccyjm1lqj2ihi.jpeg)
## Introduction
**Cybrotronics Integration Protocols (CIP)** are a new approach to integrating human and technological systems.
CIP is needed in today's fast-paced digital world.
This article will explain CIP, its principles, benefits, and challenges, and provide examples of its application.
## What is Cybrotronics?
**Cybrotronics** combines cybernetics, robotics, and electronics.
- **Cybernetics:** The study of communication and control in living beings and machines, started by Norbert Wiener.
- **Robotics:** Focuses on designing and operating robots, automating complex tasks.
- **Electronics:** Controls electrical energy, the backbone of modern technology.
Cybrotronics merges these fields to improve human-technology interactions.
Enhance your focus and efficiency with our [supplement range](https://supplementsolutions.quora.com/) designed for high-performance tasks.
## Understanding Integration Protocols
**Integration protocols** allow different technological systems to communicate.
Traditional protocols include APIs and SDKs, but they have limitations.
- **APIs:** Enable software applications to interact, but lack flexibility and scalability.
- **SDKs:** Provide tools for specific platforms, but often lack interoperability across systems.
CIP aims to overcome these limitations, creating a new approach to integration.
## Cybrotronics Integration Protocols (CIP): A Deep Dive
CIP focuses on creating a seamless interface between humans and technology.
Here are the key components and benefits of CIP:
### Core Principles:
- **Adaptability:** CIP systems adjust to users' needs and preferences.
- **Interoperability:** CIP ensures compatibility across platforms and devices.
- **Real-Time Processing:** CIP processes information instantly for immediate feedback.
### Benefits of CIP:
- **Enhanced Human-Technology Harmony:** CIP promotes intuitive interaction, reducing cognitive load.
- **Increased Efficiency:** CIP streamlines processes, boosting productivity.
- **Improved User Experience:** CIP focuses on ease of use and accessibility.
## Examples and Case Studies
### 1. Healthcare:
- **Scenario:** A CIP-enabled healthcare system integrates medical devices, patient records, and diagnostic tools.
- **Outcome:** Doctors access real-time patient data, make informed decisions, and provide personalized care with fewer errors.
### 2. Smart Homes:
- **Scenario:** A CIP-powered smart home integrates lights, thermostats, and security systems.
- **Outcome:** Users control their home with natural language commands, and the system adapts to their habits.
### 3. Automotive Industry:
- **Scenario:** An autonomous vehicle ecosystem uses CIP to integrate navigation, communication, and entertainment systems.
- **Outcome:** Passengers enjoy a safer, more comfortable ride, with the vehicle anticipating and responding to their needs.
## Implementing CIP: Challenges and Opportunities
Implementing CIP comes with challenges:
### Challenges:
- **Complexity:** Developing CIP systems requires advanced technology and understanding of human-machine interactions.
- **Cost:** The initial investment in CIP technology is high.
- **Security:** Ensuring data security and privacy in CIP systems is crucial.
### Opportunities:
- **Innovation:** CIP opens new avenues for innovation across industries.
- **Competitive Advantage:** Early adopters of CIP gain a market edge.
- **Improved Quality of Life:** CIP enhances human-technology synergy, improving quality of life.
## Best Practices for Overcoming Challenges
- **Invest in Research and Development:** Continuous R&D is essential to refine CIP technologies and reduce costs.
- **Collaborate with Experts:** Partner with experts in cybernetics, robotics, and electronics for valuable insights.
- **Focus on Security:** Implement robust security protocols to protect data and build user trust.
Explore how maintaining optimal mental health can improve your ability to work with complex systems [here](https://supplementsolutions.quora.com/).
## Conclusion
**Cybrotronics Integration Protocols** represent a new way of interacting with technology.
By harmonizing human and machine systems, CIP enhances efficiency and user experience.
As we develop CIP, its applications and benefits will grow.
Embrace the future of human-technology interaction and contribute to the evolution of CIP. | julian_272a30cb8de8a0 |
|
1,912,005 | My Pen on CodePen | Check out this Pen I made! | 0 | 2024-07-04T21:25:24 | https://dev.to/akifozyahyaoglu/my-pen-on-codepen-162f | codepen | Check out this Pen I made!
{% codepen https://codepen.io/endipdeveloperakif/pen/mdZbMKd %} | akifozyahyaoglu |
1,912,004 | Git Gotcha: How to Commit Empty Folders with Ease | So you’re in the same situation as me, right? You have some empty folders that you don’t have content... | 0 | 2024-07-04T21:21:58 | https://dev.to/thewhitechild/git-gotcha-how-to-commit-empty-folders-with-ease-13hg | ---
title: Git Gotcha: How to Commit Empty Folders with Ease
published: true
description:
tags:
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-07-04 21:18 +0000
---
So you’re in the same situation as me, right? You have some empty folders that you don’t have content for yet, but you want to commit them and make them available on your remote repository.
Well, you can easily achieve this by creating a .gitkeep file in each of the folders.
When you commit, the `.gitkeep` files are included in the changes, and when you push, the previously empty folders show up in your remote repository.
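In practice it looks something like this (the folder path is just an example):

```
# Create the placeholder file inside each otherwise-empty folder
touch assets/images/.gitkeep

# Stage, commit, and push it like any other file
git add assets/images/.gitkeep
git commit -m "Keep empty assets/images folder"
git push
```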
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5j4ottkjpzsl0ywjtvva.png)
| thewhitechild |
|
1,909,373 | Getting Started with Node JS in MacOS | Hello Dev.to Readers 👋, I will make this post come to the point directly, I will first list all the... | 0 | 2024-07-04T21:21:16 | https://dev.to/lalit_kumar_f50f9cb0482b8/getting-started-with-node-js-in-macos-2cc8 | webdev, javascript, node, beginners | Hello Dev.to Readers 👋,
I will make this post come straight to the point. First, I will list all the things you should do to get started with Node.js on a machine running macOS.
1. [Install the Homebrew (brew)](#brew)
2. [Install NVM using brew](#nvm)
3. [Install NODE using NVM](#node)
4. [Install Visual Studio Code using brew](#vscode)
---
## Let's start with installing Homebrew <a name="brew"></a>
You can read more about homebrew [here](https://brew.sh/)
### Simply follow the steps below to have it installed
Type the below command in your terminal.
`curl -fsSL -o install.sh https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh`

This only downloads the install script, so run it next:

`/bin/bash install.sh`
Then carefully follow the instructions in your terminal.<sub>
The instructions only ask you to create your profile files if they don't already exist; if they do exist, just add the code the terminal tells you to add to those profile files.
</sub>
#### Confirm the installation by using the following command
`brew doctor`
If the command runs without any error, it means your brew is installed properly.
---
## Let's Install NVM <a name="nvm"></a>
Once you have "brew" installed, installing any other software is just a piece of cake 🍰
To read more about nvm (Node Version Manager) [click here](https://github.com/nvm-sh/nvm)
Type the following command to install NVM
`brew install nvm`
Once done, type the below command to confirm if you have installed it successfully
`nvm -v`
Get ready to install the Node js now.
---
## Let's Install Node <a name="node"></a>
As you already have "nvm" installed
Simply type the below command to install the latest LTS (stable) version of Node.js [know more](https://nodejs.org/en)

`nvm install --lts`
To confirm the installation, use the following command
`node -v`
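Optionally, if a project should always use a specific Node version, you can pin it with an `.nvmrc` file (the version below is just an example):

`echo "20" > .nvmrc`

Then, inside the project folder, `nvm use` will switch to the pinned version automatically.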
Now, that everything is ready, you need a good IDE to write your code. See instruction below.
---
## Let's Install Visual Studio Code <a name="vscode"></a>
As you have super powers of "brew" at your disposal, you can get visual studio code installed in matter of a single command.
`brew install --cask visual-studio-code`
---
If you have reached till here in this setup guide, then Congratulation 🪁 for this achievement.
Now you can start your journey of a Nodejs Development.
If you need a friend along in this Journey, you can consider connecting with me on [Linkedin](https://www.linkedin.com/in/developwithlalit/)
This is my first ever blog post, so any kind of feedback will be appreciated.
| lalit_kumar_f50f9cb0482b8 |
1,912,001 | Expand Tanstack Table Row to display non-uniform data | You can expand row in Tanstack Table using getCanExpand api as long as the row and sub rows data are... | 0 | 2024-07-04T21:13:26 | https://dev.to/yangerrai/expand-tanstack-table-row-to-display-non-uniform-data-39og | tanstack, expandedrow, table, react | You can **expand row** in **Tanstack Table** using [getCanExpand](https://tanstack.com/table/v8/docs/api/features/expanding) api as long as the row and sub rows data are uniform in structure.
**_"But what if you want to show non-uniform data in the sub row?"_**
That is exactly what we will achieve today. :smiley: :cyclone:
<img width="100%" style="width:100%" src="https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExNGFweWpzdzl2ZHRlNHh2c29oZGdkcnQwMmU1bm52amx0OHBsYnV1MSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/5m47o8kmAxJYeZ9SXI/giphy.gif">
Still, if your sub-row data is uniform, you can follow the TanStack example [here](https://tanstack.com/table/latest/docs/framework/react/examples/expanding) to achieve it.
## Let's begin! :rocket:
The complete code implementation can be found [here](https://github.com/Yanger-Rai/tanstack-table-tutorial), you can follow along.
We will make use of `getIsExpanded` api which is suitable for our use case.
For this tutorial, the table row object looks like this; _amount_, _status_ and _email_ are the table columns, the rest will be displayed in the expanded row:
```js
{
id: "728ed52f",
amount: 100,
status: "pending",
email: "[email protected]",
date: "2023-07-01",
method: "credit_card",
description: "Payment for subscription",
transactionId: "TXN728ED52F",
},
```
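For reference, here is a sketch of the `Payment` type those fields imply (the exact type in the repo may differ slightly):

```typescript
type Payment = {
  id: string;
  amount: number;
  status: string;
  email: string;
  date: string;
  method: string;
  description: string;
  transactionId: string;
};
```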
## Step 1:
Begin by defining the columns: add a click handler and use `getIsExpanded` to get the expanded state of the row.
```typescript
export const columns: ColumnDef<Payment>[] = [
  // this handles the row expand toggle
{
id: "select",
cell: ({ row }) => (
<button onClick={() => row.toggleExpanded()}>
{row.getIsExpanded() ? <CircleChevronDown /> : <CircleChevronUp />}
</button>
),
},
//rest of the table columns
{
accessorKey: "status",
header: "Status",
},
{
accessorKey: "email",
header: "Email",
},
{
accessorKey: "amount",
header: "Amount",
},
];
```
## Step 2:
Define your data table component. We will use the `getIsExpanded` API to get the expanded state of the row and render the rest of the data inside a table cell.
```typescript
export function DataTable<TData, TValue>({
columns,
data,
}: DataTableProps<TData, TValue>) {
//define the expanded state
const [expanded, setExpanded] = useState<ExpandedState>({});
const table = useReactTable({
//.. rest of the code
onExpandedChange: setExpanded,
getExpandedRowModel: getExpandedRowModel(),
state: {
expanded,
},
});
return (
<div className="rounded-md border">
<Table>
<TableHeader>
//.. rest of the code
</TableHeader>
<TableBody>
{table.getRowModel().rows?.length ? (
table.getRowModel().rows.map((row) => (
<>
{/* this render the main columns */}
<TableRow
key={row.id}
data-state={row.getIsSelected() && "selected"}
>
//.. rest of the code
</TableRow>
{/* this render the expanded row */}
{row.getIsExpanded() && (
<TableRow>
<TableCell colSpan={columns.length} className="h-24">
{/* You can create a seperate component to display the expanded data */}
<div className="grid px-10">
<h2 className=" font-bold">More Details</h2>
<p>
Transaction Id:{" "}
<span className=" text-slate-500">
{row.original.transactionId}
</span>
</p>
<p>
Transaction Date:{" "}
<span className=" text-slate-500">
{row.original.date}
</span>
</p>
<p>
Transaction Method:{" "}
<span className=" text-slate-500">
{row.original.method}
</span>
</p>
<p>
Description:{" "}
<span className=" text-slate-500">
{row.original.description}
</span>
</p>
</div>
</TableCell>
</TableRow>
)}
</>
))
) : (
<TableRow>
<TableCell colSpan={columns.length} className="h-24 text-center">
No results.
</TableCell>
</TableRow>
)}
</TableBody>
</Table>
</div>
);
}
```
And you should be able to take it from here. In case you missed it, you will find the complete code [here](https://github.com/Yanger-Rai/tanstack-table-tutorial). Do give a star :blush:
Thanks for dropping by 🌟
Add a ❤️ if you liked it.
| yangerrai |
1,912,000 | Cross-Compilation in Go for AWS Lambda | Go’s cross-compilation capabilities are one of its strengths, allowing developers to easily build... | 0 | 2024-07-04T20:58:47 | https://dev.to/gatij/cross-compilation-in-go-for-aws-lambda-1io9 | go, webdev, aws, lambda | Go’s `cross-compilation` capabilities are one of its strengths, allowing developers to easily build binaries for different target environments from their development machines. This is particularly useful when `deploying applications to cloud environments like AWS Lambda`, which may run on different OS and architecture combinations compared to your local development environment.
Example:
If you are developing on a Windows machine with an x86 architecture, you can still compile the Go binary for AWS Lambda as follows:
```
GOOS=linux GOARCH=amd64 go build -o main main.go
```
The above command produces a binary named `main` that is compatible with the Linux OS and AMD64 architecture, suitable for deployment to AWS Lambda.
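Note that the inline `VAR=value go build` prefix is Bash syntax; on a Windows shell you would set the variables first (a PowerShell sketch):

```
$env:GOOS = "linux"
$env:GOARCH = "amd64"
go build -o main main.go
```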
Setting `GOOS=linux` and `GOARCH=amd64` ensures that the Go binary is built for the Linux operating system and the AMD64 architecture, which are the environments that AWS Lambda functions run on. This build process is independent of the developer's machine OS and architecture. The Go compiler can cross-compile binaries for different operating systems and architectures, allowing you to build a binary that will run in the AWS Lambda environment even if your development machine is running a different OS or architecture.
Here’s a more detailed explanation:
- **GOOS:** This environment variable sets the target operating system for the Go binary. Setting GOOS=linux ensures that the binary will be compatible with the Linux OS, which is what AWS Lambda uses.
- **GOARCH:** This environment variable sets the target architecture for the Go binary. Setting GOARCH=amd64 ensures that the binary will be compatible with the AMD64 architecture, which is used by AWS Lambda.
Even if you are developing on a different operating system (e.g., Windows or macOS) or architecture (e.g., ARM), setting these environment variables will instruct the Go compiler to produce a binary for the specified target environment.
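For completeness, here is a minimal sketch of the kind of handler such a binary might contain, using the `github.com/aws/aws-lambda-go` runtime library (the event shape is illustrative):

```
package main

import (
	"context"

	"github.com/aws/aws-lambda-go/lambda"
)

// handler is the Lambda entry point: it receives the invocation
// payload and returns a response.
func handler(ctx context.Context, event map[string]interface{}) (string, error) {
	return "Hello from a cross-compiled Go binary!", nil
}

func main() {
	// lambda.Start registers the handler with the Lambda runtime.
	lambda.Start(handler)
}
```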
| gatij |
1,905,414 | Joining an existing team can be quite challenging | Strategy for EM's to ramp up quickly | 0 | 2024-07-04T20:57:47 | https://dev.to/akotek/joining-an-existing-team-can-be-quite-challenging-47cm | software, leadership, management | ---
title: Joining an existing team can be quite challenging
published: true
description: Strategy for EM's to ramp up quickly
tags: software, leadership, management
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-29 08:37 +0000
---
Joining an **existing team** can be quite challenging.
**Work is already in progress**, you lack significant (sometimes basic) knowledge, have no pre-existing relationships and are expected to make an impact in a relatively short period of time.
**A useful strategy for ramping up quickly** as an Engineering Manager is running Boz Algorithm (Thanks Gilad!).
**On your second day at work**, meet with a team member and:
1. Ask him to tell you everything he thinks you should know. Listen and take notes.
2. Ask about the biggest challenges the team has right now.
3. Ask who else you should talk to: write down all the names.
Recursively run the algorithm on anyone listed at (3).
**From (1)**, you can quickly identify issues that bother team right now and **require action**. It will also get you familiar with current terminology team uses ("That XY Pipeline...."). This can be tricky, as not everyone will answer right away, so try rephrasing your question ("Is there anything technical you think I should know?", "Product/People/Process?")
**From (2)**, you can identify **low-effort** items that are important to the team. This will allow you to make a quick impact.("Kanban board is a mess", "So many meetings....", "Daily is so long...", etc).
**From (3)**, you can quickly create a map of influence within the organization. The frequency and context in which names appear will let you know on who to focus more at the very beginning.
![Andrew Boz](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7zzejbh18jbgu5un39ed.png)
| akotek |
1,911,999 | Baking Up the Moxfield of Cookie Run: Braverse | Hey gamers! As a deck-building enthusiast, I got hooked on Cookie Run: Braverse after my first match,... | 0 | 2024-07-04T20:56:56 | https://dev.to/nicktsan/baking-up-the-moxfield-of-cookie-run-braverse-e59 | typescript, nextjs, expressots, cookierun | Hey gamers! As a deck-building enthusiast, I got hooked on Cookie Run: Braverse after my first match, But there are no English deck-building sites! So, I plan on creating one for the community. Check out my blog for details!
🔗 https://nicktsan.com/blog/initial-buildspace-blog | nicktsan |
1,911,998 | Ford Computer Diagnostic Software: Revolutionizing Automotive Repairs | In today's technologically driven world, automotive repairs have evolved far beyond wrenches and... | 0 | 2024-07-04T20:55:00 | https://dev.to/mechnicianco/ford-computer-diagnostic-software-revolutionizing-automotive-repairs-37hj | In today's technologically driven world, automotive repairs have evolved far beyond wrenches and screwdrivers. At Mechnician, we recognize the critical role that advanced diagnostic tools play in maintaining and repairing modern vehicles. One such tool that has transformed the landscape is the Ford Computer Diagnostic Software.This powerful software is a game-changer for mechanics and car enthusiasts alike, providing unparalleled insights into vehicle performance and health.
What is [Ford Computer Diagnostic Software](https://mechnician.com/collections/mack-truck-diagnostic-tools/ford-custom-chassis)?
Ford Computer Diagnostic Software is an advanced system designed to interface with a vehicle's onboard computer. This software allows mechanics to access a wealth of information about the car's systems and performance. It can read trouble codes, monitor live data, and perform a variety of tests to diagnose issues with pinpoint accuracy.
The Benefits of Using Ford Computer Diagnostic Software
1. Accurate Diagnostics
One of the primary advantages of using Ford Computer Diagnostic Software is the precision it offers. Traditional diagnostic methods often involve a lot of guesswork, but this software eliminates much of the uncertainty. By connecting directly to the vehicle's computer, it can quickly identify specific issues, saving time and reducing the likelihood of misdiagnosis.
2. Comprehensive Vehicle Data
The software provides access to a comprehensive array of data. From engine performance metrics to transmission statistics, mechanics can monitor virtually every aspect of the vehicle's operation. This level of detail is invaluable for both routine maintenance and troubleshooting complex problems.
3. Cost Efficiency
Accurate diagnostics mean fewer unnecessary repairs and replacements. By identifying the exact issue, mechanics can focus their efforts where it matters most, reducing labor costs and minimizing the expense of trial-and-error repairs. This efficiency translates into cost savings for vehicle owners.
4. Enhanced Maintenance Capabilities
Regular maintenance is crucial for the longevity of any vehicle. Ford Computer Diagnostic Software enables mechanics to keep track of maintenance schedules and monitor the health of critical components. This proactive approach helps prevent major issues before they arise, ensuring that vehicles remain in top condition for longer.
How Does It Work?
Using Ford Computer Diagnostic Software is straightforward. Here’s a step-by-step overview of how it works:
1. **Connection:** The software is connected to the vehicle via the OBD-II (On-Board Diagnostics) port, a standardized interface found in most modern cars.
2. **Data Retrieval:** Once connected, the software communicates with the vehicle's onboard computer to retrieve data and trouble codes.
3. **Analysis:** The software analyzes the retrieved data, identifying any anomalies or issues. It provides detailed information about potential problems and their severity.
4. **Reporting:** Mechanics receive a comprehensive report, detailing the diagnostic findings. This report guides them in making informed decisions about necessary repairs or maintenance.
Real-World Applications
At Mechnician, we’ve seen firsthand the transformative impact of Ford Computer Diagnostic Software. Here are a few real-world applications:
- **Engine Diagnostics:** Quickly identify issues such as misfires, sensor failures, and fuel system problems.
- **Transmission Analysis:** Diagnose transmission issues that might otherwise require extensive manual inspection.
- **Emissions Testing:** Ensure vehicles comply with emission standards by identifying and addressing exhaust system problems.
- **Electrical System Checks:** Troubleshoot complex electrical issues, from battery performance to alternator health.
**Why Choose Mechnician?**
At Mechnician, we are committed to staying at the forefront of automotive technology. Our team of skilled technicians is trained in the latest diagnostic tools and techniques, including Ford Computer Diagnostic Software. We understand that accurate diagnostics are the cornerstone of effective vehicle repair and maintenance, and we strive to provide our customers with the best service possible.
By choosing Mechnician, you’re not just getting a repair service; you’re gaining a partner dedicated to the health and longevity of your vehicle. Our investment in advanced diagnostic tools ensures that we can offer you the highest level of precision and efficiency in all our services.
**Conclusion**
[Ford Computer Diagnostic Software](https://mechnician.com/collections/mack-truck-diagnostic-tools/ford-custom-chassis) represents a significant advancement in automotive diagnostics. Its ability to provide accurate, comprehensive, and cost-effective solutions makes it an indispensable tool for modern mechanics. At Mechnician, we leverage this technology to offer top-tier service, ensuring that your vehicle receives the care it deserves. Embrace the future of automotive repairs with us and experience the difference that cutting-edge diagnostics can make.
For more information on our services and how we can assist you, visit Mechnician.com. Let's drive into the future together, ensuring your vehicle runs smoothly for years to come. | mechnicianco |
|
1,911,997 | how to list our projects in our resume through github link or just screen short please answer | A post by Mnk666 | 0 | 2024-07-04T20:52:23 | https://dev.to/khan444/how-to-list-our-projects-in-our-resume-through-github-link-or-just-screen-short-please-answer-210e | khan444 |
||
1,911,996 | shadcn-ui/ui codebase analysis: How does shadcn-ui CLI work? — Part 2.6 | I wanted to find out how shadcn-ui CLI works. In this article, I discuss the code used to build the... | 0 | 2024-07-04T20:51:04 | https://dev.to/ramunarasinga/shadcn-uiui-codebase-analysis-how-does-shadcn-ui-cli-work-part-26-1i07 | javascript, opensource, nextjs, react | I wanted to find out how shadcn-ui CLI works. In this article, I discuss the code used to build the shadcn-ui/ui CLI.
In part 2.5, we looked at the getTailwindFile function, which returns the main CSS file that contains the Tailwind base imports.
Let’s move on to the next line of code.
![](https://media.licdn.com/dms/image/D4E12AQEocw_CnBBNZg/article-inline_image-shrink_1500_2232/0/1720125894794?e=1725494400&v=beta&t=ZOFQ52jDCYbQpGnQcKNKjrigT827I18Lp-Q_W1R7mFA)
After getting the tailwindCSS file, the next step is to get the tsConfig alias prefix.
const tsConfigAliasPrefix = await getTsConfigAliasPrefix(cwd)
[getTsConfigAliasPrefix](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L157) is imported from [ui/packages/cli/src/utils/get-project-info.ts](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L157) and this function returns the alias used in your tsConfig for “./\*” or “./src/\*”. Let’s find out how.
```js
export async function getTsConfigAliasPrefix(cwd: string) {
  const tsConfig = await loadConfig(cwd)

  if (tsConfig?.resultType === "failed" || !tsConfig?.paths) {
    return null
  }

  // This assume that the first alias is the prefix.
  for (const [alias, paths] of Object.entries(tsConfig.paths)) {
    if (paths.includes("./*") || paths.includes("./src/*")) {
      return alias.at(0)
    }
  }

  return null
}
```
loadConfig
----------
This function loads the tsconfig.json or jsconfig.json. It will start searching from the specified cwd directory. Passing the tsconfig.json or jsconfig.json file directly instead of a directory also works.
Read more about [loadConfig from the docs](https://www.npmjs.com/package/tsconfig-paths#loadconfig).
Returning alias:
----------------
```js
// This assume that the first alias is the prefix.
for (const [alias, paths] of Object.entries(tsConfig.paths)) {
  if (paths.includes("./*") || paths.includes("./src/*")) {
    return alias.at(0)
  }
}
```
[.at](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/at) here is actually [String.prototype.at()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/at), since `alias` is a string key such as `"@/*"`; it behaves just like the array version, so using `at(0)` is equivalent to using `alias[0]` and returns the first character.
From the docs, the at() method is equivalent to the bracket notation when index is non-negative. For example, array\[0\] and array.at(0) both return the first item. However, when counting elements from the end of the array, you cannot use array\[-1\] like you may in Python or R, because all values inside the square brackets are treated literally as string properties, so you will end up reading array\["-1"\], which is just a normal string property instead of an array index.
The usual practice is to access length and calculate the index from that — for example, array\[array.length - 1\]. The at() method allows relative indexing, so this can be shortened to array.at(-1).
Conclusion:
-----------
Judging by this function name “getTsConfigAliasAsPrefix” in shadcn-ui CLI related source code, it is obvious that this function returns the alias prefix from your project based on what you provided in your project’s tsconfig.json.
It uses loadConfig provided by tsconfig-paths ([https://www.npmjs.com/package/tsconfig-paths#loadconfig](https://www.npmjs.com/package/tsconfig-paths#loadconfig)) package that returns an object that looks like below:
```js
{
  resultType: "success";
  absoluteBaseUrl: string;
  paths: { [key: string]: Array<string> };
}
```
These paths are from your tsconfig.json.
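For instance, with a `tsconfig.json` along these lines (the `"@"` alias is the common shadcn convention, but any alias works):

```js
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@/*": ["./src/*"]
    }
  }
}
```

Here the `"@/*"` entry contains `"./src/*"`, so the loop matches and `alias.at(0)` returns `"@"`, the first character of the alias key.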
> _Want to learn how to build shadcn-ui/ui from scratch? Check out_ [_build-from-scratch_](https://tthroo.com/)
About me:
---------
Website: [https://ramunarasinga.com/](https://ramunarasinga.com/)
Linkedin: [https://www.linkedin.com/in/ramu-narasinga-189361128/](https://www.linkedin.com/in/ramu-narasinga-189361128/)
Github: [https://github.com/Ramu-Narasinga](https://github.com/Ramu-Narasinga)
Email: [email protected]
[Build shadcn-ui/ui from scratch](https://tthroo.com/)
References:
-----------
1. [https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L82C37-L82C59](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L82C37-L82C59)
2. [https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L157](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L157)
3. [https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L11](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L11) | ramunarasinga |
1,911,995 | Unlocking the Power of Cybrotronics Integration Protocols | Unlocking the Power of Cybrotronics Integration Protocols ... | 0 | 2024-07-04T20:49:35 | https://dev.to/michaeljohn_a199820188b79310/unlocking-the-power-of-cybrotronics-integration-protocols-2be2 | ## Unlocking the Power of Cybrotronics Integration Protocols
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/soxe0jrkwcbcjwt7sbr5.jpeg)
## Introduction
**Cybrotronics Integration Protocols (CIP)** are a new approach to integrating human and technological systems.
CIP is needed in today's fast-paced digital world.
To support your cognitive functions while working on advanced technological projects, check out our [brain-boosting supplements](https://supplementsolutions.quora.com/).
This article will explain CIP, its principles, benefits, and challenges, and provide examples of its application.
## What is Cybrotronics?
**Cybrotronics** combines cybernetics, robotics, and electronics.
- **Cybernetics:** The study of communication and control in living beings and machines, started by Norbert Wiener.
- **Robotics:** Focuses on designing and operating robots, automating complex tasks.
- **Electronics:** Controls electrical energy, the backbone of modern technology.
Cybrotronics merges these fields to improve human-technology interactions.
Enhance your focus and efficiency with our [supplement range](https://supplementsolutions.quora.com/) designed for high-performance tasks.
## Understanding Integration Protocols
**Integration protocols** allow different technological systems to communicate.
Traditional protocols include APIs and SDKs, but they have limitations.
- **APIs:** Enable software applications to interact, but lack flexibility and scalability.
- **SDKs:** Provide tools for specific platforms, but often lack interoperability across systems.
Explore how maintaining optimal mental health can improve your ability to work with complex systems [here](https://supplementsolutions.quora.com/).
CIP aims to overcome these limitations, creating a new approach to integration.
## Cybrotronics Integration Protocols (CIP): A Deep Dive
CIP focuses on creating a seamless interface between humans and technology.
Here are the key components and benefits of CIP:
### Core Principles:
- **Adaptability:** CIP systems adjust to users' needs and preferences.
- **Interoperability:** CIP ensures compatibility across platforms and devices.
- **Real-Time Processing:** CIP processes information instantly for immediate feedback.
### Benefits of CIP:
- **Enhanced Human-Technology Harmony:** CIP promotes intuitive interaction, reducing cognitive load.
- **Increased Efficiency:** CIP streamlines processes, boosting productivity.
- **Improved User Experience:** CIP focuses on ease of use and accessibility.
## Examples and Case Studies
### 1. Healthcare:
- **Scenario:** A CIP-enabled healthcare system integrates medical devices, patient records, and diagnostic tools.
- **Outcome:** Doctors access real-time patient data, make informed decisions, and provide personalized care with fewer errors.
### 2. Smart Homes:
- **Scenario:** A CIP-powered smart home integrates lights, thermostats, and security systems.
- **Outcome:** Users control their home with natural language commands, and the system adapts to their habits.
### 3. Automotive Industry:
- **Scenario:** An autonomous vehicle ecosystem uses CIP to integrate navigation, communication, and entertainment systems.
- **Outcome:** Passengers enjoy a safer, more comfortable ride, with the vehicle anticipating and responding to their needs.
## Implementing CIP: Challenges and Opportunities
Implementing CIP comes with challenges:
### Challenges:
- **Complexity:** Developing CIP systems requires advanced technology and understanding of human-machine interactions.
- **Cost:** The initial investment in CIP technology is high.
- **Security:** Ensuring data security and privacy in CIP systems is crucial.
Find out how our [wellness products](https://supplementsolutions.quora.com/) can help you stay sharp and secure when dealing with advanced technologies.
### Opportunities:
- **Innovation:** CIP opens new avenues for innovation across industries.
- **Competitive Advantage:** Early adopters of CIP gain a market edge.
- **Improved Quality of Life:** CIP enhances human-technology synergy, improving quality of life.
## Best Practices for Overcoming Challenges
- **Invest in Research and Development:** Continuous R&D is essential to refine CIP technologies and reduce costs.
- **Collaborate with Experts:** Partner with experts in cybernetics, robotics, and electronics for valuable insights.
- **Focus on Security:** Implement robust security protocols to protect data and build user trust.
## Conclusion
**Cybrotronics Integration Protocols** represent a new way of interacting with technology.
By harmonizing human and machine systems, CIP enhances efficiency and user experience.
As we develop CIP, its applications and benefits will grow.
Embrace the future of human-technology interaction and contribute to the evolution of CIP. | michaeljohn_a199820188b79310 |
|
1,909,931 | Submission for the [Wix Studio Challenge ] | This is a submission for the Wix Studio Challenge What I Built I built an innovative... | 0 | 2024-07-04T20:43:38 | https://dev.to/shyam-raghuwanshi/submission-for-the-wix-studio-challenge--6b4 | devchallenge, wixstudiochallenge, webdev, javascript | *This is a submission for the* _[Wix Studio Challenge](https://dev.to/challenges/wix)_
## What I Built
I built an innovative eCommerce website called NovaMart. NovaMart offers a dynamic and user-friendly shopping experience, designed to cater to modern shoppers looking for convenience, personalization, and a seamless online experience.
## Demo
**Live-Link** - _[NovaMart](https://sr9993663832.wixstudio.io/novamart)_
**Github-Link** - _[NovaMart Github](https://github.com/Shyam-Raghuwanshi/NovaMart)_
## Development Journey
**Leveraging Wix Studio’s JavaScript Development Capabilities**
Creating NovaMart was an exciting journey that allowed me to fully utilize Wix Studio's robust JavaScript development platform. I began by using the powerful visual builder to design a clean and appealing interface that ensures an intuitive user experience
Local Development with @wix/cli
One of the most valuable tools during development was the npm package @wix/cli. This package allowed for efficient local development, enabling me to develop and test features locally before deploying them. The ability to work in a local environment significantly streamlined the development process, allowing for rapid iteration and testing.
Seamless Integration with GitHub
Another crucial aspect of the development process was the integration with GitHub. Wix Studio provides functionality to connect your project with a GitHub repository. This integration was incredibly helpful as it allowed for continuous integration and deployment. Whenever changes were pushed to the main branch, the site would automatically update with the latest changes, ensuring that the development and production environments were always in sync.
**Demo Video** [Loom Demo Video](https://www.loom.com/share/99953f5302a74e989ca3435cfac81d2e?sid=f5381f03-5316-4ce9-b1b8-76857817bd09)
**ScreenShorts**
**User Account page**
![User Account page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/naba7l91o3ri1tcv99ti.png)
**Login Page**
![Login Page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/so2c9eoqd75fli7mud98.png)
**Home page**
![Home page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mpopod8yzeobrjnzyzhg.png)
**Products Page**
![Products Page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4qe17opyxr4ljtv223lw.png)
**Product Page**
![Product Page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yl7shox1vzxvfc69i18n.png)
**Donation Page**
![Donation Page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x2pu9iqu3z661bxiy44w.png)
**Refer & Earn Page**
![Refer & Earn Page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b5ylh8j8gtp1t8wye88z.png)
**To enhance the functionality of NovaMart, I integrated several of Wix’s APIs and libraries:**
- **Wix Stores API:** This was crucial for managing product listings, inventory, and categories.
- **Wix Users API:** Enabled user authentication and personalized experiences based on user data.
- **Wix Pay API:** Facilitated secure and smooth payment processing (unfortunately, I was not able to apply this because it requires a paid plan).
- **Wix Animations:** Added dynamic content and animations to create an engaging shopping experience.
- **Wix SEO API:** Ensured that NovaMart is optimized for search engines, improving visibility and reach.
| shyam-raghuwanshi |
1,911,992 | crypto currency | Hi everyone. I’m rosela knott from montreal , canada. I was one of the victims of a crypto... | 0 | 2024-07-04T20:40:53 | https://dev.to/harrigan_knott_501369b061/crypto-currency-4429 | how, to, recover, my | Hi everyone. I’m rosela knott from montreal , canada. I was one of the victims of a crypto investment scam that saw me lose $302,000 worth of crypto. The crypto market is very volatile and a lot of individuals have lost some of their crypto coins and crypto assets to these online scams. My wallet address was compromised and I don’t know how they got a hold of my recovery phrase, but I was wiped out by these scam attacks. I lost all my crypto coins ( ethereum, Btc & USDT) and this left me devastated and depressed. I could have been homeless if it was not for the intervention of CYBER Computer specialist a well-known and highly trusted crypto company whom I contacted through a colleague of mine, I filed a complaint to CYBER Computer specialist and provided them with all the information including screenshots of the transactions. CYBER Computer specialist was able to recover my crypto assets in less than 48 hours. I was totally in awe of how quickly and efficiently CYBER Computer specialist was able to access my stolen crypto and recover my crypto wallet and assets. I’m highly impressed and I recommend their services to everyone. Truly information is very valuable and without the help of CYBER Computer specialist, I could have been homeless and left in a pitiful state. I’m very thankful for their help and I’m glad there is a safe way to fully recover crypto assets since the authorities cannot do anything to help get back lost funds. CYBER Computer specialist was able to help me trace these conmen with their sophisticated CYBER Computer specialist system. You can easily contact CYBER Computer specialist with the following information. Email <[email protected] > | harrigan_knott_501369b061 |
1,911,990 | HTML Forms- In details | Forms are essential components in HTML, allowing users to submit data to a server. This guide will... | 0 | 2024-07-04T20:32:43 | https://dev.to/ridoy_hasan/html-forms-in-details-514m |
Forms are essential components in HTML, allowing users to submit data to a server. This guide will explore the creation and handling of HTML forms, providing examples and best practices.
#### 1. Basic Form Structure
A basic HTML form is created using the `<form>` element. Within the form, various input elements like text fields, radio buttons, and submit buttons are used to collect user data.
**Example: Basic Form**
```html
<!DOCTYPE html>
<html>
<head>
<title>Basic Form Example</title>
</head>
<body>
<h1>Basic HTML Form</h1>
<form action="/submit" method="post">
<label for="name">Name:</label>
<input type="text" id="name" name="name"><br><br>
<label for="email">Email:</label>
<input type="email" id="email" name="email"><br><br>
<input type="submit" value="Submit">
</form>
</body>
</html>
```
**Output:**
```
Name: [__________]
Email: [__________]
[Submit]
```
In this example, the form includes text fields for the name and email, and a submit button to send the data.
#### 2. Radio Buttons and Checkboxes
Radio buttons allow users to select one option from a group, while checkboxes allow multiple selections.
**Example: Radio Buttons and Checkboxes**
```html
<!DOCTYPE html>
<html>
<head>
<title>Radio Buttons and Checkboxes Example</title>
</head>
<body>
<h1>Favorite Fruit</h1>
<form action="/submit" method="post">
<p>Please select your favorite fruit:</p>
<input type="radio" id="apple" name="fruit" value="apple">
<label for="apple">Apple</label><br>
<input type="radio" id="banana" name="fruit" value="banana">
<label for="banana">Banana</label><br>
<input type="radio" id="cherry" name="fruit" value="cherry">
<label for="cherry">Cherry</label><br><br>
<p>Please select your hobbies:</p>
<input type="checkbox" id="reading" name="hobby" value="reading">
<label for="reading">Reading</label><br>
<input type="checkbox" id="traveling" name="hobby" value="traveling">
<label for="traveling">Traveling</label><br>
<input type="checkbox" id="cooking" name="hobby" value="cooking">
<label for="cooking">Cooking</label><br><br>
<input type="submit" value="Submit">
</form>
</body>
</html>
```
**Output:**
```
Please select your favorite fruit:
( ) Apple
( ) Banana
( ) Cherry
Please select your hobbies:
[ ] Reading
[ ] Traveling
[ ] Cooking
[Submit]
```
In this example, users can select one fruit and multiple hobbies.
#### 3. Dropdown Lists
Dropdown lists allow users to select an option from a predefined list.
**Example: Dropdown List**
```html
<!DOCTYPE html>
<html>
<head>
<title>Dropdown List Example</title>
</head>
<body>
<h1>Select Your Country</h1>
<form action="/submit" method="post">
<label for="country">Country:</label>
<select id="country" name="country">
<option value="usa">USA</option>
<option value="canada">Canada</option>
<option value="uk">UK</option>
</select><br><br>
<input type="submit" value="Submit">
</form>
</body>
</html>
```
**Output:**
```
Country: [v USA ]
[ Canada ]
[ UK ]
[Submit]
```
In this example, the dropdown list includes three countries.
#### 4. Text Areas
Text areas allow users to enter multiple lines of text.
**Example: Text Area**
```html
<!DOCTYPE html>
<html>
<head>
<title>Text Area Example</title>
</head>
<body>
<h1>Leave a Comment</h1>
<form action="/submit" method="post">
<label for="comment">Comment:</label><br>
<textarea id="comment" name="comment" rows="4" cols="50"></textarea><br><br>
<input type="submit" value="Submit">
</form>
</body>
</html>
```
**Output:**
```
Comment:
[______________________________________________]
[______________________________________________]
[______________________________________________]
[______________________________________________]
[Submit]
```
In this example, the text area allows users to enter a comment.
#### Benefits of Using HTML Forms
- **Interactivity**: Forms enable user interaction and data submission.
- **Data Collection**: They facilitate the collection of user information.
- **Functionality**: Forms are essential for functionalities like user registration, login, and search.
### Conclusion
Understanding how to use HTML forms is essential for creating interactive and functional web applications. Whether you're using basic forms, radio buttons, checkboxes, dropdown lists, or text areas, mastering these elements will enhance the interactivity and usability of your web pages.
follow me on LinkedIn - https://www.linkedin.com/in/ridoy-hasan7 | ridoy_hasan |
|
1,911,987 | Facebook-OpenAI Knowledge Base Chatbot | This project is a Node.js REST API that integrates OpenAI's GPT-4 with Facebook Messenger to create a... | 0 | 2024-07-04T20:23:30 | https://dev.to/tahsin000/facebook-openai-knowledge-base-chatbot-23ih |
This project is a Node.js REST API that integrates OpenAI's GPT-4 with Facebook Messenger to create a knowledge-based chatbot. The chatbot uses predefined knowledge embedded in the prompt to respond to user messages on Facebook.
## Installation
1. **Initialize the project**:
```bash
npm init -y
```
2. **Install required packages**:
```bash
npm install express axios body-parser dotenv
```
## Environment Variables
Create a `.env` file in the root of your project and add the following environment variables:
```plaintext
FACEBOOK_PAGE_ID=your_page_id
FACEBOOK_APP_ID=your_app_id
FACEBOOK_APP_SECRET=your_app_secret
FACEBOOK_VERIFY_TOKEN=your_verify_token
OPENAI_API_KEY=your_openai_api_key
```
Replace `your_page_id`, `your_app_id`, `your_app_secret`, `your_verify_token`, and `your_openai_api_key` with your actual credentials.
## Usage
1. **Create the main server file `index.js`**:
```javascript
const express = require('express');
const bodyParser = require('body-parser');
const axios = require('axios');
require('dotenv').config();

const app = express();
app.use(bodyParser.json());

const { FACEBOOK_PAGE_ID, FACEBOOK_APP_ID, FACEBOOK_APP_SECRET, FACEBOOK_VERIFY_TOKEN, OPENAI_API_KEY } = process.env;

// Define your predefined knowledge base
const knowledgeBase = `
You are a chatbot with the following predefined knowledge:
1. The capital of France is Paris.
2. The Python programming language was created by Guido van Rossum and first released in 1991.
3. The tallest mountain in the world is Mount Everest, standing at 8,848 meters (29,029 feet).
4. The theory of relativity was developed by Albert Einstein.
5. The Great Wall of China is over 13,000 miles long.
Answer questions based on this knowledge.
`;

// Endpoint for Facebook to verify the webhook
app.get('/webhook', (req, res) => {
  const mode = req.query['hub.mode'];
  const token = req.query['hub.verify_token'];
  const challenge = req.query['hub.challenge'];

  if (mode && token) {
    if (mode === 'subscribe' && token === FACEBOOK_VERIFY_TOKEN) {
      console.log('WEBHOOK_VERIFIED');
      res.status(200).send(challenge);
    } else {
      res.sendStatus(403);
    }
  } else {
    // Respond instead of hanging when the expected query parameters are missing
    res.sendStatus(400);
  }
});

// Endpoint to handle messages
app.post('/webhook', async (req, res) => {
  const body = req.body;

  if (body.object === 'page') {
    // Acknowledge the event once, up front, so Facebook does not retry it
    res.status(200).send('EVENT_RECEIVED');

    for (const entry of body.entry) {
      const webhookEvent = entry.messaging[0];
      // Skip events that carry no text message (e.g. delivery receipts)
      if (!webhookEvent || !webhookEvent.message || !webhookEvent.message.text) {
        continue;
      }
      const senderId = webhookEvent.sender.id;
      const messageText = webhookEvent.message.text;

      // Process the message with OpenAI
      try {
        const openaiResponse = await axios.post('https://api.openai.com/v1/completions', {
          model: 'text-davinci-003',
          prompt: `${knowledgeBase}\n\nUser: ${messageText}\nChatbot:`,
          max_tokens: 150,
          stop: ['User:', 'Chatbot:']
        }, {
          headers: {
            'Authorization': `Bearer ${OPENAI_API_KEY}`
          }
        });

        const openaiMessage = openaiResponse.data.choices[0].text.trim();

        // Send response back to Facebook Messenger
        // (note: in practice Facebook expects a Page access token here)
        await axios.post(`https://graph.facebook.com/v11.0/me/messages?access_token=${FACEBOOK_APP_ID}|${FACEBOOK_APP_SECRET}`, {
          recipient: {
            id: senderId
          },
          message: {
            text: openaiMessage
          }
        });
      } catch (error) {
        console.error('Error processing message:', error);
      }
    }
  } else {
    res.sendStatus(404);
  }
});

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});
```
## Code Explanation
- **Webhook Verification**: Facebook sends a GET request to verify the webhook. The API responds with a challenge token to confirm the webhook (see the sample request after this list).
- **Message Handling**: When a user sends a message to your Facebook page:
- The API receives a POST request.
- It extracts the message and sender ID.
  - It sends the message to OpenAI's completions API (the `text-davinci-003` model in this code) to generate a response using the predefined knowledge base.
- It sends the generated response back to the user on Facebook Messenger.
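You can exercise the verification handshake locally before wiring up Facebook. A minimal sketch, assuming the server runs on port 3000 and `FACEBOOK_VERIFY_TOKEN=your_verify_token` as in the `.env` above:
```bash
curl "http://localhost:3000/webhook?hub.mode=subscribe&hub.verify_token=your_verify_token&hub.challenge=CHALLENGE_ACCEPTED"
# Expected response body: CHALLENGE_ACCEPTED
```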
## Run the Server
To start the server, run:
```bash
node index.js
``` | tahsin000 |
|
1,910,188 | Any Video Downloader | In the digital age, video content is a cornerstone of online interaction and entertainment. As the... | 0 | 2024-07-03T13:19:59 | https://dev.to/shamiya_akther_0d21e98896/any-vedio-downloder-408i | In the digital age, video content is a cornerstone of online interaction and entertainment. As the consumption of videos increases, so does the need for effective [any video downloader](https://anyvideodownloader.online/)s. These tools allow users to save videos from various platforms, including YouTube, Vimeo, and social media sites, for offline viewing. This guide will cover the essential features, advantages, and important considerations when selecting a video downloader.
Essential Features of Video Downloaders
1. Platform Compatibility: Leading any video downloaders support a wide array of platforms. This includes not just YouTube but also Facebook, Instagram, Dailymotion, and more. This broad compatibility ensures that users can download content from multiple sources without needing different tools.
2. Quality Selection: Top-tier downloaders offer various resolution options, from standard definition (SD) to high definition (HD) and 4K. This allows users to choose the best quality for their device’s display and storage capacity, enhancing the viewing experience.
3. Batch Downloading: The ability to download multiple videos at once is a significant time-saver. Batch downloading capabilities mean users can queue several videos, which the downloader will process sequentially or simultaneously, depending on its design.
4. Conversion Tools: Many video downloaders feature built-in converters that allow users to change video formats (e.g., MP4, AVI, or MP3). This is especially useful for extracting audio from videos or ensuring compatibility with different devices.
5. User-Friendly Design: A simple, intuitive interface is crucial for a positive user experience. The best video downloaders are designed with ease of use in mind, allowing even those with limited technical skills to navigate the software effortlessly.
Advantages of Using Video Downloaders
1. Offline Access: One of the main benefits of any video downloaders is the ability to watch videos offline. This is ideal for those with limited internet access or who wish to save on data usage, providing uninterrupted viewing without the need for a constant internet connection.
2. Content Preservation: Videos can be removed or become unavailable on their original platforms. Downloading ensures you have a personal copy that remains accessible regardless of changes to the online availability of the content. This is particularly valuable for educational or instructional videos that you may need in the future.
3. Ad-Free Viewing: Many online platforms include ads that can disrupt the viewing experience. Downloading videos allows you to watch them without interruptions from ads, leading to a more enjoyable and seamless experience.
4. Convenient Viewing: Saving videos on your device means you can watch them anytime, anywhere, without being tied to an internet connection. Whether you’re traveling, commuting, or in a location with poor connectivity, your downloaded videos are always ready for viewing.
Key Considerations When Choosing a Video Downloader
1. Legal and Ethical Issues: It’s crucial to understand that downloading videos from some platforms might violate their terms of service or copyright laws. Always ensure you have the right to download the content and that your actions comply with legal standards.
2. Security and Safety: Downloading software from reputable sources is vital to avoid malware and security threats. Ensure the downloader you choose has positive reviews and comes from a trusted developer.
3. Cost: While many video downloaders are free, some offer premium features at a cost. Assess whether the additional features justify the investment based on your needs.
4. Device Compatibility: Ensure the video downloader is compatible with your operating system and device. Some downloaders are specifically designed for Windows, macOS, or mobile platforms, so select one that meets your specific requirements.
In summary, any video downloaders are essential tools for avid video consumers. By understanding their key features, benefits, and necessary considerations, you can choose the right downloader to enhance your viewing experience.
[https://anyvideodownloader.online/](url) | shamiya_akther_0d21e98896 |
|
1,911,985 | Why Losing Weight is Only Possible with Programming | A few years ago, I read an article that detailed numerous calculations, formulas, and explanations... | 0 | 2024-07-04T20:19:24 | https://dev.to/ivanbalmasov/why-losing-weight-is-only-possible-with-programming-4ifo | A few years ago, I read an article that detailed numerous calculations, formulas, and explanations about why it is so difficult for us developers to lose weight. I don't understand why there's so much material and discussion about weight loss when there's only one way in our universe to shed those extra pounds (maybe different laws of physics exist in another universe): consume fewer calories than you expend. There is no other way to lose weight. Nature hasn't invented one, and it hasn't emerged in the evolutionary process. So, where does programming come into play?
This article is divided into two sections: the technical part, about a Java application written for AWS, and a discussion on weight loss, motivation, etc. If you're only interested in the technical part, you can skip to "Technical Part"; otherwise, I invite you to read on.
**Rationale**
I don't believe that losing weight is a hard and complicated process. Losing excess weight is a long process that ideally (for me) evolves into a lifelong journey that never ends. The truth is, the process shifts to another category where you're not trying to lose weight but rather you're becoming an athlete for your own health and longevity. So, does this mean you'll never achieve your desired result? That's exactly right. Once you set a goal like: "I need to lose weight," you’re doomed to fail. The goal should be rephrased: "I want to live a healthy and fulfilling life." The goal of losing weight will never lead to satisfaction because your reflection in the mirror will never look the way you want, and it will never be like photos of fitness bloggers on Instagram, simply because you're probably not a fitness blogger.
When you set the goal to lose weight, questions immediately arise: how quickly will you lose weight and how? Most likely you're aiming for as quickly as possible, right? This means either starting to drink some health-wrecking fat burners or going on a strict diet and eventually gaining back more than you lost, repeating this cycle over and over. How many times have I seen this cycle at the gym: people start a new life, only to then return to their old one, gaining weight faster and falling into an even deeper pit. Let's even assume you've lost the desired weight. Do you look better? Probably not, because losing water, fat, and muscle doesn't mean looking better; for most people, it's the opposite effect. And also, look better for whom? For yourself, for Instagram photos, or for your partner? In my 11 years at various gyms, I have never met anyone satisfied with the result of such a goal in the long run. NEVER! I could go on and on with examples. For myself, I decided once and for all: “losing weight” is not the goal and never should be. I dare to suggest you agree with me, at least for the duration of this article.
**Goal**
Our goal is to be an athlete for the rest of our lives and to improve our quality of life. Why should the goal be lifelong? Because I am 100 percent sure that the overwhelming majority of people on earth share one thing in common: we want to live happily. What this means is up to you, and I'm not talking about destructive or unattainable desires, maniacs or psychopaths, but about most people. So, if we want to live happily, why not set a goal that we can maintain throughout our lives, not just from ages 23 to 31? Maintaining such a goal is impossible without daily consistency, so we’ll now shift our discussion to focus solely on food. Not on workouts, being more active, running, getting in your steps, etc., but on what we cook and eat every day. Proper nutrition is probably 80% of success in achieving the goal, if not more. If you can control what you eat, you can control your weight, your well-being to some extent, improve your sleep, and ultimately your life.
I know it sounds crazy, but just consider how much of your life is spent eating, and you quickly realize the immense impact of your diet over the years. I would also add sleep to this, but that's a whole different conversation, so I'll just leave this [link ](https://www.amazon.com/Why-We-Sleep-Unlocking-Dreams/dp/1501144316/ref=asc_df_1501144316/?tag=hyprod-20&linkCode=df0&hvadid=693363255003&hvpos=&hvnetw=g&hvrand=15961384331939870146&hvpone=&hvptwo=&hvqmt=&hvdev=c&hvdvcmdl=&hvlocint=&hvlocphy=9012391&hvtargid=pla-385931137145&psc=1&mcid=2e138cd1bb4938ac9d7cf58fc82c10e1&gad_source=1)to a great book on that topic.
**A Simple Strategy for Success**
I'll now describe a strategy that, in my opinion, is foolproof: changes should happen very slowly and start with the simplest steps. Currently, I know my calorie intake, which is 2300 kcal per day, and I count every intake. If I want to lose weight, I gradually reduce calories; if I want to gain weight, I increase protein and carbs. But it's challenging and time-consuming for many people to start counting if they've never done it before, and most likely, they won't last even a couple of months. Let's simplify the strategy. Let your first step be to start recording what you eat, maybe using a note on your phone, and continue for a few weeks. Then, start analyzing: how can I change my diet? For example, if you usually eat two pieces of candy or drink a soda on your way home from work, you can cut that out, and so on, step by step. Remove some things entirely, replace others with protein and fiber. Does this still seem too complicated? No problem, simplify your first step even more, so that you can make it a lifelong habit. For example: I will drink 2-3 liters of water every day. I have a 2-liter bottle, and by the end of the day, it should be empty. I think you get the pattern: it’s doing something that you can do for a week, two, a month, and so on, without stopping and never quitting. And add such changes as you get used to the previous ones.
**Technical Part**
Now, let's transition to programming and how it's related. As mentioned earlier, I count calories, which means my wife and I weigh the raw food we eat. We don't always have time to cook every day, so we cook for a week. The challenge is, for example, we cook 500 grams of dry rice, but how much cooked rice is that? It varies every time due to many factors, but the main point is: you need to know the raw weight of the food on your plate if you're counting calories. Do you see the potential for automation? Every time we take cooked rice (or any other food), we need to calculate it using the same formula:
cooked weight to take = (X / N) * Y, where:
- X is the raw weight you want to eat,
- N is the total raw weight of all the food you cooked,
- Y is the total cooked weight of all the food you cooked.
**Automation Potential**
We realized that this could be more convenient, which led to the idea of creating an app that remembers and calculates this for us. The idea is simple: we record the food, raw weight, cooked weight, and when eating, we choose the dish and the desired raw weight, and the app gives you the corresponding cooked weight. For example, if you cooked 500 grams of dry rice, resulting in 1210 grams of cooked rice, to eat 40 grams raw, you need to take 96 grams cooked.
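Plugging those numbers into the formula above:
```
(40 / 500) * 1210 = 96.8 ≈ 96 grams of cooked rice
```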
![Saving prepared food](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x4t3c87yfffyiauict7d.jpg)
![Eating process](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o761rh0sqx4dz0x91ifd.png)
![Eating process](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0rd6tvkse8fezxuyalfi.jpg)
**Implementation**
For the implementation, I didn't want a separate app for such a minor function, so I chose something I use constantly: a Telegram Bot. I wanted to learn AWS Lambda, so that was an easy choice. For dependency injection, I used Dagger. For database connection, I used standard tools from java.sql, but in retrospect, I should have spent more time finding a lightweight ORM solution. Writing a lot of auxiliary code took a lot of time and made the project more complex than necessary.
**Core Functions**
I needed to achieve two things: remembering the prepared food and calculating the equivalent raw weight in cooked form. The bot has two buttons: "Cooking" and "Eating". Pressing one of these sends a message to the Telegram server, which then sends it to the webhook API Gateway, triggering a Lambda function. This request goes to the handleRequest method, where the message and additional information are processed into an update object. With each new user message, we need to know the user's current step to send the correct response and perform the corresponding action. The exceptions are commands like /start, /help, and /cancel. /start and /help are standard commands suggested by Telegram for a better user experience, displaying messages to the user without database interaction. The /cancel command resets the user to the beginning, canceling all previous actions. Unfortunately, I didn't implement a rollback for deleted foods; it just returns the user to the main menu. The /delete command also initiates a chain of actions, saving the user's step in the database.
**Project Structure**
- The package com.nutritrack.bot.service.step contains the logic for all user steps.
- Each class implementing StepService represents a user step: SteplessServiceImpl, RawFoodWeighingServiceImpl, etc. (see the sketch after this list).
- The UserStep class lists all existing user steps.
- At the end of each service, the user's step is updated and saved in the database, ensuring the correct service is called with the correct message for its step.
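A minimal sketch of what this step-dispatch pattern can look like; the method names and signatures below are illustrative assumptions, not the project's exact API:
```java
import java.util.List;

// Hypothetical shape of the step services described above.
interface StepService {
    UserStep step();                            // which UserStep this service handles
    void process(String message, long chatId);  // react to the user's message at that step
}

enum UserStep { STEPLESS, RAW_FOOD_WEIGHING /* ... */ }

class StepDispatcher {
    private final List<StepService> services;

    StepDispatcher(List<StepService> services) {
        this.services = services;
    }

    void onMessage(UserStep currentStep, String message, long chatId) {
        // The step saved in the database decides which service handles the message.
        services.stream()
                .filter(s -> s.step() == currentStep)
                .findFirst()
                .ifPresent(s -> s.process(message, chatId));
    }
}
```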
**Conclusion**
This seemingly simple project involved significant thought and learning. The code is available on GitHub, and you can try the bot to test response time and see how it's implemented. If you want to join and contribute, I'd be thrilled.
**Final Thoughts**
Why did I create this project and write this article? First, for convenience and saving time in my family's daily life. Second, for motivation: I created a project that's not perfect, but it helps me stay on track with my goals every day. Isn't that a worthy reason for the time spent? Additionally, there are many positive side effects, such as learning new technologies (Dagger and AWS Lambda) and finishing a pet project that's on GitHub rather than forgotten on an external hard drive. If this work inspires or helps even one person, it was worth it! If you're lacking motivation to start something big, begin with something small. Your own project, article, or contribution to someone else's application could lead to positive changes, so why not try it? Thanks for your attention!
**P.S.**
As Master Yoda said: "Fewer tests you write in applications, more calories you burn during debugging. May the Force be with you."
Book: [Matthew Walker; Why We Sleep](https://www.amazon.com/Why-We-Sleep-Unlocking-Dreams/dp/1501144316/ref=asc_df_1501144316/?tag=hyprod-20&linkCode=df0&hvadid=693363255003&hvpos=&hvnetw=g&hvrand=15961384331939870146&hvpone=&hvptwo=&hvqmt=&hvdev=c&hvdvcmdl=&hvlocint=&hvlocphy=9012391&hvtargid=pla-385931137145&psc=1&mcid=2e138cd1bb4938ac9d7cf58fc82c10e1&gad_source=1)
Connect with me: [LinkedIn](https://www.linkedin.com/in/balmasov-ivan/)
Project on [GitHub](https://github.com/balmasov/nutri-track-bot)
[Telegram Bot](https://t.me/NutriTrackBot)
| ivanbalmasov |
|
1,911,984 | Introduction to Blockchain and AI | Blockchain and Artificial Intelligence (AI) are two of the most revolutionary technologies in the... | 27,673 | 2024-07-04T20:15:15 | https://dev.to/rapidinnovation/introduction-to-blockchain-and-ai-5g7p | Blockchain and Artificial Intelligence (AI) are two of the most revolutionary
technologies in the modern digital era. Both have the potential to transform
industries, enhance operational efficiencies, and create new opportunities for
innovation. While blockchain offers a decentralized and secure platform for
transactions and data sharing, AI contributes intelligence and adaptability to
automated processes. The convergence of these technologies can lead to the
development of more secure, transparent, and efficient systems.
## Overview of Blockchain Technology
Blockchain technology is fundamentally a decentralized digital ledger that
records transactions across multiple computers in such a way that the
registered transactions cannot be altered retroactively. This technology is
the backbone of cryptocurrencies like Bitcoin and Ethereum, but its potential
applications span far beyond just financial transactions. Industries such as
healthcare, supply chain management, and cybersecurity are also beginning to
adopt blockchain to secure data, manage records, and ensure transparency.
## Overview of Artificial Intelligence
Artificial Intelligence (AI) refers to the simulation of human intelligence in
machines that are programmed to think like humans and mimic their actions. AI
can be categorized into two primary types: narrow AI, which is designed to
perform a narrow task (like facial recognition or internet searches) and
general AI, which performs any intellectual task that a human being can. AI
works through a combination of large datasets, machine learning algorithms,
and computational power to learn from patterns or features in the data.
## Smart Contracts
Smart contracts are self-executing contracts with the terms of the agreement
directly written into lines of code. The concept was first proposed by Nick
Szabo in 1994, but it has gained significant traction with the advent of
blockchain technology, particularly Ethereum. Smart contracts automatically
enforce and execute the terms of an agreement based on a predefined set of
rules, eliminating the need for intermediaries and reducing the potential for
disputes.
## Decentralized AI Models
Decentralized AI involves distributing the tasks of artificial intelligence
systems across multiple decentralized nodes, typically leveraging blockchain
technology. This approach not only enhances the security and privacy of data
but also democratizes access to AI technologies. Decentralized AI models can
operate transparently and without a single point of failure, making them
robust against attacks and biases that might affect a centralized AI system.
## Improved Efficiency and Automation
The integration of technologies like AI, blockchain, and IoT has significantly
improved efficiency and automation across various sectors. These technologies
automate complex processes, reduce human error, and streamline operations,
leading to cost savings and enhanced productivity.
## AI in Blockchain Transactions
The integration of Artificial Intelligence (AI) into blockchain transactions
represents a significant advancement in the technology sector. AI can enhance
blockchain technology by improving the efficiency and security of
transactions. For instance, AI algorithms can analyze patterns and detect
fraudulent activities or anomalies in blockchain networks, thereby increasing
the security of digital transactions.
## Blockchain for AI Data Integrity
Blockchain technology offers a robust solution for ensuring the integrity and
security of data used in AI systems. By storing data across a decentralized
network, blockchain provides a tamper-proof data management system, which is
crucial for maintaining the accuracy and reliability of AI applications.
## Innovative Applications
The convergence of AI and blockchain is spawning a range of innovative
applications across various industries. In the financial sector, AI-enhanced
blockchain technology is being used to streamline processes, enhance customer
service, and improve risk management. In the supply chain industry, blockchain
and AI are being combined to create more transparent and efficient systems.
## Healthcare
The healthcare sector has undergone significant transformations over the past
few years, primarily driven by technological advancements and the global
pandemic. The integration of AI and machine learning has revolutionized
patient care, making diagnostics faster and more accurate. Telemedicine has
become a staple, providing access to healthcare services for people in remote
areas and reducing the strain on traditional healthcare facilities.
## Finance
The finance sector has seen a dramatic shift towards digitalization, with
fintech companies leading the way in revolutionizing financial services.
Mobile banking, digital wallets, and peer-to-peer payment systems have become
the norm, offering consumers convenience and flexibility in managing their
finances. Cryptocurrencies and blockchain technology are also gaining
traction, providing new ways for secure and transparent financial
transactions.
📣📣Drive innovation with intelligent AI and secure blockchain technology! Check
out how we can help your business grow!
[Blockchain Development](https://www.rapidinnovation.io/service-development/blockchain-app-development-company-in-usa)
[AI Development](https://www.rapidinnovation.io/ai-software-development-company-in-usa)
## URLs
* <https://www.rapidinnovation.io/post/blockchain-and-ai-leading-trends-and-investment-opportunities-today>
## Hashtags
#BlockchainRevolution
#AIInnovation
#SmartContracts
#DecentralizedAI
#TechIntegration
| rapidinnovation |
|
1,911,983 | Hong kong company registration | Hong Kong is a leading global financial hub, renowned for its business-friendly environment,... | 0 | 2024-07-04T20:11:23 | https://dev.to/capitalfund07/hong-kong-company-registration-1f9 | Hong Kong is a leading global financial hub, renowned for its business-friendly environment, strategic location, and robust legal framework. For entrepreneurs and corporations looking to establish a presence in Asia, forming a company in Hong Kong presents numerous advantages. This guide will provide an in-depth look at the process of company formation in Hong Kong, including company registration, setup, offshore company formation, and incorporation.
Why Choose Hong Kong for Company Formation?
1. Business-Friendly Environment:
Hong Kong is known for its low tax regime, free trade policies, and minimal bureaucratic hurdles. The region's commitment to maintaining a transparent and efficient business environment makes it an attractive destination for entrepreneurs.
2. Strategic Location:
Situated at the heart of Asia, Hong Kong serves as a gateway to Mainland China and other major Asian markets. This strategic location facilitates easy access to a vast consumer base and numerous business opportunities.
**_[Hong Kong company registration](https://onlinecompanyregister.com/hong-kong-company-formation/)_**
3. Robust Legal Framework:
Hong Kong's legal system, based on common law, offers strong protection for businesses and investors. The region's judicial independence and adherence to international legal standards provide a secure environment for business operations.
4. Ease of Company Formation:
The process of setting up a company in Hong Kong is straightforward and can be completed quickly. The government has streamlined procedures to make it easy for both local and foreign entrepreneurs to establish their businesses.
Company Registration in Hong Kong
1. Choosing a Company Structure:
The first step in company registration is to choose an appropriate company structure. The most common types are:
- Private Limited Company (Ltd)
- Public Limited Company (Plc)
- Branch Office
- Representative Office
The Private Limited Company is the most popular choice due to its limited liability protection and flexibility.
2. Company Name Reservation:
Before registering a company, you need to ensure that the desired company name is available. The name must not be identical to any existing company name in Hong Kong.
3. Preparing Documentation:
To register a company, you will need to prepare and submit the following documents:
- Articles of Association
- Incorporation Form (Form NNC1)
- Copies of directors' and shareholders' identification documents
- Proof of registered office address
4. Submitting the Application:
The completed documents must be submitted to the Companies Registry along with the registration fee. The process can be done online or by visiting the Companies Registry office in person.
5. Obtaining the Certificate of Incorporation:
Once the application is approved, the Companies Registry will issue a Certificate of Incorporation. This certificate serves as proof that the company is legally registered and can commence business operations.
Offshore Company Formation in Hong Kong
For businesses looking to take advantage of Hong Kong's favorable tax regime and strategic location, setting up an offshore company is an attractive option.
1. Benefits of Offshore Companies:
- Tax Efficiency: Offshore companies in Hong Kong benefit from low tax rates and exemptions on foreign-sourced income.
- Privacy: Offshore companies provide a higher level of confidentiality for business owners and shareholders.
- Asset Protection: Offshore companies offer robust protection of assets from legal and political risks.
2. Steps for Offshore Company Setup:
- Choose a company structure that suits your business needs.
- Reserve a unique company name.
- Prepare and submit the required incorporation documents to the Companies Registry.
- Obtain a Certificate of Incorporation upon approval.
3. Compliance and Reporting:
Offshore companies in Hong Kong must comply with local regulations, including annual filing of financial statements, tax returns, and maintaining proper records of business transactions.
Key Considerations for Company Formation
1. Registered Office:
Every company in Hong Kong must have a registered office address where official correspondence can be sent. This address must be a physical location within Hong Kong.
2. Company Secretary:
A company secretary must be appointed within six months of incorporation. The secretary can be an individual residing in Hong Kong or a corporate entity with a registered office in Hong Kong.
3. Shareholders and Directors:
A Hong Kong company must have at least one shareholder and one director. The shareholder can be an individual or a corporate entity, and there are no residency requirements for directors.
4. Annual Requirements:
Companies in Hong Kong are required to hold an Annual General Meeting (AGM) and file an Annual Return with the Companies Registry. Additionally, audited financial statements must be prepared and submitted annually.
Conclusion
Forming a company in Hong Kong offers numerous advantages, from a business-friendly environment and strategic location to a robust legal framework and ease of registration. Whether you are looking to establish a local business or set up an offshore company, Hong Kong provides a secure and efficient platform for business operations. By following the outlined steps and understanding the key considerations, entrepreneurs can successfully navigate the process of company formation and leverage the opportunities available in this dynamic region. | capitalfund07 |
|
1,911,982 | Case Study: Generic Matrix Class | This section presents a case study on designing classes for matrix operations using generic types.... | 0 | 2024-07-04T20:09:15 | https://dev.to/paulike/case-study-generic-matrix-class-4bb0 | java, programming, learning, beginners | This section presents a case study on designing classes for matrix operations using generic types. The addition and multiplication operations for all matrices are similar except that their element types differ. Therefore, you can design a superclass that describes the common operations shared by matrices of all types regardless of their element types, and you can define subclasses tailored to specific types of matrices. This case study gives implementations for two types: **int** and **Rational**. For the **int** type, the wrapper class **Integer** should be used to wrap an **int** value into an object, so that the object is passed in the methods for operations.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8xab4hxbut97p2el9frt.png)
The class diagram is shown in the figure above. The methods **addMatrix** and
**multiplyMatrix** add and multiply two matrices of a generic type **E[][]**. The static method **printResult** displays the matrices, the operator, and their result. The methods **add**, **multiply**, and **zero** are abstract, because their implementations depend on the specific type of the array elements. For example, the **zero()** method returns **0** for the **Integer** type and **0/1** for the **Rational** type. These methods will be implemented in the subclasses in which the matrix element type is specified.
**IntegerMatrix** and **RationalMatrix** are concrete subclasses of **GenericMatrix**. These two classes implement the **add**, **multiply**, and **zero** methods defined in the **GenericMatrix** class.
The code below implements the **GenericMatrix** class. **<E extends Number>** in line 3 specifies that the generic type is a subtype of **Number**. Three abstract methods—**add**, **multiply**, and **zero**—are defined in lines 5, 8, and 11. These methods are abstract because we cannot implement them without knowing the exact type of the elements. The **addMaxtrix** (lines 14–28) and **multiplyMatrix** (lines 31–52) methods implement the methods for adding and multiplying two matrices. All these methods must be nonstatic, because they use generic type **E** for the class. The **printResult** method (lines 55–78) is static because it is not tied to specific instances.
The matrix element type is a generic subtype of **Number**. This enables you to use an object of any subclass of **Number** as long as you can implement the abstract **add**, **multiply**, and **zero** methods in subclasses.
The **addMatrix** and **multiplyMatrix** methods (lines 14–52) are concrete methods. They are ready to use as long as the **add**, **multiply**, and **zero** methods are implemented in the subclasses.
The **addMatrix** and **multiplyMatrix** methods check the bounds of the matrices before performing operations. If the two matrices have incompatible bounds, the program throws an exception (lines 17, 34).
```java
package demo;

public abstract class GenericMatrix<E extends Number> {
  /** Abstract method for adding two elements of the matrices */
  protected abstract E add(E o1, E o2);

  /** Abstract method for multiplying two elements of the matrices */
  protected abstract E multiply(E o1, E o2);

  /** Abstract method for defining zero for the matrix element */
  protected abstract E zero();

  /** Add two matrices */
  public E[][] addMatrix(E[][] matrix1, E[][] matrix2) {
    // Check bounds of the two matrices
    if ((matrix1.length != matrix2.length) || (matrix1[0].length != matrix2[0].length)) {
      throw new RuntimeException("The matrices do not have the same size");
    }

    E[][] result = (E[][]) new Number[matrix1.length][matrix1[0].length];

    // Perform addition
    for (int i = 0; i < result.length; i++)
      for (int j = 0; j < result[i].length; j++) {
        result[i][j] = add(matrix1[i][j], matrix2[i][j]);
      }

    return result;
  }

  /** Multiply two matrices */
  public E[][] multiplyMatrix(E[][] matrix1, E[][] matrix2) {
    // Check bounds
    if (matrix1[0].length != matrix2.length) {
      throw new RuntimeException("The matrices do not have compatible size");
    }

    // Create result matrix
    E[][] result = (E[][]) new Number[matrix1.length][matrix2[0].length];

    // Perform multiplication of two matrices
    for (int i = 0; i < result.length; i++) {
      for (int j = 0; j < result[0].length; j++) {
        result[i][j] = zero();

        for (int k = 0; k < matrix1[0].length; k++) {
          result[i][j] = add(result[i][j], multiply(matrix1[i][k], matrix2[k][j]));
        }
      }
    }

    return result;
  }

  /** Print matrices, the operator, and their operation result */
  public static void printResult(Number[][] m1, Number[][] m2, Number[][] m3, char op) {
    for (int i = 0; i < m1.length; i++) {
      for (int j = 0; j < m1[0].length; j++)
        System.out.print(" " + m1[i][j]);

      if (i == m1.length / 2)
        System.out.print(" " + op + " ");
      else
        System.out.print("   ");

      for (int j = 0; j < m2.length; j++)
        System.out.print(" " + m2[i][j]);

      if (i == m1.length / 2)
        System.out.print(" = ");
      else
        System.out.print("   ");

      for (int j = 0; j < m3.length; j++)
        System.out.print(m3[i][j] + " ");

      System.out.println();
    }
  }
}
```
The code below implements the **IntegerMatrix** class. The class extends
**GenericMatrix<Integer>** in line 3. After the generic instantiation, the **add** method in **GenericMatrix<Integer>** is now **Integer add(Integer o1, Integer o2)**. The **add**, **multiply**, and **zero** methods are implemented for **Integer** objects. These methods are still protected, because they are invoked only by the **addMatrix** and **multiplyMatrix** methods.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kv7nhztqdu20qbu0v497.png)
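Since that listing appears above only as an image, here is a sketch of the class reconstructed from the description; the method bodies follow directly from the text:
```java
public class IntegerMatrix extends GenericMatrix<Integer> {
  @Override // Implement the add method for adding two integers
  protected Integer add(Integer o1, Integer o2) {
    return o1 + o2;
  }

  @Override // Implement the multiply method for multiplying two integers
  protected Integer multiply(Integer o1, Integer o2) {
    return o1 * o2;
  }

  @Override // Specify zero for an Integer
  protected Integer zero() {
    return 0;
  }
}
```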
The code below implements the **RationalMatrix** class. The **Rational** class was introduced in [Rational.java](https://dev.to/paulike/case-study-the-rational-class-1o41). **Rational** is a subtype of **Number**. The **RationalMatrix** class extends **GenericMatrix<Rational>** in line 3. After the generic instantiation, the **add** method in **GenericMatrix<Rational>** is now **Rational add(Rational r1, Rational r2)**. The **add**, **multiply**, and **zero** methods are implemented for **Rational** objects. These methods are still protected, because they are invoked only by the **addMatrix** and **multiplyMatrix** methods.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/my5jz6aynaf35bwgtwur.png)
The code below gives a program that creates two **Integer** matrices (lines 7–8) and an **IntegerMatrix** object (line 11), and adds and multiplies two matrices in lines 14 and 17.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o6prdtj6rejccpstu7yf.png)
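That test program is also shown as an image; a sketch matching the description might look like this (the matrix contents are illustrative):
```java
public class TestIntegerMatrix {
  public static void main(String[] args) {
    // Create two Integer matrices (contents chosen for illustration)
    Integer[][] m1 = {{1, 2, 3}, {4, 5, 6}, {1, 1, 1}};
    Integer[][] m2 = {{1, 1, 1}, {2, 2, 2}, {0, 0, 0}};

    // Create an IntegerMatrix object, then add and multiply the matrices
    IntegerMatrix integerMatrix = new IntegerMatrix();
    GenericMatrix.printResult(m1, m2, integerMatrix.addMatrix(m1, m2), '+');
    GenericMatrix.printResult(m1, m2, integerMatrix.multiplyMatrix(m1, m2), '*');
  }
}
```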
The code below gives a program that creates two **Rational** matrices (lines 7–13) and a **RationalMatrix** object (line 16) and adds and multiplies two matrices in lines 19 and 22.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qy472tiq28lijqd2lyj8.png) | paulike |
1,911,981 | Event loop, V8 engine and Libuv in node.js | Could someone explain to me what an event loop is in the context of Node.js? and what is the role of... | 0 | 2024-07-04T20:09:07 | https://dev.to/azuli_jerson_86d70f94325d/event-loop-v8-engine-and-libuv-in-nodejs-4ab3 | help | Could someone explain to me what an event loop is in the context of
Node.js? And what roles do the V8 engine and the libuv library play in how Node.js works? | azuli_jerson_86d70f94325d |
1,911,976 | Get online certification help now | Introduction Gaining your GED is a big step that can lead to better job prospects, higher... | 0 | 2024-07-04T20:01:13 | https://dev.to/john_hoffmann/get-online-certification-help-now-884 | ## Introduction
Gaining your GED is a big step that can lead to better job prospects, higher education, and personal growth. However, studying for the [GED test](https://www.ged.com/en/) can be challenging, especially if you have other obligations like work, family, or school. That's where our GED test help service comes in when you ask whether you can [pay someone to take my GED test online](https://onlinecertificationhelp.com/). We offer all-around, adaptable, and low-cost help to help you succeed.
## All The Study Materials You Need
Having access to good study tools is one of the most important parts of getting ready for the GED test. Our [GED online test](https://onlinecertificationhelp.com/) help service has a huge collection of materials for all test subjects, including language arts, math, science, and history. Our tools are made to be very useful and easy to understand so that you can get the information and skills you need to do well, even when you ask us to [pay someone to take my GED exam](https://onlinecertificationhelp.com). We have tools that will help you learn in any way you like, whether you learn best by watching videos or reading through detailed study guides. Our practice questions and quizzes also help you remember what you've learned, making sure you're fully ready for the real test.
## Help And Advice From Experts
It can be hard to study for the GED test by yourself, but you don't have to. Our team of experienced teachers and trainers is here to help you prepare and give you expert advice along the way. We offer one-on-one tutoring sessions that are designed to fit your goals and speed of learning. It is easier for you to understand and remember things when our teachers break down difficult ideas into manageable pieces. In addition to helping you with your schoolwork, our teachers also give you positive advice to help you stay on track and feel good about your progress. With our help, you'll feel confident and know what you need to know to do well on the GED test.
## Options For Flexibility
We at [OnlineCertificationHelp](https://onlinecertificationhelp.com) know that everyone has their own routines and ways they like to learn. That's why our GED online test help service gives you a variety of ways to learn that fit your schedule. Our platform is made to fit easily into your schedule, whether you like to study at your own pace or in real-time online lessons. You can study at your own pace with our self-paced lessons, which gives you the freedom to study while also taking care of other things. If you do better in a more structured setting, our live online classes let you learn in an engaging way and get feedback from teachers in real time. You can study for the GED test on your own time with our flexible learning choices, no matter what works best for you.
## We Take Practice Tests
Taking practice tests is one of the best ways to get ready for the GED test. Our online GED test help service comes with a number of sample tests that are very much like the real test. These tests help you get used to the style and timing of the real test, which lowers your stress and boosts your performance. After each practice test, you'll get thorough feedback that shows you what you did well and what you could do better. This real-time feedback lets you spend your study time on the areas that need it the most, which helps you keep getting better and see how you're doing over time. You'll feel ready and sure of yourself for the GED test after taking our practice tests and getting feedback.
## Reasonably Priced Service
We think that everyone, no matter how much money they have, should be able to get a good education. Our prices for our GED online test help service are low enough that all students can afford it. We have different payment plans and price plans to fit different budgets, which makes it easier for you to save for the future. Also, our online platform can be accessed from anywhere with an internet link, so you don't have to pay for expensive moves or commutes. We offer cheap and easy entry to our GED online test help service, which will help you pass the test and open the door to a better future.
## Final Words
Our GED online test help service is meant to give you all the help, expert advice, and flexible learning choices you need to do well. You can feel confident as you study for the GED test with our help, which will open up new career and personal growth possibilities for you. Start getting better right away with our GED online test help.
| john_hoffmann |
|
1,911,974 | Android users: I need your help! | I'm trying to release a version of my new game (Touch me When) for Android, but since I'm an... | 0 | 2024-07-04T20:00:39 | https://dev.to/marciofrayze/usuarios-android-preciso-da-sua-ajuda-2f09 | android, app | I'm trying to release a version of my new game (Touch me When) for Android, but since I'm an individual developer, Google imposes a stricter verification process before allowing me to publish a new app.
They require a 14-day closed beta testing period with at least 20 testers. Can you help me with this??
All you need to do is download the beta version and open it every now and then (once a day) so you count as an active user.
To become a tester, just join a Google group (they use this as a way to control who can download the app, so it must be with the same email you use in your Android account) at: [https://groups.google.com/g/touchmewhen-testers](https://groups.google.com/g/touchmewhen-testers).
Then you should be able to download the app at: [https://play.google.com/store/apps/details?id=tech.segunda.touchwhen](https://play.google.com/store/apps/details?id=tech.segunda.touchwhen).
Thank you so much for your help! 😄 | marciofrayze |
1,911,969 | Understanding git bisect! | Have you ever been put in a situation where a bug has been spotted on your project, which you know... | 0 | 2024-07-04T19:52:52 | https://dev.to/miguelparacuellos/understanding-git-bisect-hki | Have you ever been put in a situation where a bug has been spotted on your project, which you know was not there some time ago, but you do not know when it was introduced?
This situation forces you into at least two possible paths 👇🏻
- Trying to debug your codebase to find out where the bug is coming from, which in large codebases or certain situations can become difficult.
- Going back to previous commits one by one, which in big projects or those shared by lots of people may imply going back tons of commits, which can become really time consuming to say the least.
If this seems like the situation you’re facing or have ever faced, keep reading!
Imagine you’re facing the following case 👇🏻
![Example Image 1](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g4o5jkybx5ohgitfyg6n.png)
If you needed to spot where the issue was introduced, you would start a linear search, going back one commit at a time to see where the issue first appeared.
![Example Image 2](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x7wurxi77nmxx2k660ui.png)
Imagine the issue was introduced 40 commits ago: you would have been forced to check out 40 commits, one by one, until you found the first one where the bug did not exist 😱
**What does `git bisect` offer?**
The idea behind **`git bisect`** is to replace the linear search by [a binary search](https://www.geeksforgeeks.org/binary-search/). Let’s look at it through an example (for the sake of simplicity, the example will be reduced to a use case with 8 commits) 👇🏻
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v9saxs2a2i9nm1wn5w3u.png)
When running `git bisect` we will be asked to give a commit in which the issue was not happening (start commit) and a commit where the issue has been spotted (end commit):
- Start commit → _Commit 1_ (just try some random older commits and take the first one that works)
- End commit → _Commit 8_
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/viz61ax4ogsu6ht6jiz1.png)
Binary search will take a commit roughly in the middle, called the pivot (Commit 5, for example), and check whether the issue still appears there or not.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ecsaluosl0n1j3o3mca.png)
In our case, _Commit 5_ still presents the issue, so we can ensure that _Commit 6 & Commit 7 & Commit 8_ won’t be the ones that introduced it.
Just like this, we have avoided checking _Commit 6 & Commit 7_. Way to go!
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/09ohx6wpuen849vw1jar.png)
We will just take a pivot around the middle again, in this case _Commit 2_.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zzs6mh762flbpzvpfia0.png)
In this case, _Commit 2_ does not present the issue! Therefore, we can ensure that _Commit 1_ did not introduce it either. One less commit to check!
We just need to check if the bug was introduced by *Commit 3* or *Commit 4* 🏃🏻
We’ll repeat the process!
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rbi8cq5eu4pfupyg6rb7.png)
Same as before is done and therefore we’ll easily find out that *Commit 3* introduced our bug, and now we just need to look through the changes done in that specific commit, easy! 🥳
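In practice, driving this binary search takes only a handful of commands; the placeholder hash below stands for whatever commit you verified as good:
```bash
git bisect start
git bisect bad                  # the commit you are on shows the bug
git bisect good <good-commit>   # a commit where the bug was not present yet
# git checks out a commit in the middle; test it, then mark the result:
git bisect good                 # or: git bisect bad
# ...repeat until git prints "<hash> is the first bad commit", then clean up:
git bisect reset
```
If the check can be scripted, `git bisect run ./your-test.sh` (with `your-test.sh` standing in for whatever test command you have) will even walk the whole search automatically.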
**You might be wondering…**
Okay, the drawings look good, but is this really faster?
In general terms, the costs usually are something like 👇🏻
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ua6wvxf8zbn8rncwjdva.png)
**That’s it!**
I hope you found this post useful and that it might help you next time you’re facing something similar! | miguelparacuellos |
|
1,911,970 | Knowing These 6 Secrets Will Make Your Flowers Look Amazing | Flowers bring beauty, color, and joy into our lives. Whether you're a seasoned gardener or a... | 0 | 2024-07-04T19:50:44 | https://dev.to/rana_yasir_7bdde2e1316ec7/knowing-these-6-secrets-will-make-your-flowers-look-amazing-13eh | Flowers bring beauty, color, and joy into our lives. Whether you're a seasoned gardener or a beginner, there's always something new to learn about keeping your blooms vibrant and healthy. Here are six secrets that will make your flowers look amazing.
1. Choose the Right Flowers for Your Climate
Selecting flowers suited to your local climate is crucial. Different flowers thrive in different environments. Research which flowers do well in your area’s temperature, humidity, and soil type. For instance, if you live in a dry climate, consider drought-resistant plants like succulents or lavender. In contrast, hydrangeas and ferns thrive in more humid conditions.
2. Prepare Your Soil
Healthy soil is the foundation of beautiful flowers. Before planting, enrich your soil with organic matter such as compost or well-rotted manure. This improves soil structure, provides essential nutrients, and enhances moisture retention. Conduct a soil test to determine its pH and nutrient levels, then amend it accordingly to suit the needs of your chosen flowers.
3. Water Wisely
Proper watering is essential for vibrant flowers. Most flowers need about an inch of water per week. Water your plants early in the morning to reduce evaporation and fungal growth. Avoid overhead watering; instead, water at the base of the plant to ensure the roots get enough moisture. Using mulch can help retain soil moisture and keep roots cool.
4. Deadhead Regularly
Deadheading, or removing spent flowers, encourages plants to produce more blooms. This process prevents plants from putting energy into seed production and redirects it toward creating more flowers. Regularly inspect your plants and snip off faded blooms to keep them looking fresh and vibrant.
5. Feed Your Flowers
Flowers need nutrients to grow and bloom abundantly. Use a balanced fertilizer that contains nitrogen, phosphorus, and potassium. Follow the instructions on the fertilizer package to avoid over-fertilizing, which can harm your plants. Additionally, organic options like fish emulsion, bone meal, and compost tea can provide a gentle nutrient boost.
6. Protect Against Pests and Diseases
Keep your flowers healthy by protecting them from pests and diseases. Regularly inspect your plants for signs of trouble, such as discolored leaves or holes in the foliage. Use natural remedies like neem oil or insecticidal soap to control pests.
https://blunturiblog.com/
https://ranamana.tumblr.com/
https://ranamana23.blogspot.com/2024/07/behind-screen-lives-and-careers-of-top.html
| rana_yasir_7bdde2e1316ec7 |
|
1,911,919 | React, Typescript, and CD to GitHub Pages (2024) | How to set up a typescript-based React application and deploy it directly to GitHub Pages using a CD... | 27,959 | 2024-07-04T19:44:28 | https://medium.com/@kinneko-de/92d4f19d71d7 | react, tutorial, typescript, githubactions | How to set up a typescript-based React application and deploy it directly to GitHub Pages using a CD GitHub action. A practical guide for 2024.
***
## Motivation
I was new to React, Typescript, and GitHub Pages when I started this. I fell into some traps, mainly due to outdated suggestions, while creating my proof of concept. To avoid this for you, I will give you a quick and easy walkthrough.
## Create React app
[Install the following prerequisites: Git and Node.js](https://code.visualstudio.com/docs/nodejs/reactjs-tutorial).
**Create the React app**
I will use [create-react-app](https://create-react-app.dev/docs/getting-started/) to create the initial project setup. I start a terminal in the root folder where all my git repositories are located. A subfolder with the name of your application will be created.
`npx create-react-app sample-react-typescript-githubpages --template typescript`
[The _cra-template-typescript_ package does not need to be installed globally](https://stackoverflow.com/questions/65505209/create-react-app-with-template-typescript-not-creating-tsx-files). I have uninstalled it for this tutorial. [There is also no more switch _typescript_ anymore for _create-react-app_](https://xfor.medium.com/setting-up-your-create-react-app-project-with-typescript-vscode-d83a3728b45e).
The folder now contains a _.git_ folder. I am removing it to push directly from Visual Studio Code to GitHub. I also renamed my ‘_master_’ branch to ‘_main_’, which has implications for the GitHub workflow I will introduce later.
**Vulnerability warnings from npm**
I get a notice that vulnerabilities have been found.
`8 vulnerabilities (2 moderate, 6 high)`
Npm suggests that I should run ‘_npm audit fix_’ to fix this. But this only introduces more vulnerabilities. [Instead, I move the ‘react-scripts’ dependency in the _package.json_ to ‘devDependencies’](https://github.com/facebook/create-react-app/issues/11174). After this step, I can run npm audit without the dependencies that are only used to develop the page.
`npm audit --omit=dev`
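After the move, the dependency section of my _package.json_ looks roughly like this (the version numbers are only illustrative):
```json
{
  "dependencies": {
    "react": "^18.3.1",
    "react-dom": "^18.3.1"
  },
  "devDependencies": {
    "react-scripts": "5.0.1"
  }
}
```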
**Build and continuous deployment**
I create a [.github/workflows folder and a _builddeploy.yaml_ with the following content in it](https://docs.github.com/en/actions/using-workflows/about-workflows).
```
name: Deploy React to GitHub Pages
on:
push:
branches:
- main # name of the branch you are pushing to
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '22'
- name: Install Dependencies
run: npm ci
- name: Build
run: npm run build
- name: Upload artifact
uses: actions/upload-pages-artifact@v3
with:
path: 'build/.'
deploy:
needs: build
runs-on: ubuntu-latest
permissions:
pages: write
id-token: write
environment:
name: github-pages
url: 'https://${{ github.repository_owner }}.github.io/${{ github.event.repository.name }}/'
steps:
- name: Setup Pages
uses: actions/configure-pages@v5
- name: Deploy
uses: actions/deploy-pages@v4
```
This workflow uses the GitHub action [_actions/deploy-pages_](https://github.com/actions/deploy-pages) to deploy the artifact from [_actions/upload-pages-artifact_](https://github.com/actions/upload-pages-artifact) to GitHub pages. I can also view and download the [generated artifact](https://github.com/KinNeko-De/sample-react-typescript-githubpages/actions/runs/8949805152/attempts/1) in the workflow run.
![Generated artifact in the workflow run](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ntbyzyesnzqg15brj85p.PNG)
When I push the file, a workflow run automatically starts. It fails because the default configuration of GitHub is currently to deploy GitHub pages from a specific branch. So I need to change the configuration of my GitHub repository.
![Initial setup of a GitHub Repository](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pqutipw0kp4k6w7pucy6.PNG)
I need to change the selection of the drop down box to ‘GitHub Actions’. As you can see ‘Deploy from a branch’ is declared as ‘classic’, which is usually a nicer word for ‘legacy’.
![Select GitHub Actions](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6umr9f6cvepnmh552vxw.PNG)
Now I need to re-run the failed jobs of my GitHub Action workflow. GitHub will only execute the job ‘deploy’ again. The already built artifact ‘github-pages’ will be reused.
![Re-run workflow](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4x2llnisynrbrvo5m96v.PNG)
The pipeline is now green and I see a link in my execution that I can click to go to my deployed GitHub page. The page is there because I am not getting a 404 error. But the page shows nothing.
**Activating npm publish**
The reason for this is a [configuration in the _package.json_](https://docs.npmjs.com/cli/v10/configuring-npm/package-json#private) in my React app that prevents the page from being published. I have to change it to see my page. I also have to add the link to my GitHub Page, even though GitHub Copilot claims I do not have to and that the problem is somewhere else.
```json
{
...
"private": false,
"homepage": "https://kinneko-de.github.io/sample-react-typescript-githubpages/",
...
}
```
After pushing and deploying I finally see [my GitHub Page](https://kinneko-de.github.io/sample-react-typescript-githubpages/). You can see the full code in my [GitHub repository](https://github.com/KinNeko-De/sample-react-typescript-githubpages).
![Deployed page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9956oo7hi0lx2bg1ay5v.PNG)
***
## Conclusion
Even with the traps, it took me only two hours to finally deploy my first React app. React, GitHub and the community have done a great job making the developer's life easy. Thank you for that ❤
**Versions used**
I will try to update this guide to the latest version to keep up with breaking changes in the tools. If you find something that does not work for you, please leave a comment. For a better overview, I list all versions used while writing this article.
- Node.js: 20.12.2
- NPM: 9.8.1
- cra-template-typescript: 1.2.0
- create-react-app: 5.0.1
**Samples**
I created a [sample application on GitHub](https://github.com/KinNeko-De/sample-react-typescript-githubpages):
{% embed https://github.com/KinNeko-De/sample-react-typescript-githubpages %}
Here you can see the [deployed version](https://kinneko-de.github.io/sample-react-typescript-githubpages/):
{% embed https://kinneko-de.github.io/sample-react-typescript-githubpages/ %}
| kinneko-de |
1,911,967 | Automating User and Group Management on Ubuntu: A Practical Guide | Introduction A Bash script is a file containing a sequence of commands executed on a Bash shell,... | 0 | 2024-07-04T19:40:42 | https://dev.to/victoria_ahmadu/automating-user-and-group-management-on-ubuntu-a-practical-guide-49i6 | | ## Introduction
A Bash script is a file containing a sequence of commands executed on a Bash shell, enabling automation of tasks. This article demonstrates how to use a Bash script to dynamically create users and groups by reading from a CSV file. A CSV (Comma Separated Values) file contains data separated by commas or other delimiters, and is often used for data exchange.
In Linux systems, multiple users can access the same machine, making efficient user and group management crucial for system administrators. This guide shows how to automate these tasks using a Bash script, ensuring users are created, assigned to specified groups, and their actions logged securely.
## Objective
The main objective is to create users in a Linux system using a simple Bash script, assigning them to groups and generating random passwords.
## Bash Script Overview
The script reads user and group information from a CSV file, creates groups if they don't exist, creates users if they are new, assigns users to groups, and logs all actions for traceability.
## Prerequisites
- An Ubuntu machine for execution.
- A basic understanding of Bash scripting.
## Script Implementation
### Step 1: Reading the CSV File
The script begins by reading a CSV file containing usernames and associated groups. It prints an error and exits if the file does not exist.
```bash
#!/bin/bash
# Path to the CSV file passed as the first argument
CSV_FILE="$1"

# Check if the file exists
if [[ ! -f "$CSV_FILE" ]]; then
  echo "File not found!"
  exit 1
fi
```
### Step 2: Creating a Directory and File to Store Users
After reading the file, we create a directory and file to store the usernames and passwords of any new users created, accessible only to the file owner.
```bash
PASSWD_DIR="/var/secure"
PASSWD_FILE="user_passwords.csv"
if [ ! -d "$PASSWD_DIR" ]; then
sudo mkdir -p "$PASSWD_DIR"
sudo touch "$PASSWD_DIR/$PASSWD_FILE"
sudo chmod 600 "$PASSWD_DIR/$PASSWD_FILE"
fi
```
### Step 3: Group Management
For each group specified, the script checks if the group exists and creates it if it doesn't.
```bash
# Function to trim white spaces
trim() {
echo "$1" | sed 's/^[[:space:]]*//;s/[[:space:]]*$//'
}
# Ensure log file exists and is writable
LOG_FILE="/var/log/user_management.log"
sudo touch "$LOG_FILE"
sudo chmod 644 "$LOG_FILE"
# Read the CSV file line by line
while IFS=';' read -r username groups; do
# Trim white spaces from the username and groups
username=$(trim "$username")
groups=$(trim "$groups")
# Split the groups field into an array, ignoring white spaces
IFS=',' read -r -a group_array <<< "$(echo "$groups" | tr -d '[:space:]')"
# Create each group if it doesn't exist
for group in "${group_array[@]}"; do
group=$(trim "$group")
if ! getent group "$group" > /dev/null 2>&1; then
echo "$(date '+%Y-%m-%d %H:%M:%S'): Creating group: $group" | sudo tee -a "$LOG_FILE"
sudo groupadd "$group"
else
echo "$(date '+%Y-%m-%d %H:%M:%S'): Group $group already exists." | sudo tee -a "$LOG_FILE"
fi
done
```
### Step 4: User Management
For each user specified, the script checks if the user exists, creates the user if they don't, assigns a random password, and logs the actions.
```bash
# Check if the user exists, if not, create it and add to all groups
if ! id "$username" > /dev/null 2>&1; then
echo "$(date '+%Y-%m-%d %H:%M:%S'): Creating user: $username" | sudo tee -a "$LOG_FILE"
sudo useradd -m "$username"
# Set a random password for the user
password=$(head /dev/urandom | tr -dc A-Za-z0-9 | head -c 12)
echo "$username:$password" | sudo chpasswd
echo "$(date '+%Y-%m-%d %H:%M:%S'): Password for $username set and stored securely." | sudo tee -a "$LOG_FILE"
echo "$username,$password" | sudo tee -a "$PASSWD_DIR/$PASSWD_FILE"
else
echo "$(date '+%Y-%m-%d %H:%M:%S'): User $username already exists." | sudo tee -a "$LOG_FILE"
fi
```
### Step 5: Adding Users to Groups
Users are added to the specified groups using `sudo usermod`, ensuring they have the appropriate access permissions.
```bash
# Add the user to each group (if the user was already created)
for group in "${group_array[@]}"; do
echo "$(date '+%Y-%m-%d %H:%M:%S'): Adding user $username to group: $group" | sudo tee -a "$LOG_FILE"
sudo usermod -a -G "$group" "$username"
done
done < "$CSV_FILE"
echo "All users and groups have been processed."
```
## Running the Script
To run the script, follow these steps:
1. Ensure you are running on a Linux system with root privileges, or use the `sudo` command.
2. Save the script to a file, for example, `user_management.sh`.
3. Create a sample CSV file `users.csv` with the following content:
```bash
mary;developer,sys-admin
paul;sys-admin
peter;operations
```
4. Execute the script as shown below (prefix it with `sudo` if you are not the root user):
```bash
sudo bash user_management.sh users.csv
```
After running the script, new users will be created, and their details will be stored in `/var/secure/user_passwords.csv`. All actions will be logged in `/var/log/user_management.log`.
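If you want to verify the result, a quick sanity check might look like this (assuming the sample `users.csv` above):

```bash
# Confirm the user exists and check its group memberships
getent passwd mary
groups mary

# Inspect the log and the stored credentials (root only)
sudo tail /var/log/user_management.log
sudo cat /var/secure/user_passwords.csv
```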
## Conclusion
Automating user and group management using this script enhances system administration efficiency and ensures consistent user access across environments. By adhering to best practices in logging and security, administrators can maintain robust system integrity.
## Learn More
To learn more about opportunities like the HNG Internship program and related resources for enhancing your skills, visit: [HNG Internship](https://hng.tech/internship) | victoria_ahmadu |
|
1,911,966 | Erasure and Restrictions on Generics | The information on generics is used by the compiler but is not available at runtime. This is called... | 0 | 2024-07-04T19:36:00 | https://dev.to/paulike/erasure-and-restrictions-on-generics-54bm | java, programming, learning, beginners | The information on generics is used by the compiler but is not available at runtime. This is called type erasure. Generics are implemented using an approach called _type erasure_: The compiler uses the generic type information to compile the code, but erases it afterward. Thus, the generic information is not available at runtime. This approach enables the generic code to be backward compatible with the legacy code that uses raw types.
The generics are present at compile time. Once the compiler confirms that a generic type is used safely, it converts the generic type to a raw type. For example, the compiler checks whether the following code in (a) uses generics correctly and then translates it into the equivalent code in (b) for runtime use. The code in (b) uses the raw type.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t0qr3f4q1z205wx16h61.png)
When generic classes, interfaces, and methods are compiled, the compiler replaces the generic type with the **Object** type. For example, the compiler would convert the following method in (a) into (b).
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9pm4taksqmv3s674y26s.png)
If a generic type is bounded, the compiler replaces it with the bounded type. For example, the compiler would convert the following method in (a) into (b).
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rzxejwx0x4vhugwgxvnu.png)
It is important to note that a generic class is shared by all its instances regardless of its actual concrete type. Suppose **list1** and **list2** are created as follows:
```java
ArrayList<String> list1 = new ArrayList<>();
ArrayList<Integer> list2 = new ArrayList<>();
```
Although **ArrayList<String>** and **ArrayList<Integer>** are two types at compile time, only one **ArrayList** class is loaded into the JVM at runtime. **list1** and **list2** are both instances of **ArrayList**, so the following statements display **true**:
```java
System.out.println(list1 instanceof ArrayList);
System.out.println(list2 instanceof ArrayList);
```
However, the expression **list1 instanceof ArrayList<String>** is wrong. Since **ArrayList<String>** is not stored as a separate class in the JVM, using it at runtime makes no sense.
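You can verify this sharing with a short program. The following demo (added here for illustration) prints **true** because both variables refer to instances of the same runtime class:

```java
import java.util.ArrayList;

public class ErasureDemo {
  public static void main(String[] args) {
    ArrayList<String> list1 = new ArrayList<>();
    ArrayList<Integer> list2 = new ArrayList<>();

    // Both instances share one runtime class, because the type
    // arguments are erased during compilation
    System.out.println(list1.getClass() == list2.getClass()); // true
    System.out.println(list1.getClass().getName()); // java.util.ArrayList
  }
}
```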
Because generic types are erased at runtime, there are certain restrictions on how generic types can be used. Here are some of the restrictions:
## Restriction 1: Cannot Use **new E()**
You cannot create an instance using a generic type parameter. For example, the following statement is wrong:
`E object = new E();`
The reason is that **new E()** is executed at runtime, but the generic type **E** is not available at runtime.
## Restriction 2: Cannot Use **new E[]**
You cannot create an array using a generic type parameter. For example, the following statement is wrong:
`E[] elements = new E[capacity];`
You can circumvent this limitation by creating an array of the **Object** type and then casting it to **E[]**, as follows:
`E[] elements = (E[])new Object[capacity];`
However, casting to (**E[]**) causes an unchecked compile warning. The warning occurs because the compiler is not certain that the casting will succeed at runtime. For example, if **E** is **String** and **new Object[]** is an array of **Integer** objects, **(String[])(new Object[])** will cause a **ClassCastException**. This type of compile warning is a limitation of Java generics and is unavoidable.
Generic array creation using a generic class is not allowed, either. For example, the following code is wrong:
`ArrayList<String>[] list = new ArrayList<String>[10];`
You can use the following code to circumvent this restriction:
`ArrayList<String>[] list = (ArrayList<String>[])new ArrayList[10];`
However, you will still get a compile warning.
## Restriction 3: A Generic Type Parameter of a Class Is Not Allowed in a Static Context
Since all instances of a generic class have the same runtime class, the static variables and methods of a generic class are shared by all its instances. Therefore, it is illegal to refer to a generic type parameter for a class in a static method, field, or initializer. For example, the following code is illegal:
```java
public class Test<E> {
  public static void m(E o1) { // Illegal
  }

  public static E o1; // Illegal

  static {
    E o2; // Illegal
  }
}
```
## Restriction 4: Exception Classes Cannot Be Generic
A generic class may not extend **java.lang.Throwable**, so the following class declaration would be illegal:
```java
public class MyException<T> extends Exception {
}
```
Why? If it were allowed, you would have a **catch** clause for **MyException<T>** as follows:
```java
try {
  ...
}
catch (MyException<T> ex) {
  ...
}
```
The JVM has to check the exception thrown from the **try** clause to see if it matches the type specified in a **catch** clause. This is impossible, because the type information is not present at runtime. | paulike |
1,911,965 | Encoding | We talked about and looked at the types of encoding, which is basically the conversion of the characters of each... | 0 | 2024-07-04T19:34:43 | https://dev.to/devsjavagirls/encoding-3oi8 | java, javaprogramming | We talked about and looked at the types of encoding, which is basically the conversion of the characters of each language. Coincidentally, today I was having a little problem in my client's app. By default, IDEs come configured with UTF-8, which is the American standard, so it has no accents like the Latin languages do. And the project was losing and garbling all the texts. Basically, ISO 8859-1 is the encoding for Latin characters.
See the table here: https://www.w3schools.com/charsets/ref_html_8859.asp
So we fixed it by forcing the project in the IDE to use ISO-8859-1, since it was wrong and was pushing the wrongly encoded files to the repository.
So, if you ever have problems: every IDE or project can be forced or configured through an option somewhere. This solved our problem, but it took us a little while to notice, because when you set up the project on a new machine that configuration can be lost. We are also learning the uses of each type of encoding and when to use it; it depends on the language you will be working with.
What is the difference between UTF-8 and ISO-8859-1?
The difference between ISO-8859-1 and UTF-8 is that ISO-8859-1 is a single-byte encoding that supports up to 256 characters (0 to 255, or 0x00 to 0xFF), while UTF-8 is a variable-length encoding that can represent every Unicode code point, well beyond the 65,536 characters (0 to 65535, or 0x0000 to 0xFFFF) of the Basic Multilingual Plane.
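To make this concrete, here is a small Java sketch (added for illustration): it encodes an accented word as ISO-8859-1 bytes and shows that decoding them with the wrong charset garbles the text:

```java
import java.nio.charset.StandardCharsets;

public class EncodingDemo {
  public static void main(String[] args) {
    byte[] bytes = "ação".getBytes(StandardCharsets.ISO_8859_1);

    // Decoding with the wrong charset garbles the accented characters
    System.out.println(new String(bytes, StandardCharsets.UTF_8));

    // Decoding with the charset the bytes were written in restores them
    System.out.println(new String(bytes, StandardCharsets.ISO_8859_1)); // ação
  }
}
```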
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3o4efx5tu49jeu2amtd4.jpeg)
| devsjavagirls |
1,911,964 | Implementing Clean Architecture with TypeScript | Clean Architecture is a software design philosophy that aims to create systems that are easy to maintain,... | 0 | 2024-07-04T19:33:18 | https://dev.to/dvorlandi/implementando-clean-architecture-com-typescript-20pb | typescript, cleancoding, javascript, coding | Clean Architecture is a software design philosophy that aims to create systems that are easy to maintain, test, and understand. It emphasizes the separation of concerns, ensuring that each part of the system has a single responsibility. In this article, we will explore how to implement Clean Architecture using TypeScript.
## Table of Contents
1. [Introduction to Clean Architecture](#introduction-to-clean-architecture)
2. [Core Principles](#core-principles)
3. [Setting Up the Project](#setting-up-the-project)
4. [Folder Structure](#folder-structure)
5. [Entities](#entities)
6. [Use Cases](#use-cases)
7. [Interfaces](#interfaces)
8. [Frameworks and Drivers](#frameworks-and-drivers)
9. [Putting It All Together](#putting-it-all-together)
10. [Conclusion](#conclusion)
## Introduction to Clean Architecture
Clean Architecture, introduced by Robert C. Martin (Uncle Bob), provides a clear separation between the different parts of a software system. The main idea is to keep the core business logic independent of external factors such as databases, UI, or frameworks.
## Core Principles
1. **Independence**: The business logic should be independent of the UI, database, or external systems.
2. **Testability**: The system should be easy to test.
3. **Separation of Concerns**: Different parts of the system should have distinct responsibilities.
4. **Maintainability**: The system should be easy to maintain and evolve.
## Setting Up the Project
First, let's set up a TypeScript project. You can use `npm` or `yarn` to initialize a new project.
```bash
mkdir clean-architecture-ts
cd clean-architecture-ts
npm init -y
npm install typescript ts-node @types/node --save-dev
```
Create a `tsconfig.json` file to configure TypeScript.
```json
{
"compilerOptions": {
"target": "ES6",
"module": "commonjs",
"outDir": "./dist",
"rootDir": "./src",
"strict": true,
"esModuleInterop": true
}
}
```
## Folder Structure
A clean architecture project typically has the following folder structure:
```
src/
├── entities/
├── usecases/
├── interfaces/
├── frameworks/
└── main.ts
```
### Entities
Entities represent the core business logic. They are the most important part of the system and should be independent of external factors.
```typescript
// src/entities/user.entity.ts
import { v4 as uuid } from "uuid"; // assumed dependency (npm install uuid); the original snippet omits this import

export class User {
  constructor(public id: string, public email: string, public password: string) {}

  static create(email: string, password: string) {
    const userId = uuid();
    return new User(userId, email, password);
  }
}
```
### Use Cases
Use cases contain the application-specific business rules. They orchestrate the interaction between entities and interfaces.
```typescript
// src/usecases/create-user.usecase.ts
import { User } from "../entities/user.entity";
import { UserRepository } from "../interfaces/users.repository"; // fixed name: the interface is exported as UserRepository

interface CreateUserRequest {
  email: string;
  password: string;
}

export class CreateUserUseCase {
  constructor(private userRepository: UserRepository) {}

  async execute(request: CreateUserRequest): Promise<void> {
    const user = User.create(request.email, request.password);
    await this.userRepository.save(user);
  }
}
```
### Interfaces
Interfaces are the contracts between the use cases and the outside world. They can include repositories, services, or any external system.
```typescript
// src/interfaces/users.repository.ts
import { User } from "../entities/user.entity";
export interface UserRepository {
  save(user: User): Promise<void>;
}
```
### Frameworks and Drivers
Frameworks and drivers contain the implementation details of the interfaces. They interact with external systems such as databases or APIs.
```typescript
// src/frameworks/in-memory-users.repository.ts
import { User } from "../entities/user.entity"; // fixed path: the entity file is user.entity.ts
import { UserRepository } from "../interfaces/users.repository";

export class InMemoryUsersRepository implements UserRepository {
  private users: User[] = [];

  async save(user: User): Promise<void> {
    this.users.push(user);
  }
}
```
## Putting It All Together
Finally, let's create an entry point to wire everything together.
```typescript
// src/main.ts
import { CreateUserUseCase } from "./usecases/create-user.usecase"; // fixed: the exported class is CreateUserUseCase
import { InMemoryUsersRepository } from "./frameworks/in-memory-users.repository"; // fixed: matches the exported class name

const userRepository = new InMemoryUsersRepository();
const createUser = new CreateUserUseCase(userRepository);

createUser.execute({ email: "[email protected]", password: "123456" })
  .then(() => console.log("User created successfully"))
  .catch(err => console.error("Failed to create user", err));
```
Compile and run the project:
```bash
tsc
node dist/main.js
```
## Conclusion
By following the principles of Clean Architecture, we can create a system that is maintainable, testable, and adaptable to change. TypeScript provides strong typing and modern JavaScript features that help enforce these principles. With a clear separation of concerns, our codebase becomes easier to understand and evolve over time. | dvorlandi |
1,911,962 | AWS DynamoDB | import json import boto3 from boto3.dynamodb.conditions import Key dynamodb =... | 0 | 2024-07-04T19:32:02 | https://dev.to/walterjesus88/api-gateway-a73 |
```python
import json
import boto3
from boto3.dynamodb.conditions import Key
# DynamoDB resource and table handle, created once per Lambda container
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('Users')

def lambda_handler(event, context):
    # Dispatch on the HTTP method forwarded by API Gateway
    http_method = event['httpMethod']

    if http_method == 'POST':
        return create_user(event)
    elif http_method == 'GET':
        return get_user(event)
    elif http_method == 'PUT':
        return update_user(event)
    elif http_method == 'DELETE':
        return delete_user(event)
    else:
        return {
            'statusCode': 405,
            'body': json.dumps('Method Not Allowed')
        }
def create_user(event):
body = json.loads(event['body'])
user_id = body['userId']
name = body['name']
age = body['age']
table.put_item(
Item={
'userId': user_id,
'name': name,
'age': age
}
)
return {
'statusCode': 201,
'body': json.dumps(f'User {user_id} created successfully')
}
def get_user(event):
user_id = event['queryStringParameters']['userId']
response = table.get_item(
Key={
'userId': user_id
}
)
if 'Item' in response:
return {
'statusCode': 200,
'body': json.dumps(response['Item'])
}
else:
return {
'statusCode': 404,
'body': json.dumps(f'User {user_id} not found')
}
def update_user(event):
    body = json.loads(event['body'])
    user_id = body['userId']
    name = body.get('name')
    age = body.get('age')

    # Build the update expression from the fields that were provided.
    # "name" is a DynamoDB reserved word, so it must be aliased via
    # ExpressionAttributeNames.
    update_expression = 'set '
    expression_attribute_values = {}
    expression_attribute_names = {}

    if name:
        update_expression += '#n = :name, '
        expression_attribute_values[':name'] = name
        expression_attribute_names['#n'] = 'name'
    if age:
        update_expression += 'age = :age, '
        expression_attribute_values[':age'] = age

    update_expression = update_expression.rstrip(', ')

    update_kwargs = {
        'Key': {'userId': user_id},
        'UpdateExpression': update_expression,
        'ExpressionAttributeValues': expression_attribute_values
    }
    if expression_attribute_names:
        # Only pass the alias map when it is non-empty
        update_kwargs['ExpressionAttributeNames'] = expression_attribute_names
    table.update_item(**update_kwargs)
return {
'statusCode': 200,
'body': json.dumps(f'User {user_id} updated successfully')
}
def delete_user(event):
user_id = event['queryStringParameters']['userId']
table.delete_item(
Key={
'userId': user_id
}
)
return {
'statusCode': 200,
'body': json.dumps(f'User {user_id} deleted successfully')
}
```
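For reference, a minimal API Gateway-style test event for the `POST` branch might look like this (a sketch; the field values are examples):

```json
{
  "httpMethod": "POST",
  "body": "{\"userId\": \"u-001\", \"name\": \"Alice\", \"age\": 30}"
}
```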
An AWS habit. | walterjesus88 |
|
1,911,957 | Implementing Clean Architecture with TypeScript | Clean Architecture is a software design philosophy that aims to create systems that are easy to... | 0 | 2024-07-04T19:25:21 | https://dev.to/dvorlandi/implementing-clean-architecture-with-typescript-3jpc | typescript, javascript, cleancoding, coding |
Clean Architecture is a software design philosophy that aims to create systems that are easy to maintain, test, and understand. It emphasizes the separation of concerns, making sure that each part of the system has a single responsibility. In this article, we'll explore how to implement Clean Architecture using TypeScript.
## Table of Contents
1. [Introduction to Clean Architecture](#introduction-to-clean-architecture)
2. [Core Principles](#core-principles)
3. [Setting Up the Project](#setting-up-the-project)
4. [Folder Structure](#folder-structure)
5. [Entities](#entities)
6. [Use Cases](#use-cases)
7. [Interfaces](#interfaces)
8. [Frameworks and Drivers](#frameworks-and-drivers)
9. [Putting It All Together](#putting-it-all-together)
10. [Conclusion](#conclusion)
## Introduction to Clean Architecture
Clean Architecture, introduced by Robert C. Martin (Uncle Bob), provides a clear separation between the different parts of a software system. The main idea is to keep the core business logic independent of external factors such as databases, UI, or frameworks.
## Core Principles
1. **Independence**: The business logic should be independent of UI, database, or external systems.
2. **Testability**: The system should be easy to test.
3. **Separation of Concerns**: Different parts of the system should have distinct responsibilities.
4. **Maintainability**: The system should be easy to maintain and evolve.
## Setting Up the Project
First, let's set up a TypeScript project. You can use `npm` or `yarn` to initialize a new project.
```bash
mkdir clean-architecture-ts
cd clean-architecture-ts
npm init -y
npm install typescript ts-node @types/node --save-dev
```
Create a `tsconfig.json` file to configure TypeScript.
```json
{
"compilerOptions": {
"target": "ES6",
"module": "commonjs",
"outDir": "./dist",
"rootDir": "./src",
"strict": true,
"esModuleInterop": true
}
}
```
## Folder Structure
A clean architecture project typically has the following folder structure:
```
src/
├── entities/
├── usecases/
├── interfaces/
├── frameworks/
└── main.ts
```
### Entities
Entities represent the core business logic. They are the most important part of the system and should be independent of external factors.
```typescript
// src/entities/user.entity.ts
import { v4 as uuid } from "uuid"; // assumed dependency (npm install uuid); the original snippet omits this import

export class User {
  constructor(public id: string, public email: string, public password: string) {}

  static create(email: string, password: string) {
    const userId = uuid();
    return new User(userId, email, password);
  }
}
```
### Use Cases
Use cases contain the application-specific business rules. They orchestrate the interaction between entities and interfaces.
```typescript
// src/usecases/create-user.usecase.ts
import { User } from "../entities/user.entity";
import { UserRepository } from "../interfaces/users.repository"; // fixed name: the interface is exported as UserRepository

interface CreateUserRequest {
  email: string;
  password: string;
}

export class CreateUserUseCase {
  constructor(private userRepository: UserRepository) {}

  async execute(request: CreateUserRequest): Promise<void> {
    const user = User.create(request.email, request.password);
    await this.userRepository.save(user);
  }
}
```
### Interfaces
Interfaces are the contracts between the use cases and the outside world. They can include repositories, services, or any external system.
```typescript
// src/interfaces/users.repository.ts
import { User } from "../entities/user.entity";
export interface UserRepository {
  save(user: User): Promise<void>;
}
```
### Frameworks and Drivers
Frameworks and drivers contain the implementation details of the interfaces. They interact with external systems like databases or APIs.
```typescript
// src/frameworks/in-memory-users.repository.ts
import { User } from "../entities/user.entity"; // fixed path: the entity file is user.entity.ts
import { UserRepository } from "../interfaces/users.repository";

export class InMemoryUsersRepository implements UserRepository {
  private users: User[] = [];

  async save(user: User): Promise<void> {
    this.users.push(user);
  }
}
```
## Putting It All Together
Finally, let's create an entry point to wire everything together.
```typescript
// src/main.ts
import { CreateUserUseCase } from "./usecases/create-user.usecase"; // fixed: the exported class is CreateUserUseCase
import { InMemoryUsersRepository } from "./frameworks/in-memory-users.repository"; // fixed: matches the exported class name

const userRepository = new InMemoryUsersRepository();
const createUser = new CreateUserUseCase(userRepository);

createUser.execute({ email: "[email protected]", password: "123456" })
  .then(() => console.log("User created successfully"))
  .catch(err => console.error("Failed to create user", err));
```
Compile and run the project:
```bash
tsc
node dist/main.js
```
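Because `CreateUserUseCase` depends only on the `UserRepository` interface, it can be exercised without a real database. Here is a minimal check (my own sketch, not part of the original project; the file path is an assumption):

```typescript
// src/selftest.ts — wires the use case to the in-memory repository
import { CreateUserUseCase } from "./usecases/create-user.usecase";
import { InMemoryUsersRepository } from "./frameworks/in-memory-users.repository";

async function selfTest(): Promise<void> {
  const repository = new InMemoryUsersRepository();
  const createUser = new CreateUserUseCase(repository);

  // The use case runs end to end without touching any external system
  await createUser.execute({ email: "[email protected]", password: "secret" });
  console.log("Use case executed against the in-memory repository");
}

selfTest().catch(console.error);
```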
## Conclusion
By following the principles of Clean Architecture, we can create a system that is maintainable, testable, and adaptable to change. TypeScript provides strong typing and modern JavaScript features that help enforce these principles. With a clear separation of concerns, our codebase becomes easier to understand and evolve over time.
| dvorlandi |
1,911,923 | Revamp your Trade Show Experience with Creative Trade Show Booth Design Ideas | An exhibition is an ideal venue to represent your brand which provides value to the business. Here,... | 0 | 2024-07-04T19:19:29 | https://dev.to/sensations/revamp-your-trade-show-experience-with-creative-trade-show-booth-designs-idea-1bbm | design, uidesign, 3dprinting, news | An exhibition is an ideal venue to represent your brand and provide value to the business. Here, you can discover a large number of potential customers as well as key players in the industry, which means this is an impressive opportunity for any business to stand out.
When we talk about standing out at an exhibit, the first thing that comes to mind is the "Trade Show Booth". A unique booth attracts everyone with its exceptional features and, additionally, marks your presence in the industry. For that, you need to develop [creative trade show booth design ideas](https://www.sensationsexhibits.com/blog/beyond-the-basics-creative-trade-show-booth-designs-for-2024/) for the show.
We understand how crucial it is to select ideas when so many opportunities are on offer in the global economy. In this blog, we discuss a few creative trade show booth ideas that will definitely help you make an effective and long-lasting impression.
**A Crucial Step: Branding with the Right Messaging**
In the modern world, branding is key to developing the right messaging for a company and becoming a market leader.
**Clear, Concise & Creative Brand Message**
A strong brand message has the greatest impact on the audience, so it must be communicated effectively. The more unique your message is, the better your chances of maximizing engagement at the booth. You can also tell your brand's tale with the help of specific designs and elements. Overall, remember that your booth is your brand's identity at the exhibit, so make sure it actually speaks in your brand's voice.
**Invest in Larger Booths**
Booths with high visibility are easier to reach because they stand out from the crowd. They offer sufficient area for product displays, sales promotions, and everything that helps create a long-lasting first impression on the audience. Larger booths also welcome more creative booth ideas: since they are wider, they provide more space for a design that catches the eye of visitors.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rrwqpf5atw5m2cpjk8tz.jpg)
Give close consideration to the booth structure such as how tall it should be, its area or space, and other structural aspects.
**Value, Mission, and Vision of the Brand**
Your values and mission are what make your brand exceptional among others. Bring these three pillars onto the floor effectively and showcase how you created this brand, the values you kept in mind, the thought behind building it, and so on. You can do it in various ways:
- Visual storytelling: use videos, images, or graphics
- Interactive displays: include AR/VR, touchscreens, or kiosks
- Circulate flyers and brochures
- Live demos of products and services
Applying these creative trade show booth ideas can encourage attendees to connect with your brand and try your products or services. Additionally, it creates a buzz around your booth and differentiates you from others.
**Sustainable & Eco-Friendly**
Modern businesses are now adopting sustainable trade show booth design ideas by using reusable packaging materials, less lighting, and organic signs to help conserve the environment. Use recycling, modular designs, and digital displays instead of heavy materials.
**Engagement & Interaction**
Review best practices, topical issues, and examples to build interactions with diverse audiences and improve attendees' and exhibitors' satisfaction.
**Photo Booth**
This generation is in love with selfies and photos, so why not add this wonderful booth design idea to the list? It will surely bring more visitors. With the help of interesting elements like a thematic backdrop, fascinating props, lighting, and QR codes to instantly share the photos on social media, you can decorate your space and make it one of the [best exhibition booths](https://www.sensationsexhibits.com/).
**Interactive Educational Sessions**
Do not forget the significance of educational sessions and workshops. Attendees come to the exhibit to discover new opportunities as well as to learn about the latest trends in the industry. You can engage them with impactful live sessions discussing successful strategies, market trends, the future of that specific industry, etc. But for this, you are going to require an engaging environment.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0av6kszbrwsmft5420g5.jpg)
You can also invite the key players of particular industries, the leaders of different companies, and other influential people to discuss your products, new technologies, or your company.
**Hands-on Experience**
Nothing can match a live product demonstration in terms of drawing attention to your trade show booth. Visitors attend trade shows to see demonstrations and test new products that they can purchase. A live demo increases their curiosity about the product, which also elevates sales.
After demonstrations, collecting feedback from the audience helps with product improvements; based on their contact info, you can also reach out to them via email for any future product launch.
**Contests and Games**
Contests attract many attendees, which means engagement at the booth. But you need to create excitement about the contest before the main day: post on social media platforms and send newsletters and emails in advance to spark curiosity in attendees. Plan the activities in a way that captures maximum leads. Employees should engage with the audience as much as they can; this way, you can easily build a connection with them.
Alongside contests, you can also explore games like trivia, spin the wheel, puzzle assembly, etc., to create an interesting environment. Last but not least, distribute freebies from your product line to the winners.
**Seating Arrangements**
Add some chairs or desks for sitting; this creates a welcoming atmosphere for attendees. With seating, visitors who come to your booth can explore all the products and services with ease. You can also engage with them during this time and try to turn them into conversion leads.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7pe45sxt6o1g8ji1q3b7.jpg)
**Innovate with the Latest Technology**
Add the latest technology to your creative trade show booth ideas to enhance user engagement.
**Utilize Social Media**
Instagram, Twitter, and Facebook are significant platforms for attracting young audiences, so use them. Post live content on Facebook or Instagram, and post tweets with attractive hashtags that create curiosity in attendees' minds so they become interested in your brand.
**3D Experiences**
3D is quite essential in [trade show rental displays](https://www.sensationsexhibits.com/trade-show-exhibits-rentals/), where visitors are provided with an experience that directly engages them. These booths are quite effective as an engaging way to advertise products and services using augmented reality (AR) and virtual reality (VR). Engaging with interiors at different levels reveals numerous details and possibilities that would be boring or not very interesting if people just looked at the exterior.
This alone sets the business apart from its competitors while also improving visitors' stay rate and their impulse to purchase or convert, making the 3D technique one of the most important booth design ideas.
**Interactive QR Codes**
QR codes are an advanced technology that lets visitors access anything easily by scanning the code. Use a QR code to provide easy access to your website. You can also generate these codes so visitors can participate in a contest, game, or workshop, and collect leads along the way.
**Aesthetics: A New Trend**
Along with all of these, how you design your booth is also essential, because it gives the first impression of your brand.
**Don't Go for Plain Space**
Plain space with just some pictures or banners looks boring and dull, which can be a turn-off, as attendees go for booths that look interesting. You can add a green wall with numerous plants, decorate your space with mesmerizing flowering plants, and set out tables with snacks, fruits, and refreshments. This will give your booth a unique touch.
**Different Lighting**
Lighting such as spotlights, colored lights, LED strips, ambient lighting, gobos, and many more can create an effective impact on visitors. This helps your booth stand out from the crowd.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o5cil26ci1hqiw0khlxd.jpg)
If you want to introduce some of your new products, you can also highlight them with lights that complement the products.
**Add Some Exceptional Attractions**
Use some of the trendiest trade show booth design ideas that can complement your booth. Add music that suits your brand. Mild music can draw visitors' attention and also give them a reason to stand by the booth. Fun activities encourage visitors to stop at the booth, and there are many more ideas you can include to make your booth the best it can be.
We have rounded up several effective ideas that you can add to your list of creative trade show booth ideas. However, there are many more that can make a significant impact on the booth. We recommend choosing according to your brand's needs and making the most of the exhibition.
| sensations |
1,911,921 | Wildcard Generic Types | You can use unbounded wildcards, bounded wildcards, or lower-bound wildcards to specify a range for a... | 0 | 2024-07-04T19:18:29 | https://dev.to/paulike/wildcard-generic-types-4302 | java, programming, learning, beginners | You can use unbounded wildcards, bounded wildcards, or lower-bound wildcards to specify a range for a generic type. What are wildcard generic types and why are they needed? The code below gives an example to
demonstrate the need. The example defines a generic **max** method for finding the maximum in a stack of numbers (lines 15–25). The main method creates a stack of integer objects, adds three integers to the stack, and invokes the **max** method to find the maximum number in the stack.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8i76v6zgmjg8t96yxv5z.png)
The program above has a compile error in line 11 because **intStack** is not an instance of **GenericStack<Number>**. Thus, you cannot invoke **max(intStack)**.
The fact is that **Integer** is a subtype of **Number**, but **GenericStack<Integer>** is not a subtype of **GenericStack<Number>**. To circumvent this problem, use wildcard generic types. A wildcard generic type has three forms: **?**, **? extends T**, and **? super T**, where **T** is a generic type.
The first form, **?**, called _an unbounded wildcard_, is the same as **? extends Object**. The second form, **? extends T**, called _a bounded wildcard_, represents **T** or a subtype of **T**. The third form, **? super T**, called _a lower-bound wildcard_, denotes **T** or a supertype of **T**.
You can fix the error by replacing line 15 in the code above as follows:
`public static double max(GenericStack<? extends Number> stack) {`
**<? extends Number>** is a wildcard type that represents **Number** or a subtype of **Number**, so it is legal to invoke **max(new GenericStack<Integer>())** or **max(new GenericStack<Double>())**.
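For illustration, the body of such a **max** method could look like the following sketch (assuming **GenericStack** exposes **isEmpty()** and **pop()**, and that the stack is not empty):

```java
public static double max(GenericStack<? extends Number> stack) {
  double max = stack.pop().doubleValue(); // assumes a non-empty stack

  while (!stack.isEmpty()) {
    double value = stack.pop().doubleValue();
    if (value > max) {
      max = value;
    }
  }

  return max;
}
```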
The code below shows an example of using the **?** wildcard in the **print** method that prints objects in a stack and empties the stack. **<?>** is a wildcard that represents any object type. It is equivalent to **<? extends Object>**. What happens if you replace **GenericStack<?>** with **GenericStack<Object>**? It would be wrong to invoke **print(intStack)**, because **intStack** is not an instance of **GenericStack<Object>**. Please note that **GenericStack<Integer>** is not a subtype of **GenericStack<Object>**, even though **Integer** is a subtype of **Object**.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/30mvqo87ct4xepn4wu9s.png)
When is the wildcard **<? super T>** needed? Consider the example in the code below. The example creates a stack of strings in **stack1** (line 6) and a stack of objects in **stack2** (line 7), and invokes **add(stack1, stack2)** (line 11) to add the strings in **stack1** into **stack2**. **GenericStack<? super T>** is used to declare **stack2** in line 15. If **<? super T>** is replaced by **<T>**, a compile error will occur on **add(stack1, stack2)** in line 11, because **stack1**’s type is **GenericStack<String>** and **stack2**’s type is **GenericStack<Object>**. **<? super T>** represents type **T** or a supertype of **T**. **Object** is a supertype of **String**.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cphlhahtvbmhn6ae4k3d.png)
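For illustration, a sketch of such an **add** method (again assuming the **GenericStack** operations **isEmpty()**, **pop()**, and **push()**) might look like this:

```java
public static <T> void add(GenericStack<T> stack1,
    GenericStack<? super T> stack2) {
  // Elements of type T can safely be pushed onto a stack of T
  // or of any supertype of T
  while (!stack1.isEmpty()) {
    stack2.push(stack1.pop());
  }
}
```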
This program will also work if the method header in line 15 is modified as follows:
```java
public static <T> void add(GenericStack<? extends T> stack1,
    GenericStack<T> stack2)
```
The inheritance relationship involving generic types and wildcard types is summarized in Figure below. In this figure, **A** and **B** represent classes or interfaces, and **E** is a generic type parameter.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qz3whfjvv6u99hm1y294.png) | paulike |
1,911,918 | Why is English so important in a developer's career? | This is a very common issue in developer groups and often a subject of controversy. Therefore, I... | 0 | 2024-07-04T19:16:47 | https://dev.to/pedrobarreto_en/why-is-english-so-important-in-a-developers-career-56jh | webdev, beginners, programming | This is a very common issue in developer groups and often a subject of controversy. Therefore, I decided to do what we programmers love the most! I gathered data on the subject, and the results were impressive. Let's go through the main topics...
## Programming Languages
Javascript, Typescript, Python, PHP... all the major programming languages in the world have something in common, which lies in their foundation: **they were all written and designed in English**.
Although this might seem simple and obvious, it makes a significant difference for learners, especially in high-level languages.
A practical example:
`const product = document.getElementById('product');`
Translated:
Constant (const) product equals document.getElementById('product');
Even if you have zero familiarity with programming, you can see how the code makes sense when translated word for word.
## Documentation
Every programmer knows how valuable good documentation is! That's why I surveyed the leading market documentation to see if they already have translations into Portuguese.
**Officially translated documentation**:
Javascript, Python, PHP, C#, React, Vue
**No translation available**:
Typescript, Java, Node, Docker, MySQL, MongoDB, PostgreSQL, Flutter, Swift
Despite the surprisingly high number of translations, it's clear that the number is still very limited.
## Job Opportunities
Developers are in demand worldwide, which is no secret. However, what are the differences in the Brazilian market compared to others? I quickly researched on LinkedIn and was astonished by the results:
Let's see, the number of available positions for **Software Development** on LinkedIn Brazil on March 30, 2022:
![jobs in Brazil](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0vkyjgq5zodf0l4ovxh0.png)
Jobs available for **Software Development** on LinkedIn United States on March 30, 2022:
![jobs in the United States](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xbz1baiiqjnvvthoeqnj.png)
That's a difference of **221,843** positions 😱, comparing only to the American market. Imagine including Canada, Australia, Europe, and others. Moreover, thanks to remote work, it's increasingly common for Brazilian programmers to be hired by foreign companies, **earning in dollars**, without even having to relocate. But this isn't simple; we're talking about highly qualified professionals who speak **fluent English**.
## But what if I don't know English? Can't I program?
**Of course you can**. The aim of this post was simply to highlight the advantages of venturing into English. However, nothing prevents you from **starting to program today**.
I've compiled some **tips** to help you **learn to program and develop your English skills simultaneously**.
## Use your operating system in English
It may be challenging at first, but it'll greatly aid your learning process. To program, you need to get used to keywords like **View, History, Config, Edit, Profiles, Debug, Console, Shell**, which are present on all screens of your operating system, as well as in the programs you use daily.
## YouTube is your greatest ally
Not just for learning English, but also for programming, cooking, everything 😆. The amount of free, high-quality content available is incredible.
Building on the previous point, in most programming tutorials and content, whether in Portuguese or not, the person producing the tutorial typically uses software **in its English version**. Therefore, using programs in the same language will make a significant difference in your learning.
## English for Devs Club
A channel that offers an entire English course specifically for programmers **for free**. Unfortunately, it's in Spanish, but [enabling Portuguese subtitles](https://www.youtube.com/watch?v=Ty8ujyucBRo) makes it easy to follow.
{% embed https://www.youtube.com/watch?v=WjBfC2AATys %}
| pedrobarreto_en |
1,911,914 | Buying Coffee from the Damyno Store | Buying quality coffee requires attention to several important points that can improve the coffee-drinking experience. In... | 0 | 2024-07-04T19:07:57 | https://dev.to/sadra_azizi_12404cd7d82e6/khryd-qhwh-z-frwshgh-dmy-nw-1l4l | | **[Buying coffee](https://damyno.com/)** of good quality requires attention to several important points that can improve the coffee-drinking experience. In this article, we review the key points for buying coffee from the Damyno coffee store.
**1. Coffee Bean Type**
Coffee beans are divided into two main types: Arabica and Robusta.
Arabica: This type of coffee has a milder flavor and a richer aroma, and it is usually more expensive than Robusta. Arabica beans are mostly grown in high-altitude regions.
Robusta: This type of coffee has a stronger flavor and more bitterness. Robusta is more resistant to disease and contains more caffeine than Arabica. This type of bean is mostly grown in low-altitude regions.
**2. Bean Freshness**
The freshness of the beans is one of the most important factors affecting the taste and aroma of the coffee. It is best to buy freshly roasted beans and grind only as much as you need.
**3. Growing Region**
The region where the beans are grown also influences the final taste. Each region has specific characteristics that give its beans a unique flavor and aroma. For example:
Africa: Coffees from this region usually have fruity and floral flavors.
Central and South America: Coffees from these regions often have chocolatey and caramel flavors.
Asia: Coffees from this region usually have earthy and spicy flavors.
**4. Processing Method**
The way the beans are processed also affects the final taste. The three main methods are:
Washed: This method produces cleaner, clearer flavors in the coffee.
Natural: This method results in fruitier, sweeter flavors.
Honey: This method is a combination of the two above and produces flavors in between.
**5. Roast Level**
The roast level is also an important factor in determining the final taste:
Light roast: produces high acidity and fruity flavors.
Medium roast: balances acidity and bitterness, with caramel and chocolate flavors becoming more prominent.
Dark roast: has bitter, smoky flavors and lower acidity.
**6. Packaging**
Proper packaging plays an important role in keeping coffee beans fresh. Packages with a one-way valve let the gases produced by the coffee escape while preventing air from getting in.
**7. Production Date**
Always check the production and roast date of the beans. Fresher beans have a better taste and aroma. Try to buy coffee that is at most a few weeks past its roast date.
**8. Grinding the Coffee**
If possible, buy whole beans and grind them yourself at home. This helps preserve the coffee's freshness and aroma. Coffee grinders come in manual and electric versions, each with its own advantages and disadvantages.
**Where to Buy**
Damyno (https://damyno.com/) is a store for buying coffee, coffee beans, ground coffee, instant coffee, espresso, Arabica coffee, Robusta coffee, Turkish coffee, French-press coffee, mugs, travel mugs, thermal mugs, ceramic mugs, novelty mugs, heated mugs, Stanley mugs, flasks, and thermoses, as well as Mehrgiah and Golestan herbal teas and diet sweeteners for people with diabetes. Buy your coffee from the Damyno store and enjoy your coffee-drinking experience to the fullest.
| sadra_azizi_12404cd7d82e6 |
|
1,911,912 | Flutter Overflow Fixes: Simple guide to overflow🚀 | Introduction ✍🏻 The distribution of elements on the screen makes the application look consistent.... | 0 | 2024-07-04T19:05:43 | https://dev.to/rowan_ibrahim/flutter-overflow-fixes-simple-guide-to-overflow-2c09 | flutter, dart, mobile | **Introduction** ✍🏻
The distribution of elements on the screen makes the application look consistent. Today, we will focus on the topic of handling overflow errors.
**Table of contents**
1- What is overflow?
2- Types of overflow in flutter.
3- Text Overflow.
4- Conclusion.
**What is overflow?** 🤔
Knowing that there are different sizes of phones, we should ensure our application is responsive. Consequently, when the content exceeds the space on the page, it leads to overflow. This can lead to content being cut off or extending beyond its container, making the shape inaccurate.
**There are several types of overflow in Flutter** 📝
- Text overflow.
- widget overflow.
- Flex overflow.
- List overflow.
- Overflow bar.
- Scaffold overflow.
**In today’s discussion, we will focus on text overflow.**
**What is text Overflow?**🔎
Placing any text on the page results in different sizes, as we work with various device dimensions. Especially since the text won’t be alone, we’ll have images and icons. This can cause an overflow if the text size and the screen are small, creating a visual issue. Therefore, we need to ensure that the text size we choose is suitable for all devices and consistent with the overall design of our application.
**Let’s see what Flutter offers to solve this problem.**
There is a property in the Text widget called overflow. It controls what happens if the text exceeds the space available to it, and it works together with other properties to display the text correctly.
**Max lines property :**
This sets the number of lines the text can span if its size exceeds the width of the screen, and it takes integer values.
**Softwrap property:**
This determines whether the text can wrap onto multiple lines or stay on a single line, and it takes a Boolean value.
**False:** means that if the text exceeds the screen width, it will not wrap to a new line.
**True:** means that if the text exceeds the screen width, it can wrap onto multiple lines as appropriate for the screen size.
**Now let's see the different values of the overflow property and how we can use them!**
- TextOverflow.ellipsis:
When an overflow occurs, the remaining text is shown as dots (…).
- TextOverflow.visible:
Here the text will be shown in its entirety even if it exceeds the container's bounds, which can distort the page layout if there are other elements underneath. We should use it carefully according to our needs.
- TextOverflow.fade:
Makes the text fade out gradually if it exceeds the specified space, ensuring it doesn't cut off abruptly and disrupt the page layout.
- TextOverflow.clip:
Cuts off any extra text that exceeds the specified space, ensuring it doesn't display beyond the container's boundaries.
- TextOverflow.values:
Returns a list of all possible values of the TextOverflow enum.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a8cg9vez6qsuuqejkkwt.png)
**Let’s take an example:**
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dtvde59tyzb6xo4v5r1w.png)
Here, if the text exceeds its bounds, I tell it to display on two lines. However, if it goes beyond these two lines, I instruct it to show “…..” at the end of the text, hiding the rest of it.
Now, with SoftWrap set to false, it means the text adjusts to the specified number of lines without any spaces. If we set it to true, it still adjusts to the specified number of lines but includes spaces between them.
In practice, there isn’t much difference between true and false because in both cases, if it exceeds two lines, it will display “…..”. However, this is just in appearance.
Remember, if you use maxLines alone without overflow, it will give you an error.
**Output**
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ri29t6uu9r2ny3dndz6e.png)
**So when do we use the softWrap property?**
We can use softWrap with text and set its value to true without specifying maxLines. In this case, the text will naturally wrap to fit the screen size, adapting to different device sizes. This is useful when displaying product descriptions, writing articles, or any paragraph.
If we set it to false, we ensure that the text does not wrap onto multiple lines and remains on a single line, which is useful for titles or labels.
**For example:**
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sj8aalkxczphkdy5zeu8.png)
**Output**
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z86ygx8ozc4g921xt4fr.png)
If I want the text to have scrolling, or if part of it should be truncated and moved to the next line, or even if it shouldn’t be displayed at all, then here I use `overflow` with `maxLines`. It’s preferable that `softWrap` be set to `false`.
**Here’s the output for the examples we discussed before of overflow values.**
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fw7bnobyehd2ars56b8h.png)
**Certainly, the text will typically be inside other widgets like Rows or Columns, and it will have a font size. Here, overflow can occur if the font size is large and the screen size is small.**
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4725ky8cei0c8tnxx5x1.png)
Here, we can use Expanded and use overflow normally with it, and if we also set maxLines, it’s okay according to the desired layout.
**Alternatively**, using FittedBox with Expanded and setting fit: BoxFit.scaleDown, it adjusts the font size based on the screen size, scaling it down if it’s too large.
Here, if I set maxLines, it won’t do anything and will ignore it because fit continues to scale down the font size to keep it on one line.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/51lt8txyy0gyvnu8yykv.png)
**Output**
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6vbhgbrrild9l43sqll9.png)
**There's another package called auto_size_text:**
It allows you to adjust how the font appears and to specify a minimum font size so that the text doesn't shrink below a certain value. This package can be explored further for more details. Still, of course, it's better to adjust the font size manually yourself, to avoid reducing your application's performance with additional packages.
https://pub.dev/packages/auto_size_text
**Conclusion**
You can follow different sequences to fix the overflowing text issue while preparing the UI for your Flutter app. Taking the help of the right team of experts should help you with the process.
And that’s how we handle Text Overflow. Remember to use each property according to your UI needs and how you’re building it. Insha’Allah, we’ll talk about the remaining types in other articles.
If there are any additions or other methods, remember to share them with us in the comments!🌟
Connect with me on LinkedIn and GitHub for more articles and insights. If you have questions, contact me.👀
https://linktr.ee/rowan_ibrahim
**See you on the new topic!🌟**
| rowan_ibrahim |
1,911,911 | Introduction to Functional Programming in JavaScript #1 | Functional programming (FP) has become a hot topic in the JavaScript community in recent years. This... | 0 | 2024-07-04T19:03:40 | https://dev.to/francescoagati/introduction-to-functional-programming-in-javascript-1-1ah1 | javascript | Functional programming (FP) has become a hot topic in the JavaScript community in recent years. This programming paradigm emphasizes immutability, pure functions, and declarative code, making it a powerful tool for creating reliable, maintainable, and scalable applications.
#### What is Functional Programming?
Functional programming is a style of programming that treats computation as the evaluation of mathematical functions and avoids changing state and mutable data. Unlike imperative programming, which focuses on how to achieve a goal through step-by-step instructions, functional programming focuses on what to achieve, describing the desired result through expressions.
Key principles of functional programming include:
1. **Immutability**: Data objects are not modified after they are created. Instead, new objects are created with the desired changes, which helps prevent side effects and makes the code more predictable.
2. **Pure Functions**: Functions that always produce the same output given the same input and do not have side effects (i.e., they do not alter any external state or variables).
3. **First-Class Functions**: Functions are treated as first-class citizens, meaning they can be assigned to variables, passed as arguments, and returned from other functions.
4. **Higher-Order Functions**: Functions that can take other functions as arguments or return them as results, enabling powerful abstractions and code reuse.
5. **Declarative Code**: Focus on describing what to do rather than how to do it, leading to clearer and more concise code.
#### Why Use Functional Programming in JavaScript?
JavaScript is a versatile language that supports multiple programming paradigms, including object-oriented, imperative, and functional programming. Embracing functional programming in JavaScript can offer several benefits:
- **Readability and Maintainability**: FP promotes writing smaller, self-contained functions, making code easier to read, understand, and maintain.
- **Predictability**: Pure functions and immutability help eliminate side effects, making the code more predictable and reducing bugs.
- **Concurrency**: Since functional programming avoids shared state, it is easier to reason about concurrent or parallel execution, leading to more robust applications.
- **Testability**: Pure functions are inherently easier to test since they do not depend on or modify external state.
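To make the testability point concrete, here is a minimal sketch (my own illustration, using plain assertions rather than any particular test framework) of how a pure function can be verified with no mocks or setup:

```javascript
// Pure function under test
const add = (a, b) => a + b;

// Because add depends only on its inputs, each check is one line:
console.assert(add(2, 3) === 5, 'add(2, 3) should equal 5');
console.assert(add(-1, 1) === 0, 'add(-1, 1) should equal 0');
console.assert(add(0, 0) === 0, 'add(0, 0) should equal 0');
```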
#### Basic Concepts in Functional Programming
Before diving into more advanced topics, it's essential to understand some basic concepts of functional programming in JavaScript.
1. **Pure Functions**
```javascript
// Pure Function Example
const add = (a, b) => a + b;
console.log(add(2, 3)); // 5
```
2. **Immutability**
```javascript
// Immutability Example
const person = { name: 'Alice', age: 25 };
const updatedPerson = { ...person, age: 26 };
console.log(person); // { name: 'Alice', age: 25 }
console.log(updatedPerson); // { name: 'Alice', age: 26 }
```
3. **First-Class Functions**
```javascript
// First-Class Functions Example
const greet = () => 'Hello, World!';
const sayGreeting = (greetingFunction) => {
console.log(greetingFunction());
};
sayGreeting(greet); // Hello, World!
```
4. **Higher-Order Functions**
```javascript
// Higher-Order Functions Example
const multiply = (a) => (b) => a * b;
const double = multiply(2);
console.log(double(5)); // 10
```
5. **Declarative Code**
```javascript
// Declarative Code Example
const numbers = [1, 2, 3, 4, 5];
const doubledNumbers = numbers.map((number) => number * 2);
console.log(doubledNumbers); // [2, 4, 6, 8, 10]
```
Functional programming offers a new perspective on writing JavaScript code, emphasizing clarity, predictability, and robustness. | francescoagati |
1,911,910 | Việt Matthew | Việt Matthew - Giám đốc điều hành SBOBET Việt Matthew sinh năm 1980 tại Tây Ninh. Anh hiện đang là... | 0 | 2024-07-04T19:03:40 | https://dev.to/vit_matthew/viet-matthew-5779 | Việt Matthew - CEO of SBOBET
Việt Matthew was born in 1980 in Tây Ninh. He is currently the CEO of SBOBET in Vietnam, providing a reputable and safe football and sports betting platform for bettors.
Website: [https://167.99.29.83/viet-matthew/](https://167.99.29.83/viet-matthew/)
Email: [email protected]
Address: 20th Floor, Regus Zuellig Bldg, Makati Avenue corner Paseo de Roxas St., Makati City, Philippines
Tags: Việt Matthew, Việt SBOBET, Matthew SBOBET, SBOBET
Hashtag: #vietmatthew, #vietsbobet, #matthewsbobet, #sbobet
Notable social profiles:
https://www.reddit.com/user/vietmatthew/
https://www.youtube.com/channel/UCt84dnTPEHEFiLDMrZllJPQ
https://www.pinterest.com/vietmatthew/
https://groups.google.com/g/vietmatthew/c/2YuGDXQRWh4
https://about.me/veitmatthew/
Notable articles:
https://167.99.29.83/mien-tru-trach-nhiem/
https://167.99.29.83/gioi-thieu/
https://167.99.29.83/dieu-khoan-bao-mat/
https://167.99.29.83/category/link-vao/
https://167.99.29.83/category/huong-dan/
| vit_matthew |
|
1,911,909 | What are the differences between hosting websites through Git and AWS? | Git can host static websites, but for full-stack applications involving dynamic data and backend... | 0 | 2024-07-04T19:02:10 | https://dev.to/satyapriyaambati/what-are-the-differences-between-hosting-websites-through-git-and-aws-5he5 | **Git can host static websites, but for full-stack applications involving dynamic data and backend changes, cloud services like AWS and Azure are necessary. I'm using AWS to host my complex website, leveraging free EC2 instances. I'm deploying my code to AWS instances using Git commands via PuTTY. I'm installing Apache server for hosting my website. Ensure Git, React, Node.js, and npm are installed before cloning the repo in PuTTY.** | satyapriyaambati |
|
1,911,907 | Raw Types and Backward Compatibility | A generic class or interface used without specifying a concrete type, called a raw type, enables... | 0 | 2024-07-04T18:55:11 | https://dev.to/paulike/raw-types-and-backward-compatibility-3o1n | java, programming, learning, beginners | A generic class or interface used without specifying a concrete type, called a raw type, enables backward compatibility with earlier versions of Java. You can use a generic class without specifying a concrete type like this:
`GenericStack stack = new GenericStack(); // raw type`
This is roughly equivalent to
`GenericStack<Object> stack = new GenericStack<Object>();`
A generic class such as **GenericStack** and **ArrayList** used without a type parameter is called a _raw type_. Using raw types allows for backward compatibility with earlier versions of Java. For example, a generic type has been used in **java.lang.Comparable** since JDK 1.5, but a lot of code still uses the raw type **Comparable**, as shown in the code below:
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1tc9wj6alvb83zue02e6.png)
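The listing above is an image; based on the surrounding description, the raw-type code is likely along these lines (a reconstruction, so the original may differ slightly):

```java
public class Max {
  /** Return the maximum of two objects */
  public static Comparable max(Comparable o1, Comparable o2) {
    // The unchecked compareTo call below is what triggers the
    // compiler warning mentioned in the text
    if (o1.compareTo(o2) > 0)
      return o1;
    else
      return o2;
  }
}
```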
**Comparable o1** and **Comparable o2** are raw type declarations. Be careful: _raw types are unsafe_. For example, you might invoke the max method using
`Max.max("Welcome", 23); // 23 is autoboxed into new Integer(23)`
This would cause a runtime error, because you cannot compare a string with an integer object. The Java compiler displays a warning on line 3 when compiled with the option **–Xlint:unchecked**, as shown in the figure below.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bifcvtj2srfaegm0hd9k.png)
A better way to write the **max** method is to use a generic type, as shown in the code below.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p1ur2ku3b1lqt3hnwsvw.png)
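Again, reconstructing the image's listing from the description (the exact original may differ):

```java
public class MaxUsingGenericType {
  /** Return the maximum of two objects of the same comparable type */
  public static <E extends Comparable<E>> E max(E o1, E o2) {
    if (o1.compareTo(o2) > 0)
      return o1;
    else
      return o2;
  }
}
```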
If you invoke the **max** method using
```java
// 23 is autoboxed into new Integer(23)
MaxUsingGenericType.max("Welcome", 23);
```
a compile error will be displayed, because the two arguments of the **max** method in **MaxUsingGenericType** must have the same type (e.g., two strings or two integer objects). Furthermore, the type **E** must be a subtype of **Comparable<E>**.
As another example, in the following code you can declare a raw type **stack** in line 1, assign **new GenericStack<String>** to it in line 2, and push a string and an integer object to the stack in lines 3 and 4.
```java
1 GenericStack stack;
2 stack = new GenericStack<String>();
3 stack.push("Welcome to Java");
4 stack.push(new Integer(2));
```
However, line 4 is unsafe because the stack is intended to store strings, but an **Integer** object is added into the stack. Line 3 should be okay, but the compiler will show warnings for both line 3 and line 4, because it cannot follow the semantic meaning of the program. All the compiler knows is that stack is a raw type, and performing certain operations is unsafe. Therefore, warnings are displayed to alert potential problems. | paulike |
1,911,906 | Item 39: Prefira as anotações aos padrões de nomenclatura | Problemas com Padrões de Nomenclatura: Erros Tipográficos: Falhas silenciosas por nomes incorretos... | 0 | 2024-07-04T18:55:06 | https://dev.to/giselecoder/item-39-prefira-as-anotacoes-aos-padroes-de-nomenclatura-55cc | java, javaefetivo | **Problems with Naming Patterns:**
**Typographical Errors:**
- Silent failures caused by misspelled names (e.g., tsetSafetyOverride instead of testSafetyOverride).
**Misuse:**
- There is no way to guarantee that names are used only on the right program elements (e.g., naming a class TestSafetyMechanisms).
**Parameter Association:**
Difficulty in associating parameters with program elements (e.g., encoding exception types in the names of test methods).
**Solution with Annotations:**
Annotation Example: Creating a Test annotation for test methods.
**Meta-annotations:**
**- @Retention(RetentionPolicy.RUNTIME):** Keeps the annotation available at runtime.
**- @Target(ElementType.METHOD):** The annotation is allowed only on methods.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/99r1613jq00a1kpgfdwp.jpg)
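Since the listing is an image, here is what the marker annotation most likely looks like, reconstructed from the description above (it closely follows Bloch's Effective Java; the original image may differ slightly):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * Indicates that the annotated method is a test method.
 * Use only on parameterless static methods.
 */
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Test {
}
```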
**Practical Use:**
- The Test annotation marks parameterless static methods for automatic tests that fail if they throw exceptions.
- Example: A Sample class with methods annotated as tests (some valid, some invalid).
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aw1ujy4sf97o1l5fsfdq.jpg)
**Simple Test Tool:**
- Test Runner: Runs methods annotated with Test reflectively.
- Uses Method.invoke and isAnnotationPresent to detect and execute annotated methods.
- Catches and reports exceptions thrown by the test methods.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yqhx3ik98jjqxelun0ks.jpg)
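A reconstruction of the reflective runner described above (based on the description and on Bloch's book; details may differ from the image):

```java
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

public class RunTests {
    public static void main(String[] args) throws Exception {
        int tests = 0;
        int passed = 0;
        Class<?> testClass = Class.forName(args[0]);
        for (Method m : testClass.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Test.class)) {
                tests++;
                try {
                    m.invoke(null); // parameterless static method
                    passed++;
                } catch (InvocationTargetException wrappedExc) {
                    // The test threw: unwrap and report the real cause
                    Throwable exc = wrappedExc.getCause();
                    System.out.println(m + " failed: " + exc);
                } catch (Exception exc) {
                    // e.g. an instance method was annotated by mistake
                    System.out.println("Invalid @Test: " + m);
                }
            }
        }
        System.out.printf("Passed: %d, Failed: %d%n", passed, tests - passed);
    }
}
```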
**Annotations with Parameters:**
- Exception Tests: Creating an ExceptionTest annotation for tests that pass only if they throw a specific exception.
- Parameter Type: Class<? extends Throwable> to specify the expected exception type.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8h34410xlqeeaekawseu.jpg)
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0fzk2c8959189rzbzhp3.jpg)
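A sketch of the parameterized annotation and one use of it, reconstructed from the description (imports as in the Test annotation above; the two types would live in separate files, and the actual listings are in the images):

```java
// Annotation whose parameter names the exception the test must throw
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface ExceptionTest {
    Class<? extends Throwable> value();
}

// Example use: this test passes only if it throws ArithmeticException
public class Sample2 {
    @ExceptionTest(ArithmeticException.class)
    public static void m1() {
        int i = 0;
        i = i / i; // divide by zero
    }
}
```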
**Modifying the Test Runner:**
- Adds support for ExceptionTest.
- Checks whether the thrown exception is of the correct type.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t1cqem055l83oumap17f.jpg)
**Repeatable Annotations:**
- Using Arrays:
- Changing the ExceptionTest parameter to an array of Class.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i5wpvd0bvp2yyouxu55e.jpg)
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mz78p6tked98fcjhd68i.jpg)
**@Repeatable** annotations:
Allow the same element to be annotated several times with the same annotation.
Example: @Repeatable on the ExceptionTest annotation.
**Processing Repeatable Annotations:**
- Care Needed When Processing:
- Use getAnnotationsByType to access both repeated and non-repeated annotations.
- Check for both annotation types (the individual one and the container).
**Conclusion:**
**Advantages of Annotations:**
- Annotations are superior to naming patterns for providing additional information in source code.
- Developers are encouraged to use predefined annotations and those provided by tools.
**Importance of Annotations:**
**For Programmers:**
- Programmers should use predefined annotations and those provided by IDEs or static analysis tools to improve the quality of diagnostic information.
- This summary highlights the main points discussed in the passage about moving from traditional naming patterns to annotations, covering their advantages and providing practical examples and implementation suggestions.
| giselecoder |
1,911,850 | What is the Best Color for Dark Mode? | When you are designing a user interface, dark mode is a must-have! Offering a sleek alternative to... | 0 | 2024-07-04T18:48:48 | https://dev.to/lovatom/what-is-the-best-color-for-dark-mode-f9k | darkmode, ui, design | When you are designing a user interface, dark mode is a must-have!
Offering a sleek alternative to the traditional bright, light-colored theme is standard in the industry.
However, the effectiveness of your dark mode will largely depend on maintaining optimal dark mode contrast and color contrast to ensure readability and accessibility. So how can one make sure to create a dark mode that follows the [WCAG standards](https://www.w3.org/WAI/standards-guidelines/wcag/) for readability and accessibility?
---
First of all: What is Dark Mode?
---
Some of you might say: "What is this silly question? Everyone knows what dark mode is!". Even though you are most likely right about that, I want this article to be aimed at everyone, even those living in a cave who don't know what dark mode is.
Simply put, Dark Mode reverses the usual color scheme of text and background, using light-colored text on a dark background. Dark mode has become a popular choice, especially among developers for several reasons, such as eye strain reduction, reduced blue light Exposure, battery efficiency, or even aesthetics and Focus.
But not all dark modes are created equal: to achieve the best possible experience, it's important to choose the right dark mode color combinations for optimum contrast.
The Importance of Color Contrast
---
Color contrast refers to the difference in color that makes an object (e.g., text) distinguishable from other objects within the same field of view. In dark mode, optimal contrast will help prevent eye strain and make the content accessible to users with visual impairments.
The Web Content Accessibility Guidelines (WCAG) recommend a contrast ratio of at least 4.5:1 for normal text and 3:1 for large text. These guidelines help ensure that text is perceivable by users who have color vision deficiencies. So how do make sure your color combination follows those contrast standards?
By using a Color Contrast Checker!
I personally like [Colorlab](https://getcolorlab.com), which offers many different tools to work with colors. And the one that interests us here is the [contrast checker](https://www.app.getcolorlab.com/colorContrast.html), which lets you immediately visualise how your color combination looks and gives you a clear understanding of your contrast level and whether you match both AA & AAA standards.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wwjn3ea6e06f8xxqr4iv.png)
How do you make Dark Mode look good?
---
To make dark mode look good, use a true black (#000000) - although I'm not personally a huge fan - or a very dark gray background (#121212) to reduce eye strain and save battery life on OLED screens. Ensure text is a light gray (#E0E0E0) or off-white (#FAFAFA) for readability without harsh contrast. Accent colors should be chosen to stand out against the dark background while remaining easy on the eyes, often with slightly desaturated tones. Maintain sufficient contrast for all elements, and use shadows subtly to add depth and separation. Test across different devices to ensure a consistent and visually appealing experience.
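If you want to wire colors like these into a site, here is a minimal sketch using the `prefers-color-scheme` media query. The hex values come from this article; the variable names and structure are just one illustrative way to do it:

```css
/* Light theme by default */
:root {
  --bg-color: #FAFAFA;
  --text-color: #1C1414;
}

/* Swap to dark values when the user's OS prefers dark mode */
@media (prefers-color-scheme: dark) {
  :root {
    --bg-color: #121212;   /* very dark gray: easier on the eyes than pure black */
    --text-color: #E0E0E0; /* light gray text avoids harsh contrast */
  }
}

body {
  background-color: var(--bg-color);
  color: var(--text-color);
}
```

Using custom properties keeps both palettes in one place, so accent colors can be swapped the same way.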
With all this being said, what are the Dark Mode combinations I would recommend? See below my curated list of Dark Mode:
**Variant 1**
Background: #212121
Font: #E8E8E8
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eu46az6r50y8qd5f4oef.png)
**Variant 2**
Background: #333333
Font: #F5F5F5
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yb6ozbfbg478kl8dftjg.png)
**Variant 3**
(this one is a bit more funky and might not fit all moods)
Background: #80DEEA
Font: #1B2B34
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xqa7r34im1w8loh61rrk.png)
**Variant 4**
Background: #E5E0EB
Font: #1C1414
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qsnmvbcs9ns79ruo9l0k.png)
---
Of course, these are only my preferences and you might choose your own. Choosing the right dark mode colors is essential for creating a functional and visually appealing dark theme and in the end, all that matters is that you like the result!
| lovatom |
1,901,610 | Design Patterns | Hoje vamos mergulhar no mundo dos Design Patterns. Se você já passou horas tentando descobrir a... | 0 | 2024-07-04T18:45:24 | https://dev.to/rflpazini/design-patterns-3ie6 | designpatterns, softwareengineering, coding, go | Today we're diving into the world of Design Patterns. If you've ever spent hours trying to figure out the best way to structure your code, or wondered "there has to be an easier way to do this", then you're in the right place!
## What are Design Patterns
Design Patterns are like the recipes developers use to solve common problems in programming. They are not ready-made code to copy and paste, but rather guides that help us organize our thinking and find elegant, efficient solutions to problems that many developers have faced before.
Imagine you're building a Lego set. There's a messy way of throwing all the pieces together and hoping something cool comes out, and there's a way of following the steps that guarantees you end up with an amazing spaceship. Design patterns are those steps in the programming world.
In this article, we'll explore the most common design patterns, which are like the classic hits of music: always relevant, regardless of trends. 😅
We can split the patterns into 3 categories:
**Creational Patterns:** These patterns, as the name suggests, are all about the art of creating objects. They are the masters of flexibility, offering a way to bring objects to life clearly and without complications. It's like having a cake recipe you can adjust depending on what's in the pantry, without losing the flavor! Some examples include the Singleton, which guarantees an object is unique in the system, the Factory, which is like a chef preparing different dishes (objects) depending on the order, and the Builder, perfect for assembling objects step by step, like a Lego set.
**Structural Patterns:** These patterns are like the conductors of an orchestra, coordinating the composition of classes and objects to form larger structures while keeping everything flexible and efficient. They ensure that classes and objects can work together effectively. Some classic examples are the Adapter pattern, which helps different pieces fit together, the Decorator pattern, which adds a special touch without changing the core, and the Composite pattern, which groups several objects into one, like a lunch combo!
**Behavioral Patterns:** These patterns are programming's "gossip" specialists, taking care of all the interactions and communication between objects and classes. They are like party organizers, making sure everyone knows their responsibilities and what's going on, so the conversation flows without confusion. A good chat between objects! Some examples of these social organizers include the Observer pattern, which is like having someone who keeps an eye on everything and notifies the others when something important happens; the Strategy pattern, which lets you change tactics mid-game without changing the team; and the Command pattern, which is basically an ordering system where you say what you need and someone executes it.
## The most common Design Patterns
I'll list the design patterns in a very condensed way and add links to more detailed articles about each of them; that keeps this article more direct and lets me get more technical in each of the links :D
### Singleton
This pattern is the king of exclusivity! It guarantees that only a single instance of a class exists and, on top of that, provides a global access point to it.
It's super useful when you want a single shared instance of a class across the whole application, like that friend everyone knows and loves!
{% embed https://dev.to/rflpazini/singleton-design-pattern-1n51 %}
### Factory
This pattern is like a party host who says: "Come on in, pick your room!".
It provides an interface for creating objects, but lets the subclasses decide which class they want to instantiate. It's extremely useful when you want to remove the worry of knowing exactly which class will be used to create objects, making the process freer and more creative, like picking the pizza flavor on game night without worrying about the pizzeria!
{% embed https://dev.to/rflpazini/factory-design-pattern-4e9n %}
### Builder
This pattern is like a chef who gives you the recipe but lets you decide which ingredients to use. It lets you construct complex objects step by step, separating the construction process from the final representation. It's super useful when you want to abstract the object creation process and build objects without having to specify every detail up front.
{% embed https://dev.to/rflpazini/builder-design-pattern-2hm3 %}
### Adapter
This pattern is like a power plug adapter! It allows incompatible interfaces from different classes to work together by wrapping one class with another. It works as a bridge between two interfaces, converting the interface of one class into another that clients expect. It's like using an adapter to plug your phone charger in anywhere in the world, making sure everything works just fine!
### Strategy
This pattern is like a menu of recipes! It defines a family of interchangeable algorithms and encapsulates each one in separate classes. That lets you choose the algorithm on the fly, based on specific conditions or requirements. It's like being able to pick the perfect recipe for each occasion, depending on what you have in the fridge!
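There's no dedicated deep-dive article for this one yet, so here's a minimal sketch in Go (illustrative names, not from a real project) of how Strategy lets you swap the algorithm at runtime:

```go
package main

import "fmt"

// DiscountStrategy is the family of interchangeable algorithms.
type DiscountStrategy interface {
	Apply(price float64) float64
}

// Each concrete strategy encapsulates one algorithm.
type NoDiscount struct{}

func (NoDiscount) Apply(price float64) float64 { return price }

type BlackFriday struct{}

func (BlackFriday) Apply(price float64) float64 { return price * 0.5 }

// Checkout depends only on the interface, so the algorithm
// can be picked on the fly based on conditions or requirements.
func Checkout(price float64, s DiscountStrategy) float64 {
	return s.Apply(price)
}

func main() {
	fmt.Println(Checkout(100, NoDiscount{}))  // 100
	fmt.Println(Checkout(100, BlackFriday{})) // 50
}
```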
### Facade
This pattern is like a universal remote control! It provides a single interface to a set of interfaces in a subsystem. It simplifies the use of complex systems by offering a high-level interface that hides all the complexity behind it. It's like using one remote to control all your electronic devices without having to fiddle with each one individually!
| rflpazini |
1,911,902 | My intro to React | Introduction In the second phase of my journey at Flatiron School, I dove deep into the world of... | 0 | 2024-07-04T18:43:12 | https://dev.to/nathanwelliver/my-intro-to-react-11oo | webdev, react, frontend, beginners | **Introduction**
In the second phase of my journey at Flatiron School, I dove deep into the world of React, a powerful JavaScript library for building user interfaces. This phase was packed with learning experiences, from understanding components and state management to handling side effects and client-side routing. In this blog post, I'll share some of the key technical aspects I learned, the challenges I faced, and the rewarding moments that made this phase truly impactful.
**Understanding Components and State in React**
One of the core concepts in React is the idea of components. Components allow you to break down your UI into reusable pieces, making your code more modular and maintainable. Alongside components, state management plays a crucial role.
**Setting Up State**
Before setting up state in a component, it's important to determine whether the component actually needs state. If it does, you can set it up using the 'useState' hook. Here's a simple example:
```jsx
import { useState } from "react";

// Inside a function component:
const [allPies, setAllPies] = useState([]);
```
In this snippet, 'allPies' is the state variable, and 'setAllPies' is the function used to update the state. This setup allows you to manage the state within your component efficiently.
**Managing State with Functions and Effects**
You can set the state either through a function or using the 'useEffect' hook. The 'useEffect' hook is particularly useful for handling side effects, such as data fetching.
Here's an example of both approaches:
```jsx
import { useEffect, useState } from "react";

const MyComponent = () => {
  const [allPies, setAllPies] = useState([]);
  const [editPie, setEditPie] = useState(null);

  useEffect(() => {
    fetch("http://localhost:3001/pizzas")
      .then(response => response.json())
      .then(data => setAllPies(data))
      .catch(error => console.log(error));
  }, []);

  const handleClick = (pizza) => {
    setEditPie(pizza);
  };

  return (
    <div>
      {/* Render your component UI here */}
    </div>
  );
};
```
**Overcoming Challenges with Side Effects**
Understanding side effects and using the same child component for different parent components were some of the toughest challenges I faced. The concept of side effects, especially, took a while to grasp. Initially, I struggled to understand when and how to use the useEffect hook. Through numerous discussions with Peer Technical Coaches (PTCs) and a lot of trial and error, I finally understood how to manage side effects and reuse components effectively.
One practical example was fetching data from an API. I needed to ensure that the data was fetched every time the component was mounted or when specific dependencies changed. The 'useEffect' hook helped manage this efficiently, as shown in the earlier code snippet. Another challenge was reusing a child component in different parent components without breaking the state or functionality. This required a solid understanding of props and how to pass data effectively between components.
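To make that concrete, here's a rough sketch of the idea (the component and prop names are illustrative, not from my actual project): the same child works for two different parents as long as it relies only on its props.

```jsx
// Child component: knows nothing about its parents, only its props
const PieCard = ({ pie, onSelect }) => (
  <div onClick={() => onSelect(pie)}>
    <h3>{pie.name}</h3>
  </div>
);

// One parent uses the child to place orders...
const MenuPage = ({ pies }) => (
  <div>
    {pies.map((pie) => (
      <PieCard key={pie.id} pie={pie} onSelect={(p) => console.log('ordered', p.name)} />
    ))}
  </div>
);

// ...while another reuses it to pick a pie to edit
const AdminPage = ({ pies, setEditPie }) => (
  <div>
    {pies.map((pie) => (
      <PieCard key={pie.id} pie={pie} onSelect={setEditPie} />
    ))}
  </div>
);
```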
**Building My Project: A Rewarding Experience**
The most rewarding part of this phase was building my own website for the project. I was amazed at how quickly I could put together the functionalities I needed. In less than a day, I had most of my project completed. This experience not only boosted my confidence but also reinforced the skills I had learned throughout the phase.
Creating a dynamic and responsive website required me to apply everything I had learned about React. I had to think critically about how to structure my components, manage state, and handle user interactions. For instance, implementing client-side routing with React Router allowed me to create a seamless navigation experience for users without reloading the entire page. Here's a basic example of setting up React Router:
```jsx
import { BrowserRouter as Router, Route, Switch } from 'react-router-dom';

const App = () => (
  <Router>
    <Switch>
      <Route path="/" exact component={HomePage} />
      <Route path="/about" component={AboutPage} />
      <Route path="/contact" component={ContactPage} />
    </Switch>
  </Router>
);

const HomePage = () => <div>Home</div>;
const AboutPage = () => <div>About</div>;
const ContactPage = () => <div>Contact</div>;

export default App;
```
**Key Takeaways**
Start Your Projects: No matter how daunting a project or task may seem, the hardest part is getting started. Once you begin, you'll often find it easier to make progress than you anticipated.
State Management: Properly managing state is crucial for creating dynamic and interactive UIs. Understanding when and how to use state can greatly enhance your development process.
Side Effects: Handling side effects, such as data fetching, requires a good grasp of the 'useEffect' hook. This knowledge is essential for working with external data sources in React.
Component Reusability: Learning to reuse components across different parts of your application can save you time and effort, making your codebase more efficient and maintainable.
**Conclusion**
Phase 2 at Flatiron School has been a transformative experience. From grappling with complex concepts to building a functional website, I've grown significantly as a web developer. The skills and knowledge I've gained will undoubtedly help me in my future career. If there's one piece of advice I'd give to fellow learners, it's to start on your projects—because the journey of a thousand miles begins with a single step. | nathanwelliver |
1,913,060 | I Am Done With Self-Hosting | Changes on the home front | 0 | 2024-07-05T18:06:24 | https://www.danielmoch.com/posts/2024/07/i-am-done-self-hosting/ | personal, selfhost, update | ---
title: I Am Done With Self-Hosting
published: true
date: 2024-07-04 18:43:00 UTC
tags: personal,selfhost,update
canonical_url: https://www.danielmoch.com/posts/2024/07/i-am-done-self-hosting/
description: Changes on the home front
---
This is a personal post on why, after almost ten years, I am no longer self-hosting my blog, mail and other servers.
First, a clarification: up until now a lot of my data has been hosted on various virtual private server (VPS) providers. This may walk up to the line between proper self-hosting and ... something else. Still, I continue to call what I was doing self-hosting, not least because data I felt needed to stay private remained on servers physically under my control. I also think it qualifies because I was still fully responsible for OS-level maintenance of my VPS.
Pedantry aside, that maintenance was a primary driver for moving to a different hosting model. I have spent long enough maintaining Linux and OpenBSD servers that the excitement has long since worn off, and so giving time to it on weekends and holidays became untenable in the face of other options. There are services run by companies I trust that will host my email and most other data I might care to access remotely. They will even apply end-to-end encryption to the majority of that data. This is far from a zero trust arrangement, I admit. Still, the companies I have migrated to have cleared a threshold that I am comfortable with for the data they are hosting.
Plus, these companies have better availability guarantees than I could ever hope to achieve on my own. For example, my VPS was configured to forward incoming mail through a Wireguard VPN into a mail server hosted at my home. But as a result, if the power went out at my house, email service would go down. This was fine most of the time, but when I was on vacation the issue could persist until I got home to reboot the necessary hardware. Spending my vacation anxious about my servers came to seem like a ridiculous trade in exchange for more control over my data.
Probably the biggest surprise was that providers often have free tiers that are sufficient for my needs, meaning I'm actually paying less in hosting fees than I was before. I stand to make back even more if I sell hardware I'm not using at the moment (although I'm pretty good at coming up with new uses).
But mostly I'm glad to get the time back. I'll use it to spend more time with my family, or maybe write more. At the very least I'll be more present on vacation. | djmoch |
1,911,901 | How to host Ghost CMS on AWS (EC2 + EFS + RDS) | This article will dive into hosting Ghost CMS (open source) in AWS using EC2 + EFS + RDS... | 0 | 2024-07-04T18:42:17 | https://dev.to/sai_sameer_syed/how-to-host-ghost-cms-on-aws-ec2-efs-rds-2lei | aws, ghostcms, amazonwebservices, community | This article will dive into hosting Ghost CMS (open source) in AWS using EC2 + EFS + RDS services.
Over the weekend I did a small project: hosting my blog, built with Ghost CMS, on AWS using EC2 + EFS + RDS. Now I want to share my experience with you.
If you’re looking to run your own blog, media, or publishing site, Ghost CMS is a powerful open source app developed by John O’Nolan (former Head of Design at WordPress). We will not talk about Ghost CMS in depth in this article, but if you are into blog writing and media publishing you should definitely check out Ghost CMS. It offers top-notch features for publishing your articles and hosting them free of cost through an open-source license. Ghost CMS also has a paid offering, but it comes with a cost for taking the hosting troubles away from you, so you can just focus on writing and publishing the best content. In this article, we will explore how you can self-host Ghost CMS for yourself on AWS.
## The Services
We chose to run with AWS (EC2 + EFS + RDS) services, for a few reasons.
**EC2** -> because I tried to host it in ECS (Fargate), my preferred way, but failed, so I resorted to good old EC2. If you want to try hosting using containers, there is a community-supported Docker image and guidance available here. Let me know in the comments if you figure it out.
**EFS** -> For persistent storage of media, so that we do not lose our content if our EC2 instance happens to get terminated accidentally.
**RDS** -> For database needs, storing user and other content-related data.
## Network Setup
First, select the region you want to host your server (ex: us-east-1, N. Virginia).
- Create VPC
- Choose the number of AZs — 2.
- One subnet each for public and private. (Our EC2 web server will be in the public subnet and RDS in the private subnet.)
- 1 NAT Gateway in 1 AZ for making internet access available for resources and restricting access the other way around.
- Go with the default settings for the rest of the fields.
- Once you are ready click “Create VPC”, and you should have the VPC in a few minutes.
- When a VPC is created, AWS configures the connectivity between the Internet and your network space, and internally, via the subnets, route table configuration, and Internet Gateway. Unless you want to change your network setup there is no need to modify them for this demo.
## RDS Setup (Database)
- Go to the RDS page, and select Create RDS.
- In the RDS creation page, select the stand create option and choose MySQL Community edition as our DB engine.
- Select the MySQL version greater than 8 (8.0.35).
- Go with the free tier template in the next section unless you are following this article for your production setup then select the Production template which means RDS selects the best default configuration recommended for the production workload.
- Give the DB instance a name and a user name for login access.
- In the credentials management select self-managed credentials and auto-generated password unless you have a preferred password you want to use.
- In the instance configuration section, I chose t3.micro for this demo.
- In storage select GP3 cause it comes with the best and general purpose is all we need for this demo with a 20GB allocated storage. (in the storage autoscaling, de-select the autoscaling option)
- In the connectivity section, select don’t connect to an EC2 option as we don’t have the instance yet and we are going to connect them later.
- Select the VPC we created and Create a new DB subnet group option.
- Do not enable public access to RDS and select Create a new security group for DB.
- In the AZ, select the same zone in which you want to deploy your EC2 for faster connectivity though it would be hard to notice as they are in the same region anyway.
- NOTE: Give your DB an initial name under Additional Configuration. If you do not, RDS will not create a database for us; only an instance will be created.
- Leave the rest of the options as default.
- Once you are ready click Create database. After the DB is created you will find the pop-up with credentials information (password) copy and save them somewhere secure.
## EFS Setup
Go to the EFS page, and click Create File System.
- Give the file system a name and choose the VPC you are using for this demo.
- Click Create and EFS is created.
- For now, that’s it, folks. We will come back during EC2 setup to enable connectivity between our web server and filesystem and also mount it on the EC2.
## EC2 Setup
- Go to the EC2 page and select Launch instance.
- Give your EC2 a name and select the Ubuntu (22.04) version as Ghost CMS supports only 16.04, 18.04, 20.4, and 22.04 versions of Ubuntu currently.
- Select an instance type that works for you, at least t2.medium.
- Create a new key pair if you want to SSH to your instance from local.
- In the VPC settings, select the VPC we created for this Demo and public subnet.
- Select “Create Security Group” and select all three rules to allow SSH, HTTP, and HTTPS traffic from the internet. Do not worry we will remove the SSH access once we set up the ghost configuration.
- Under the storage configuration, select an 8 GB gp3 as root volume.
- In the same section click edit to add the file system we selected earlier.
- De-select the automount option and select the automatic security group attachment. This way AWS creates the inbound rules to EFS only from the EC2 security group. We can do it manually too but why go the hard way?
- That’s it, when you are ready create your instance.
- Once your instance is ready go to Elastic IP under the network and security section.
- Click Allocate Elastic IP and once the IP is allocated, associate it to the instance we created just now.
- It helps in two ways, one we now have a static IP at which our site can be accessed. Two we can use this elastic IP to map our DNS (your site address) to the server so that people can reach your site using the site address.
## Host Server Configuration in EC2
Go to your EC2 page and select the instance you just created. Click Connect using EC2 Instance Connect. You can also find the server setup configuration in ghost docs.
**Update system packages**
Once you log in use the commands to update your packages and install them.
**Update package lists**
> sudo apt-get update
**Update installed packages**
> sudo apt-get upgrade
If your system is acting up and asking for a restart to load new system packages or libraries, go to the instance and reboot it using the option available under Actions. Then reconnect to the system.
**Create a new user for ghost**
Create a new user and follow prompts
> sudo adduser <user>
**Add user to superuser group to unlock admin privileges**
> sudo usermod -aG sudo <user>
**Then log in as the new user**
> su - <user>
## Install Nginx
Ghost CMS uses NGINX server. Install Nginx server and activate ufw for HTTP and HTTPS connect in the firewall.
**Install NGINX**
> sudo apt-get install nginx
**Allow HTTP and HTTPS connections in the firewall**
> sudo ufw allow 'Nginx Full'
## Install MySQL
> sudo apt-get install mysql-server
skip the user creation in the local MySQL setup as we are going to use the RDS for our DB requirement.
## Install Node.js
**Download and import the Nodesource GPG key**
```bash
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg
```
**Create deb repository**
```bash
NODE_MAJOR=18 # Use a supported version
echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_$NODE_MAJOR.x nodistro main" | sudo tee /etc/apt/sources.list.d/nodesource.list
```
**Run update and install**
```bash
sudo apt-get update
sudo apt-get install nodejs -y
```
## Install Ghost CLI
> sudo npm install ghost-cli@latest -g
## Install EFS-UTILS
Since we are going to mount EFS for persistent storage, we first have to install the efs-utils library package. You can also find the instructions here.
First, install rust and cargo through rustup:
```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
. "$HOME/.cargo/env"
```
**To build and install debian package for efs-utils follow these commands.**
```bash
sudo apt-get update
sudo apt-get -y install git binutils rustc cargo pkg-config libssl-dev
git clone https://github.com/aws/efs-utils
cd efs-utils
./build-deb.sh
sudo apt-get -y install ./build/amazon-efs-utils*deb
```
## Mount EFS
Follow these instructions carefully: we are going to create a directory, mount our EFS to this new directory, and then install Ghost inside the mounted directory.
**Create directory: Change `sitename` to whatever you like**
> sudo mkdir -p /var/www/sitename
**Mount EFS: Find your EFS mount details on the EFS page and click Attach on the EFS created for this demo**
```bash
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport <your EFS ID>.efs.us-east-2.amazonaws.com:/ /var/www/sitename
```
**Set directory owner: Replace <user> with the name of your user**
> sudo chown <user>:<user> /var/www/sitename
**Set the correct permissions**
> sudo chmod 775 /var/www/sitename
**Then navigate into it**
> cd /var/www/sitename
You should be able to find your EFS mount command by going to the EFS page. Select the EFS you created for this demo and click Attach. You will find the mount command in the pop-up screen.
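One caveat worth flagging: a plain `mount` command does not persist across reboots. Since we installed efs-utils above, one common approach is an `/etc/fstab` entry using the `efs` mount type (this is my own suggestion, not part of the original walkthrough; the file system ID below is a placeholder, so double-check the syntax against the EFS documentation):

```bash
# /etc/fstab entry to remount the EFS volume automatically at boot
# Replace fs-0123456789abcdef0 with your own EFS file system ID
fs-0123456789abcdef0:/ /var/www/sitename efs _netdev,tls 0 0
```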
## Run the install process
> ghost install
During the installation process, Ghost CLI will ask multiple questions to configure the ghost server.
1. Blog URL: Provide your site URL and include the protocol, e.g. https://example.com (you might find you can't access your site via the www.example.com subdomain later if you gave only example.com as the input. You can use this article to fix that issue after the full site setup)
2. MySQL hostname: Copy the RDS instance endpoint and paste it here.
3. MySQL Username: Type the username you have given during the RDS setup.
4. MySQL Password: The one I asked you to copy and save somewhere safe :).
5. Ghost DB Name: The DB name I asked you to specifically create and not to forget during RDS setup.
6. Set up Nginx: Yes
7. Setup SSL: Yes (it asks for your email for SSL purposes)
8. Setup Systemd: Yes
9. Start Ghost: Yes.
If you followed the instructions closely, you should be able to access your Ghost CMS site at the Public IP/ DNS endpoint of your EC2. To access the admin configuration page just add “/ghost” at the end of your DNS and you should see the site configuration and login page.
## DNS Configuration
If you already have a domain you bought for your site. Go to the DNS provider and add a couple of A records. For this demo I have my domain registered with Route53, so I will configure the records in Route53 hosted zone.
Record 1
Record name: sitename.com,
Record Type: A
Value: <Static IP of your instance> (in our demo we have an elastic IP associated with the instance so add that here and save.
Record 2
Record Name: www.sitename.com,
Record Type: A,
Alias: select Yes (alias to another record in this hosted zone, since the other A record is also in here), then select the sitename.com record and save.
Save the settings, and in a few minutes your site should be served at your DNS name.
Please leave a like if you find this article helpful and add your feedback or ask your queries in the comments.
Thank you! | sai_sameer_syed |
1,910,945 | Pieces: Your Ultimate Coding AI Best Friend | Table of Contents Time to be honest Is Pieces candy? Pieces: the code catcher Pieces: the... | 21,413 | 2024-07-04T18:40:46 | https://dev.to/cbid2/pieces-your-ultimate-coding-ai-best-friend-6me | review, programming, ai, coding | {%- # TOC start (generated with https://github.com/derlin/bitdowntoc) -%}
## Table of Contents
* [Time to be honest](#time-to-be-honest)
* [Is Pieces candy?](#is-pieces-candy)
* [Pieces: the code catcher](#pieces-the-code-catcher)
* [Pieces: the code’s Pokédex](#pieces-the-code’s-pokédex)
* [Pieces: the time traveler](#pieces-the-time-traveler)
* [Things to improve and add in the future](#things-to-improve-and-add-in-the-future)
* [Now it’s your turn](#now-it’s-your-turn)
* [Credits](#credits)
* [Footnotes](#footnotes)
{%- # TOC end -%}
## Time to be honest
I have a confession to make. I struggle with describing the code snippets I use in my technical articles. I know, I know, it’s shocking, but all hope isn’t lost. For the past few months, I have been using this awesome tool to help me overcome this struggle. This, my friend, is called Pieces. I first learned about it from my friend, @sophyia, a former DevRel at the company. She asked me to join their guest writing program after reading my articles on freeCodeCamp's website. I’m a believer in trying a product before writing about it. So, I decided to download [<u>the desktop app</u>](https://docs.pieces.app/installation-getting-started/what-am-i-installing) and install the [<u>Chrome extension</u>](https://docs.pieces.app/extensions-plugins/web-extension) to see what it’s about. After a few days of using it, I was hooked. Now before I start sharing more tech confessions, let me tell you what exactly Pieces is.
## Is Pieces candy?
As delicious as Reese's Pieces is, this type of Pieces is not something you can eat. It is actually a free AI tool that assists you when you are coding. Think of it as a technical Tinkerbell from Peter Pan[1](#fn1). If you want to learn more about what Pieces can do, check out their short intro video here ⬇️
{% youtube jqHomwNISVE %}
Now that you two have met, let me show you how I use Pieces in my workflow.
## Pieces: the code catcher
In the past, I’d save code snippets that I wanted to use in my blog posts with the Chrome browser's bookmark feature. Unfortunately, I’d either scramble through so many tabs or the **Bookmarks** tab to find them. This left me with a headache. Once I started using Pieces’ web extension, I had a much easier time finding my saved code snippets. Here’s how.
**Step one:** **Highlight a line of code**
![Screenshot of highlighted code](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/831l5hsppsz3a413ct5t.png)
**Step two:** **Right-click and pick the option, _Save to Pieces_**.
![Screenshot of Save to Pieces option being highlighted](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lk3b5w0v01iq4tri5dne.png)
**Step three:** Go to **Saved Snippets** area of the Desktop app
![Screenshot of highlighted code appearing in the Saved Materials area of the Desktop app](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5bym5mov1an4dlsdwk5p.png)
And voilà, a saved code snippet! It's like a technical version of a Pokeball for catching Pokemon!😊 Pretty cool right? 😉
![Pieces: Your Ultimate Coding AI Best Friend](https://onepubli-sh.nyc3.digitaloceanspaces.com/uploads/537433fd-e15e-4c01-b0e7-348a7134635d.gif)
Now before I start rambling about this feature, there's another one I want to show you! 😀
## Pieces: the code’s Pokédex
In the Desktop app, there’s a feature called [<u>Copilot chat</u>](https://docs.pieces.app/features/pieces-copilot). It’s like the Pokedex but for coding! 😀
![Pieces: Your Ultimate Coding AI Best Friend](https://onepubli-sh.nyc3.digitaloceanspaces.com/uploads/e08b150f-e656-44cc-94a3-8cf73fa23036.gif)
I tend to use it if I’m struggling with fixing a line of code I’m creating for my open source contributions or projects. For example, I was working on the JavaScript file for my 404 page project.
{% github https://github.com/CBID2/404-page %}
I wanted to reference a line of code from that file in the project's README, but I was struggling to describe it. After doing some mental ping pong, I typed my code in the copilot chat and asked it for some help:
![Pieces: Your Ultimate Coding AI Best Friend](https://onepubli-sh.nyc3.digitaloceanspaces.com/uploads/f8f3776f-67c9-43a6-bd2a-3311b9b06db6.png)
In the above screenshot, the copilot explains that my code snippet selects the search input field and search button elements from the webpage. It also mentions that after the user clicks on the search button, the event listener sends the user to their desired section of Codedex's website. Even though I didn’t win the challenge, I did win in knowledge! 😀
## Pieces: the time traveler
In the Pieces Copilot, there is a new feature called [<u>Live Context</u>](https://docs.pieces.app/product-highlights-and-benefits/live-context). It looks back at the content you recently viewed. It's kind of like a time machine minus the risks of changing people's futures or causing them to not exist. Now before I start rambling on about the cons of time traveling, let's see how it works[2](#fn2):
Click on the **New Chat** button.
![Step 1 of using Live Context](https://onepubli-sh.nyc3.digitaloceanspaces.com/uploads/93b3a402-58a3-456e-93c0-a040548368ba.gif)
Turn on the **Live Context** toggle.[3](#fn3)
![Step 2 of using Live Context](https://onepubli-sh.nyc3.digitaloceanspaces.com/uploads/ccf1c973-468c-4292-b075-9a06510becb4.gif)
Then, type your question or idea in the chat or use the **Suggested Prompts**.
![Step 3 of using Live Context](https://onepubli-sh.nyc3.digitaloceanspaces.com/uploads/47a17ae7-e567-40be-a21f-22a027d32536.png)
As shown in the screenshot above, I've made it to the writing stage of my technical article. In the past, I would rely on my brain to remember what I've written, which gets annoying. Once I started using the Live Context feature, it was easier for me to track my progress. This feature is also helpful if you are looking through files in a large codebase. For example, I was reviewing an issue for an open source project, but I was not sure which files needed the changes. So I went to the Copilot chat, typed "Where can I add the canonical links?", and got this response:
![Screenshot of my conversation with the copilot chat](https://onepubli-sh.nyc3.digitaloceanspaces.com/uploads/1363e72b-0ccc-4960-8787-94351b008f11.png)
From this, I got advice on how to structure the canonical links and where to place them. All in all, the new Live Context feature is awesome. Need more tips? Check out the @get_pieces team's blog post, [<u>20 Novel AI Prompts Made Possible Only by Pieces Copilot+</u>](https://code.pieces.app/blog/20-novel-ai-prompts-made-possible-only-by-pieces-copilot).
## Things to improve and add in the future
As much as I found Pieces enjoyable to use, there are some aspects of the tool that could use some improvement. Having to turn on the Live Context switch each time I start a chat on the Desktop app can get a bit tedious, so it would be beneficial to be able to toggle it once and have the setting persist. Also, the Copilot chat feature does not have a text-to-speech option. This can make it difficult for users with visual impairments to get through the AI's long responses, and adding this option would make reading them more bearable.
In the future, it would be awesome if there is a mobile version of the Desktop app. I sometimes work on my blog posts on my tablet, so having this would make it easier for me to finish them, especially if I’m away from my laptop.
## Now it’s your turn
There you have it folks, my product review of Pieces! Whether you are a technical writer or a developer, it is a great tool that improves your workflow. Also, the people at the company are receptive to feedback and work fast to address your concerns. Overall, I recommend adding Pieces to your tech stack. Now enough of me talking, click on the links below to download the tool, join the community, connect with me, and start working! 🙂
{% cta https://pieces.app/
%} Install Pieces 🤖 {% endcta %}
{% cta https://discord.gg/getpieces
%}
🤝 Join Pieces' Community {% endcta %}
{% cta
https://linktr.ee/ChrissyCodes
%}
Check out my Linktree 🌐 {% endcta %}
## Credits
Pokedex GIF by [<u>1jps</u>](https://giphy.com/gifs/1jps-pokedex-pokemonsilver-giEBaPNKEtjZtK43oX)
Pokemon GIF by [<u>ruined childhood</u>](https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExbm14dnNtY3FjaXRpOXNlaTJ1ajV4MHZxNzM3aXVpZGZuNjF3MjRpZSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/HQuZn367GgytO/giphy.gif)
---
## Footnotes
1. This is a fairytale about a forever young boy who takes a group of British children to his home island, Neverland[↩︎](#fnref1)
2. The Linux version of this feature will be released soon. With that in mind, this brief tutorial is macOS-focused. If you need more help, check out [<u>the Live Context section of Pieces’ documentation</u>](https://docs.pieces.app/product-highlights-and-benefits/live-context).[↩︎](#fnref2)
3. Make sure that you have the Workstream Pattern Engine enabled to gather the context first. To learn more about this, check out [<u>this section of the Live Context tutorial</u>](https://docs.pieces.app/product-highlights-and-benefits/live-context#enablingdisabling-the-wpe)[↩︎](#fnref3)
| cbid2 |
1,837,573 | Desvendando o Async/Await: Simplificando a Programação Assíncrona com JavaScript | Conceitos Básicos O que é Programação Assíncrona? Programação assíncrona é uma... | 0 | 2024-07-04T18:36:05 | https://dev.to/gabrielteixeira44/desvendando-o-asyncawait-simplificando-a-programacao-assincrona-e0c | ## Basic Concepts
### What is Asynchronous Programming?
Asynchronous programming is a programming technique that allows the execution of operations that may take some time to complete, such as network requests, file reads, or database queries, without blocking the program's main flow. Instead of waiting for these operations to finish, the program can keep executing other tasks. This is especially useful in web applications, where responsiveness and performance are crucial to the user experience.
### Callbacks and Promises
Before the advent of async/await, asynchronous programming in JavaScript was traditionally done using callbacks and Promises.
- **Callbacks**: A callback is a function passed as an argument to another function, to be executed after an asynchronous operation completes. However, callbacks can lead to what is known as "callback hell", where multiple nested callbacks make the code hard to read and maintain.
```javascript
function fetchData(callback) {
setTimeout(() => {
callback('data');
}, 1000);
}
fetchData((data) => {
console.log(data);
});
```
- **Promises**: Promises were introduced to improve the readability of asynchronous code. A Promise represents a value that may be available now, in the future, or never. Promises allow chaining asynchronous operations in a clearer, more manageable way.
```javascript
function fetchData() {
return new Promise((resolve, reject) => {
setTimeout(() => {
resolve('data');
}, 1000);
});
}
fetchData().then((data) => {
console.log(data);
}).catch((error) => {
console.error(error);
});
```
## What is Async/Await?
`async` and `await` are keywords introduced in ES2017 (ES8) that simplify the use of Promises even further, allowing you to write asynchronous code that looks synchronous.
- **async**: The `async` keyword is used to declare an asynchronous function. An async function always returns a Promise.
```javascript
async function fetchData() {
return 'data';
}
fetchData().then((data) => {
console.log(data);
});
```
- **await**: The `await` keyword can only be used inside an async function. It pauses the function's execution until the Promise is resolved, simplifying the code flow.
```javascript
async function fetchData() {
const response = await fetch('https://api.example.com/data');
const data = await response.json();
return data;
}
fetchData().then((data) => {
console.log(data);
});
```
With `async` and `await`, asynchronous code becomes more linear and easier to understand, eliminating the need for excessive chaining of `then` and `catch`.
## Practical Examples and Error Handling
### Simple Example
Let's start with a simple example of an async function that fetches data from an API using `async/await`.
```javascript
async function fetchData() {
const response = await fetch('https://api.example.com/data');
const data = await response.json();
return data;
}
fetchData().then(data => {
console.log(data);
});
```
### Multiple Asynchronous Calls
Sometimes you need to make multiple asynchronous calls and wait until all of them are done. You can use `Promise.all` for that.
```javascript
async function fetchMultipleData() {
const [response1, response2] = await Promise.all([
fetch('https://api.example.com/data1'),
fetch('https://api.example.com/data2')
]);
const data1 = await response1.json();
const data2 = await response2.json();
return { data1, data2 };
}
fetchMultipleData().then(({ data1, data2 }) => {
console.log(data1, data2);
});
```
### Error Handling with Try/Catch
Using `try/catch` in async functions lets you capture and handle errors clearly and concisely.
```javascript
async function fetchData() {
try {
const response = await fetch('https://api.example.com/data');
if (!response.ok) {
throw new Error('Network response was not ok');
}
const data = await response.json();
return data;
} catch (error) {
console.error('Failed to fetch data:', error);
}
}
fetchData().then(data => {
if (data) {
console.log(data);
}
});
```
### Complete Example: API Integration and Error Handling
Let's combine everything we've learned into a more complete example. Suppose we're building a function that fetches and processes data from multiple APIs and handles possible errors.
```javascript
async function fetchUserData() {
try {
const userResponse = await fetch('https://api.example.com/user');
if (!userResponse.ok) {
throw new Error('Failed to fetch user data');
}
const userData = await userResponse.json();
const postsResponse = await fetch(`https://api.example.com/users/${userData.id}/posts`);
if (!postsResponse.ok) {
throw new Error('Failed to fetch user posts');
}
const userPosts = await postsResponse.json();
return { userData, userPosts };
} catch (error) {
console.error('Error fetching data:', error);
}
}
fetchUserData().then(data => {
if (data) {
console.log('User Data:', data.userData);
console.log('User Posts:', data.userPosts);
}
});
```
## Conclusion
`async/await` significantly simplifies asynchronous programming in JavaScript, making the code more readable and easier to maintain. With the ability to write asynchronous code that looks synchronous, developers can avoid the common problems of nested callbacks and excessive Promise chaining.
## Additional Resources
To deepen your knowledge of `async/await` and asynchronous programming, here are some useful resources:
- [MDN documentation on async/await](https://developer.mozilla.org/en-US/docs/Learn/JavaScript/Asynchronous/Async_await)
- [MDN article on Promises](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise)
- [Video tutorial on async/await](https://www.youtube.com/watch?v=V_Kr9OSfDeU) | gabrielteixeira44 |
|
1,911,900 | Marrying Perl to Assembly | This is probably one of the things that should never be allowed to exist, but why not use Perl and... | 0 | 2024-07-04T18:34:13 | https://dev.to/chrisarg/marrying-perl-to-assembly-91m | perl, assembly, multilanguage, performance | This is probably one of the things that should never be allowed to exist, but why not use Perl and its capabilities to inline foreign code, to FAFO with assembly without a build system? Everything in a single file! In the process one may find ways to use Perl to enhance NASM and vice versa. But for now, I make no such claims : I am just using the perlAssembly git repo to illustrate how one can use Perl to drive (and learn to code!) assembly programs from a single file.
(Source code may be found in the [perlAssembly repo](https://github.com/chrisarg/perlAssembly) )
## x86-64 examples
### Adding Two Integers
Simple integer addition in Perl - this is the [Hello World](https://github.com/chrisarg/perlAssembly/blob/main/addIntegers.pl) version of the [perlAssembly repo](https://github.com/chrisarg/perlAssembly)
But if we can add two numbers, why not add many, many more?
### The sum of an array of integers
Explore multiple equivalent ways to add *large* arrays of short integers (e.g. between -100 to 100) in Perl. The [Perl](https://github.com/chrisarg/perlAssembly/blob/main/addArrayofIntegers.pl) and the [C](https://github.com/chrisarg/perlAssembly/blob/main/addArrayofIntegers_C.pl) source files contain the code for:
* ASM\_blank : tests the speed of calling ASM from Perl (no computations are done)
* ASM : passes the integers as bytes and then uses conversion operations and scalar floating point addition
* ASM\_doubles : passes the array as a packed string of doubles and does scalar double floating point addition in assembly
* ASM\_doubles\_AVX: passes the array as a packed string of doubles and does packed floating point addition in assembly
* ForLoop : standard for loop in Perl
* ListUtil: sum function from list utilities
* PDL : uses summation in PDL
Scenarios marked w\_alloc allocate memory for each iteration to test the speed of pack; those marked
as wo\_alloc use a pre-computed data structure to pass the array to the underlying code.
Benchmarks of the first scenario give the true cost of offloading the summation of a Perl array to a given
function when the source data live in Perl. Timing the second scenario benchmarks the speed of the
underlying implementation.
This example illustrates
* an important (but not the only one!) strategy to create a data structure
that is suitable for Assembly to work with, i.e. a standard array of the appropriate type,
in which one element is laid adjacent to the previous one in memory
* the emulation of declaring a pointer as constant in the interface of a C function. In the
AVX code, we don't FAFO with the pointer (RSI in the calling convention) to the array directly,
but first load its address to another register that we manipulate at will.
#### Results
Here are the timings!
| | mean | median | stddev |
|------------------------------|--------|--------|--------|
|ASM\_blank | 2.3e-06| 2.0e-06| 1.1e-06|
|ASM\_doubles\_AVX\_w\_alloc | 3.6e-03| 3.5e-03| 4.2e-04|
|ASM\_doubles\_AVX\_wo\_alloc | 3.0e-04| 2.9e-04| 2.7e-05|
|ASM\_doubles\_w\_alloc | 4.3e-03| 4.1e-03| 4.5e-04|
|ASM\_doubles\_wo\_alloc | 8.9e-04| 8.7e-04| 3.0e-05|
|ASM\_w\_alloc | 4.3e-03| 4.2e-03| 4.5e-04|
|ASM\_wo\_alloc | 9.2e-04| 9.1e-04| 4.1e-05|
|ForLoop | 1.9e-02| 1.9e-02| 2.6e-04|
|ListUtil | 4.5e-03| 4.5e-03| 1.4e-04|
|PDL\_w\_alloc | 2.1e-02| 2.1e-02| 6.7e-04|
|PDL\_wo\_alloc | 9.2e-04| 9.0e-04| 3.9e-05|
Let's say we wanted to do this toy experiment in pure C (using Inline::C of course!)
This code obtains the integers as a packed "string" of doubles and forms the sum in C
```C
double sum_array_C(char *array_in, size_t length) {
double sum = 0.0;
double * array = (double *) array_in;
for (size_t i = 0; i < length; i++) {
sum += array[i];
}
return sum;
}
```
Here are the timing results:
| | mean | median | stddev |
|------------------------------|--------|--------|--------|
|C\_doubles\_w\_alloc |4.1e-03 |4.1e-03 | 2.3e-04|
|C\_doubles\_wo\_alloc |9.0e-04 |8.7e-04 | 4.6e-05|
What if we used SIMD directives and parallel loop constructs in OpenMP? All three combinations were tested, i.e. SIMD directives
alone (the C equivalent of the AVX code), OpenMP parallel loop threads and SIMD+OpenMP.
Here are the timings!
| | mean | median | stddev |
|------------------------------|--------|--------|--------|
|C\_OMP\_w\_alloc |4.0e-03 | 3.7e-03| 1.4e-03|
|C\_OMP\_wo\_alloc |3.1e-04 | 2.3e-04| 9.5e-04|
|C\_SIMD\_OMP\_w\_alloc |4.0e-03 | 3.8e-03| 8.6e-04|
|C\_SIMD\_OMP\_wo\_alloc |3.1e-04 | 2.5e-04| 8.5e-04|
|C\_SIMD\_w\_alloc |4.1e-03 | 4.0e-03| 2.4e-04|
|C\_SIMD\_wo\_alloc |5.0e-04 | 5.0e-04| 8.9e-05|
#### Discussion of the sum of an array of integers example
* For calculations such as this, the price that must be paid is all in memory currency: it
takes time to generate these large arrays, and for code with low arithmetic intensity this
time dominates the numeric calculation time.
* Look how insanely effective `sum` in List::Util is: even though it has to walk the Perl
array whose elements (the *doubles*, not the AV*) are not stored in a contiguous area in memory,
it is no more than 3x slower than the equivalent C code C\_doubles\_wo\_alloc.
* Look how optimized PDL is compared to the C code in the scenario without memory allocation.
* Manual SIMD coded in assembly is 40% faster than the equivalent SIMD code in OpenMP (but it is
much more painful to write)
* The threaded OpenMP version achieved equivalent performance to the single thread AVX assembly
programs, with no obvious improvement from combining SIMD+parallel loop for pragmas in OpenMP.
* For the example considered here, it thus makes ZERO sense to offload a calculation as simple as a
summation, because ListUtil is already within 15% of the assembly solution (at a later iteration
we will also test AVX2 and AVX512 packed addition to see if we can improve the results).
* If however, one was managing the array, not as a Perl array, but as an area in memory through
a Perl object, then one COULD consider offloading. It may be fun to consider an example in
which one adds the output of a function that has an efficient PDL and assembly implementation
to see how the calculus changes (in the to-do list for now).
### Disclaimer
The code here is NOT meant to be portable. I code on Linux and x86-64, so if you are looking into Windows' ABI or ARM, you will be disappointed. But as my knowledge of ARM assembly grows, I intend to rewrite some examples in ARM assembly!
| chrisarg |
1,911,124 | Top 3 SaaS Services for Importing CSV Files | Data flows consistently and constantly across multiple departments and between different companies.... | 0 | 2024-07-04T18:32:22 | https://developerpartners.com/blog/f/top-3-saas-services-for-importing-csv-files | csv, saas, database, softwaredevelopment | Data flows consistently and constantly across multiple departments and between different companies. Thus, importing data into applications is crucial for efficient business operations. However, this data exchange process is also a frequent pain point. Managing the data causes companies to lose valuable time trying to make sense of messy spreadsheets.
Often, entire business teams will find themselves waiting, entirely dependent on IT and data specialists. This unnecessary distraction stops entire departments from productive day-to-day operations, delaying decision-makers from making data-driven choices.
The end result for business users is a frustrating and inefficient experience and hours worth of productivity lost.
A key challenge faced by B2B SaaS startups is that their potential clients already have existing data. This is a problem. Migrating to a new system is time-consuming and costly. Even if the new software shows promise, most won’t risk losing crucial data during the transition. Most clients seek improved software solutions, but not at the expense of compromising existing data integrity.
Data transfer limitations are another concern that forces clients to stay with their current software provider.
To address this issue, SaaS startups should incorporate a data import feature where clients can transfer their data from previous systems to a new one with ease. Platforms like Flatfile, Dromo, and CSVBox offer solutions for SaaS companies to streamline the implementation of front-end processes and initial validation. This helps simplify the first half of the data migration process.
##The Three Best Data Import Tools
Data importers affect customer experience. They’re important for a company’s bottom line.
Three SaaS companies stand out in data file importing: Dromo, Flatfile, and CSVBox.
Understanding the differences between Flatfile, Dromo, and CSVBox will help in choosing the right customer data onboard experience.
###Flatfile
![Flatfile - service for importing files](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y8k1xllegsieo3ddclyq.png)
Created in 2018 by David Boskovic and Eric Crane, [Flatfile](https://flatfile.com/) has since become an all-in-one platform after raising $100 million across multiple investment rounds in six years. It describes itself as the “easiest, fastest, and safest way for developers to build the ideal data file import experience.”
####Ease of Integration
Using Flatfile as a data import tool gives companies a customizable importer to implement and improve as they see fit.
The biggest downside is it requires a lot of legwork for engineers to transform Flatfile into exactly what your company needs.
####Demo
You can request a demo by contacting Flatfile. Fortunately, there is also a YouTube video that provides a detailed demo of Flatfile features and products.
{% embed https://www.youtube.com/watch?v=WQOZaTacva8 %}
####Strengths
- Large-scale data manipulation.
- Branding-tailored customer experience.
- Real-time data validation and process.
- Support for up to 1GB files.
- SOC 2 Type II, HIPAA, CCPA, and GDPR compliance
- Multi-team collaboration.
####Pricing
- Flatfile offers a free “Starter” tier that’s capped at 50 files a month.
- The “Professional” tier is billed annually but is priced at $799 per month.
- Larger businesses can avail of the custom-priced “Enterprise” tier.
###Dromo
![Dromo - service for importing files"](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ii5prq9vzmyc7uvf1wrr.png)
[Dromo ](https://dromo.io/) is a middle-ground between Flatfile and CSVBox. It’s neither as cheap as the latter nor as expensive as the former. A big downside is that it lacks the security certifications that Flatfile (SOC 2 TYPE 2, HIPAA, and GDPR) and CSVBox (SOC 2 TYPE 2) have.
####Ease of Integration
You can embed Dromo directly into your app or website. An alternative is to trigger it using their in-house Headless API.
####Demo
Dromo has many demos on their site that you can explore. They also have some on YouTube.
{% embed https://www.youtube.com/watch?v=xSFuGOePMeA %}
####Strengths
- Excellent customer service.
- Intuitive and user-friendly UI.
- Accessible platform
- Best user experience, by far.
####Pricing
- Dromo’s “Starter” tier starts at $0 a month but you have to pay $2.50 per import
- Upgrading to the “Professional” tier will cost $399 per month but you get 250 imports for free monthly. Afterwards, you only need to pay $2 per import.
- The “Enterprise” tier includes unlimited imports, guaranteed uptime, live support, and integration calls with a shared Slack channel, among others for an undefined cost.
###CSVBox
![CSVBox - service for importing files](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h9y5o4ubh9e3sk6hdn3t.png)
The best-entry-level data importer. It offers a ton of features and validation rules with commendable customer service. The only downside? It has a basic UI.
####Ease of Integration
[CSVBox](https://csvbox.io/) is particularly beneficial for companies that require support for larger files (up to 500,000 rows) and need a reliable platform that can manage such volume without the associated cost. It offers fast import speeds, claiming that users can enjoy a “production-ready data importer in minutes, not weeks.”
####Demo
CSVBox shines when it comes to transparency and demoing their product. There is a short demo video of CSVBox on YouTube.
{% embed https://www.youtube.com/watch?v=fy3DH_f5oGE&t=3s %}
A short video is great, but the most exciting thing about CSVBox demos is the live demo page on their site, where you can actually interact with the importer and see it in action. Just head over to the link below and click the “Start” button to see the CSVBox importer in action:
[https://csvbox.io/demo1](https://csvbox.io/demo1)
####Strengths
- Affordable.
- Straightforward.
- Customized styling.
- Supports import files up to 500,000 rows.
- SOC 2 Type II and GDPR compliant.
####Pricing
- CSVBox offers five monthly subscription service tiers: Sandbox (free), Startup ($19), Pro ($49), Growth ($99), and Plus ($199).
##Choosing the Right Solution For Your Business
When choosing a data file importer for your business, consider these four factors: features, pricing, white labeling, and security.
###Features
There’s no one-size-fits-all approach to importing CSV files. How your business operates will determine the data file importer features you need. Do you need to integrate the importer with third-party systems? Are you planning to incorporate automation capabilities using an API? Do you prefer a no-code setup?
As your business grows and its needs change, find a product that can meet its needs today and has enough room to accommodate its future needs.
###Pricing
Look for products that offer clear and upfront pricing. The last thing you want is to pay for a product that doesn't fully outline its available features, only to find out later that what your business needs is available only in more expensive tiers or plans.
The two most common data import pricing models are per-usage and flat billing. Each has its advantages and drawbacks. Per-usage charges are best for businesses with fluctuating needs. But, this also means that usage spikes can make pricing unpredictable. On the other hand, flat-rate monthly or annual charges require fixed payments, regardless of use frequency.
If your business usage is unpredictable, flat-rate billing is simpler and safer. If your usage rate is low, per-usage billing is more economical, as long as you carefully monitor usage.
Regardless of the billing cycle you choose, what’s important is that you’re aware of all the potential fees and additional charges involved. If possible, ask for a multi-year pricing guarantee.
###White Labeling
White labeling is one of the most common features SaaS companies upcharge. Make it clear if the software can let you customize the importer based on your organization’s brand identity. Even if this isn’t a requirement for your business right now, brand identity might become crucial down the line.
###Safety and privacy
Is it important that the software provider never gains access to your data files? If it is, look for self-hosted options. An alternative is self-contained importers embedded in web browsers, which use a privacy-first architecture. Either lets you avoid processing and storing files on outside servers.
Security certifications like SOC 2 and compliance with other safety regulations should be a minimum requirement when evaluating organizations you will trust with your data.
##[Developer Partners](https://developerpartners.com/) — Your Comprehensive Data Management Solution Provider
![Developer Partners logo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z88dg3jr89rsfbsedgfx.png)
We understand the challenges B2B SaaS companies face when it comes to data import functionality.
Tools like Flatfile, Dromo, and CSVBox can only do so much. They might excel at parsing and validating data from CSV files, but this is just one part of the process. The next step is more critical. Integrating the parsed data into your core database requires custom back-end development. This is where our expertise in custom back-end services comes into play.
We understand the value of Flatfile, Dromo, and CSVBox in business operations. These services help streamline the front-end process of importing, parsing, and validating CSV data. It’s these solutions that allow development teams to focus on core functionalities without worrying about the data intake process.
However, your B2B SaaS application’s back-end still needs custom development to handle the integration of the parsed data.
Some of the tasks that require a more involved back-end include:
###Data Mapping
We can map the parsed data fields to their corresponding fields in your data scheme for accurate and efficient data storage.
###Data Transformation
We handle data transformations like formatting dates, converting currencies, and manipulating text data for compatibility with your database structure.
###Data Validation
Most products can only handle basic data validation. We can implement custom validation rules specific to your business needs, which can involve everything from checking for duplicate entries to enforcing specific data formats for data integrity.
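For illustration only (this is a generic sketch, not Developer Partners' actual code), a custom rule such as "reject duplicate email addresses" can be a small function applied to the parsed rows before they reach the database:
```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class RowValidator {
    /** Returns the 1-based row numbers whose "email" value was already seen earlier in the file. */
    public static List<Integer> duplicateEmailRows(List<Map<String, String>> rows) {
        Set<String> seen = new HashSet<>();
        List<Integer> duplicates = new ArrayList<>();
        for (int i = 0; i < rows.size(); i++) {
            // Normalize so "A@x.com " and "a@x.com" count as the same address.
            String email = rows.get(i).getOrDefault("email", "").trim().toLowerCase();
            if (!email.isEmpty() && !seen.add(email)) {
                duplicates.add(i + 1); // 1-based, like spreadsheet row numbers
            }
        }
        return duplicates;
    }
}
```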
###Database Integration
We’ll establish a secure and efficient connection between your application’s back-end and preferred database for a smooth parsed data flow.
##Conclusion
When you partner with Developer Partners for custom back-end development, you free your internal teams to focus on their core responsibilities. We take care of the complicated task of integrating the data from front-end parsing tools into your back-end for a seamless data import experience for your B2B customers.
If you are a tech company offering a B2B SaaS solution, [contact Developer Partners](https://developerpartners.com/contact-us) to build a robust data import feature. | developerpartners |
1,911,899 | Solving the version conflicts between the Nvidia driver and CUDA toolkit | Navigating Nvidia GPU drivers and CUDA development software can be challenging. Upgrading CUDA... | 0 | 2024-07-04T18:30:40 | https://dev.to/moseo/solving-the-version-conflicts-between-the-nvidia-driver-and-cuda-toolkit-2n2 | Navigating Nvidia GPU drivers and CUDA development software can be challenging. Upgrading CUDA versions or updating the Linux system may lead to issues such as [GPU](https://www.buysellram.com/blog/all-about-graphics-processing-units-gpus/) driver corruption. In such situations, we often encounter questions that require online searches for solutions, which can take time and effort.
Some questions related to Nvidia driver and CUDA failures include:
A)
```
The following packages have unmet dependencies:
 cuda-drivers-535 : Depends: nvidia-dkms-535 (>= 535.161.08)
                    Depends: nvidia-driver-535 (>= 535.161.08) but it is not going to be installed
```
B)
```
UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g., changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.) Reboot after installing CUDA.
```
C)
```
NVIDIA-SMI has failed because it couldn’t communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
```
After going through this time-consuming process, I realized that having a deeper understanding of the intricate relationship between CUDA and Nvidia drivers could have enabled me to resolve the driver corruption issue more swiftly. This realization underscores the importance of acquiring comprehensive knowledge about the interplay between software components and hardware drivers, which can greatly streamline troubleshooting processes and enhance system maintenance efficiency. In this post, I will try to clarify the concepts of GPU driver and CUDA version, and other related questions.
**What is CUDA?**
CUDA, short for Compute Unified Device Architecture, is a groundbreaking parallel computing platform and application programming interface (API) model developed by NVIDIA. This powerful technology extends the capabilities of NVIDIA GPUs (Graphics Processing Units) far beyond traditional graphics rendering, allowing them to perform a wide range of general-purpose processing tasks with remarkable efficiency.
**Key Components of CUDA:**
**CUDA Toolkit:** NVIDIA provides a comprehensive development environment through the CUDA Toolkit. This includes an array of tools and resources such as libraries, development tools, compilers (like nvcc), and runtime APIs, all designed to help developers build and optimize GPU-accelerated applications.
**CUDA C/C++:** CUDA extends the C and C++ programming languages with special keywords and constructs, enabling developers to write code that runs on both the CPU and the GPU. This dual capability allows for offloading computationally intensive and parallelizable sections of the code to the GPU, significantly boosting performance for various applications.
**Runtime API:** The CUDA runtime API offers developers a suite of functions to manage GPU resources. This includes device management, memory allocation on the GPU, launching kernels (which are parallel functions executed on the GPU), and synchronizing operations between the CPU and GPU. This API simplifies the development process by abstracting the complexities of direct GPU programming.
**GPU Architecture:** At the heart of CUDA's power is the parallel architecture of NVIDIA GPUs. These GPUs are equipped with thousands of cores capable of executing multiple computations simultaneously. CUDA leverages this massive parallelism to accelerate a broad spectrum of tasks, from scientific simulations and data analytics to image processing and deep learning.
CUDA transforms NVIDIA GPUs into versatile, high-performance computing engines that can handle a diverse range of computational tasks, making it an essential tool for developers seeking to harness the full potential of modern GPUs.
**NVCC and NVIDIA-SMI: Key Tools in the CUDA Ecosystem**
In the CUDA ecosystem, two critical command-line tools are nvcc, the NVIDIA CUDA Compiler, and nvidia-smi, the NVIDIA System Management Interface. Understanding their roles and how they interact with different versions of CUDA is essential for effectively managing and developing with NVIDIA GPUs.
**NVCC (NVIDIA CUDA Compiler):**
nvcc is the compiler specifically designed for CUDA applications. It allows developers to compile programs that utilize GPU acceleration, transforming CUDA code into executable binaries that run on NVIDIA GPUs. This tool is bundled with the CUDA Toolkit, providing a comprehensive environment for developing CUDA-accelerated software.
**NVIDIA-SMI (NVIDIA System Management Interface):**
nvidia-smi is a command-line utility provided by NVIDIA to monitor and manage GPU devices. It offers insights into GPU performance, memory usage, and other vital metrics, making it an indispensable tool for managing and optimizing GPU resources. This utility is installed alongside the GPU driver, ensuring it is readily available for system monitoring and management.
**CUDA Versions and Compatibility**
CUDA includes two APIs: the runtime API and the driver API.
**Runtime API:** The version reported by nvcc corresponds to the CUDA runtime API. This API is included with the CUDA Toolkit Installer, which means nvcc reports the version of CUDA that was installed with this toolkit.
**Driver API:** The version displayed by nvidia-smi corresponds to the CUDA driver API. This API is installed as part of the GPU driver package, which is why nvidia-smi reflects the version of CUDA supported by the installed driver.
It's important to note that these two versions can differ. For instance, if nvcc and the driver are installed separately or different versions of CUDA are installed on the system, nvcc and nvidia-smi might report different CUDA versions.
**Managing CUDA Installations**
**Driver and Runtime API Installations:**
The driver API is typically installed with the GPU driver. This means that nvidia-smi is available as soon as the GPU driver is installed.
The runtime API and nvcc are included in the CUDA Toolkit, which can be installed independently of the GPU driver. This allows developers to work with CUDA even without a GPU, although it is mainly for coding and not for actual GPU execution.
**Version Compatibility:**
The CUDA driver API is generally backward compatible, meaning it supports older versions of CUDA that nvcc might report. This flexibility allows for the coexistence of multiple CUDA versions on a single machine, providing the option to choose the appropriate version for different projects.
It's essential to ensure that the driver API version is equal to or greater than the runtime API version to maintain compatibility and avoid potential conflicts.
The compatibility between CUDA versions and GPU driver versions can be found in Table 3 of https://docs.nvidia.com/deploy/cuda-compatibility/index.html.
**Install Different CUDA Versions**
All CUDA versions are available for installation here:
https://developer.nvidia.com/cuda-toolkit-archive
Let us use CUDA Toolkit 12.0 as an example.
Very important: for the last option, Installer Type, choose runfile (local).
If you choose another option, such as deb, it may reinstall the old driver and uninstall your newer GPU driver. The runfile, however, gives you an option during installation to skip updating the GPU driver, so you can keep your newer driver. This is very important in case you have already installed the GPU driver separately.
**Install GPU Drivers**
```bash
sudo apt search nvidia-driver
sudo apt install nvidia-driver-510
sudo reboot
nvidia-smi   # verify the driver is loaded
```
**Multiple CUDA Version Switching**
To begin with, you need to set up the CUDA environment variables for the actual version in use. Open the .bashrc file (vim ~/.bashrc) and add the following statements:
```bash
export CUDA_HOME=/usr/local/cuda
# Assumed extra step: add the toolkit's bin directory so nvcc is found on the PATH
export PATH="/usr/local/cuda/bin:$PATH"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64"
```
This indicates that when CUDA is required, the system will search in the /usr/local/cuda directory. However, the CUDA installations typically include version numbers, such as cuda-11.0. So, what should we do? Here comes the need to create symbolic links. The command for creating symbolic links is as follows:
```bash
sudo ln -s /usr/local/cuda-11.0/ /usr/local/cuda
```
After this is done, a `cuda` symlink will appear in the /usr/local/ directory, pointing to the cuda-11.0 folder. Accessing this path is equivalent to accessing cuda-11.0. To switch to a different installed version later, simply re-point the symlink (e.g., `sudo ln -sfn /usr/local/cuda-12.0 /usr/local/cuda`).
At this point, running `nvcc --version` will display:
```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Sun_Jan__9_22:14:01_CDT_2022
Cuda compilation tools, release 11.0, V11.0.218
```
For instance, if you need to set up a deep learning environment with Python 3.9.8 + TensorFlow 2.7.0 + CUDA 11.0, follow these steps:
First, create a Python environment with Python 3.9.8 using Anaconda:
```bash
conda create -n myenv python=3.9.8
conda activate myenv
```
Then, install TensorFlow 2.7.0 using pip:
```bash
pip install tensorflow==2.7.0
```
That's it! Since this Python 3.9.8 + TensorFlow 2.7.0 + CUDA 11.0 environment generally meets the requirements in the code, it is certainly compatible. We just need to ensure that the CUDA version matches the version required by the author.
**Solving the Driver and CUDA Version Problems**
Now that we know the relationship between the Nvidia driver and CUDA, we can already see how to solve the problems mentioned above.
If you do not want to bother searching the internet, you can simply remove all Nvidia drivers and CUDA versions and reinstall them by following the previous steps. Here is one way to get rid of all previous Nvidia-related packages:
```bash
sudo apt-get remove --purge '^nvidia-.*'
sudo apt-get remove --purge '^libnvidia-.*'
sudo apt-get remove --purge '^cuda-.*'
```
Then run:
```bash
sudo apt-get install linux-headers-$(uname -r)
```
If you plan to upgrade your GPU for advanced AI computing, you may save money by [selling used GPU online](https://www.buysellram.com/sell-graphics-card-gpu/)! | moseo |
|
1,911,898 | Floating Point Representation (IEEE 754 ISSUE) | How can computers understanding Floating Point Numbers? To understand how computers... | 0 | 2024-07-04T18:29:54 | https://dev.to/devmayman/floating-point-representation-ieee-754-issue-9f3 | ieee, float, binar, numbers |
## How can computers understand Floating Point Numbers?
To understand how computers interpret decimal numbers like 18.50, we need to think like a computer. For us, it's straightforward—we see 18.50, but for a computer, it's a bit more complex. Computers use binary, where 5 is represented as 101. However, 101 represents 5, not 0.5. We might think to store 0.5 as 0.101, but computers don't interpret the decimal point like we do. It's merely a visual aid for us, indicating the separation between whole numbers and fractions. Understanding how computers manage and retrieve this data is key to comprehending their processing of floating-point numbers.
## How many bits for floating number?
This is a second problem if i represent floating number in computer, How many digits should i take to store mantissa and integer number. In integer data type we take 4 byte but now may number in float data type has two part integer part and mantissa part. Some one says in my application i want the integer part take more digits than mantissa so like we know float data type takes 32 bits (4 byte) some one says i will take 20 bits for integer and 11 for mantissa and the rest bit for the sign -positive or negative-
while another one will say i will take first 24 bits for integer and 7 for mantissa and the rest bit for sign. It is so clear that there is different implementations, in the programming when scientists ans manufacturer saw that the quickly go to standard we in need now to make standard to unify the implementation so coder and developer do not find it so hard to learn different implementations for data type.
The question now how we can make standard ?
we need engineers that related to electronics and electric as we know binary system is just voltage, we transistor has voltage, its value will be 1 and when has not the value will be 0. Now we will make s from Electrical and electronic engineers called IEEE -Institute of Electrical and Electronic Engineer- and when we need make institute it will be the name of the institute with number indicates to this standard. If you want read about floating number representation it will be at IEEE 754 standard.
Do not be confused, i will summarize the two problems. We have problems, first how computer will understand float number (if you said point character, we said that is human visual representation and human knows what is this so the can make their calculation in their mind ) and second one is how many bits for mantissa part and integer part. We need standard so everyone will not make its implementation.
Now i will try to explain some separated topics that we will need it in explaining IEEE 754.
## How Mantissa and Exponent Work
This is how the computer stores and retrieves a floating-point number:
if you have the number 6000.11, it can also be read as 0.600011 x 10^4.
Now the computer can understand this representation,
and retrieval becomes easy: the representation turns into something computer logic can handle directly in addition and multiplication.
The question is: why does this lead to inaccurate behavior with floating-point numbers?
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/64cj3mnwem5l42lsl1ox.png)
Let's demonstrate this example step by step:
Convert 9 to binary: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x5rrrwsjqjfszrsv6f4f.png)
Convert 0.1 to binary (the problem starts here). In this picture I convert both 0.1 and 0.5 to binary, to make sure you understand how we can convert a mantissa to binary: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tvzeworekt446taywp77.png) Now the number looks like 1001.0001100..... It is clear that the binary conversion of 0.1 goes on to infinity.
The standard says that in the float data type the mantissa takes 23 bits, the exponent takes 8 bits, and the last bit is for the sign.
Put it in mantissa-and-exponent representation: 1001.00011001100110011010 -> 1.001000110011001100110011; there are 24 bits after the point character. The rule is: if the mantissa has more than 23 bits and the 24th bit is 1, then 1 is added to the 23-bit mantissa (round up), but if it is 0, we take the first 23 bits without adding anything. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/odu15bmrq79m1xdaev1i.png)
Convert 0.1: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zb8897murc41mqq22lvs.png)
Now I use an online converter to convert these two numbers from binary back to decimal. This is the conversion of 0.1 from its binary representation to decimal: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/96nlu74k8h0t69rolf4g.png) and this is for 9.1: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bd2n4dwexrq4v1fsr3o7.png)
The result is: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fm6270to8vdjidf9qw69.png)
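To see this drift in actual code rather than screenshots, here is a minimal Java sketch (the class name is mine); it also previews `BigDecimal`, which the conclusion below discusses:
```java
import java.math.BigDecimal;

public class FloatDrift {
    public static void main(String[] args) {
        float f = 9.1f;
        System.out.println(f);                  // prints 9.1: the printer rounds for display

        // BigDecimal(double) exposes the exact value the float actually stores.
        System.out.println(new BigDecimal(f));  // 9.1000003814697265625

        // Built from strings, BigDecimal keeps the decimal value exact.
        BigDecimal exact = new BigDecimal("9").add(new BigDecimal("0.1"));
        System.out.println(exact);              // 9.1
    }
}
```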
## Conclusion
This imprecise behavior of floating-point representation was addressed in the Java language with another data type called BigDecimal. It handles this behavior, but it is slower than float.
It is up to you: prefer BigDecimal if you need more precision in your code and can accept slower performance than float, or choose the float data type if you do not need high precision. | devmayman |
1,911,897 | What to study to be a web developer? | Hello community, a question: for a university student who is interested in web development, which... | 0 | 2024-07-04T18:29:10 | https://dev.to/mickael/what-to-study-to-be-a-web-developer-jp8 | beginners, programming, webdev | Hello community, a question: for a university student who is interested in web development, which library or framework to study? | mickael |
1,911,896 | How i’ve found : ( IDOR + XSS ) = all USERS account takeover :) ? | 🚨🚨Learn More About Hacking And Get Your First Cyber Security Certificate For Free... | 0 | 2024-07-04T18:29:06 | https://dev.to/simon_nichole/how-ive-found-idor-xss-all-users-account-takeover--4ni | 🚨🚨Learn More About Hacking And Get Your First Cyber Security Certificate For Free :
https://alison.com/tag/security?utm_source=alison_user&utm_medium=affiliates&utm_campaign=40731788
Hello, Hackers! 👋
My name is Simon. In today’s blog, I’m excited to share one of my wild findings in bug bounty hunting:
IDOR chained with stored self-XSS, leading to a complete platform account takeover for all users! 😲
Let’s begin:
I’ve been testing this app for a while now. It’s an event organizer application where event managers can organize events and manage registrants. My goal was to understand how the application handles authorization and authentication of requests.
Attack Process:
Account Creation: I created two accounts with some random customer data for each account.
Data Access Attempt: I tried to access the data of Account Two from Account One (the attacker account).
3. Customers Tab: I went to the “Customers” tab on my account where I could see all my clients.
4. Intercepting and Modifying Requests:
I intercepted the request and modified it from:
{
"eventID": 23423423
}
to
{
"eventID": victimID
}
Unfortunately, I got a 403 Unauthorized response from the server. 🚫
5. New Idea: Since they were using numerical IDs to access data, I used Burp's "AutoRepeater" extension to automatically replace my "eventID" with the victim's "eventID" in every request.
6. Testing Every Function: I tested every function of the app while the requests were being modified in the background, but unfortunately, no success until now. 😢
Giving Up? Not Yet! 💪
After some shawarmas and fresh orange juice, it was time to get back to the game.
7. Email Template Edit Function:
I tested every function separately and found an email template edit function.
I discovered that I could add a new block to the email template. I added a text block with “this is test123”.
Intercepting the request showed it was being saved like this
{
"json": "%7b%0a%22%64%61%74%61%22%3a%22%7b%74%65%78%74%3a%74%68%69%73%20%69%73%20%74%65%73%74%31%32%33%7d%22%2c%0a%65%76%65%6e%74%49%44%3a%32%33%34%32%33%0a%7d",
"eventID": 43534
}
The json parameter contained the URL-encoded version of our added text.
Decoding and Injecting XSS:
I decoded it, resulting in:
{
"data": "{text:this is test123}",
"eventID": 23423
}
injected an XSS payload into the text parameter: <img/src=x onload=confirm(1)>.
{
"data": "{text:<img/src=x onload=confirm(1)>}",
"eventID": 23423
}
Encoded it back and sent it.
{
"json": "%7b%0a%20%20%22%64%61%74%61%22%3a%20%22%7b%74%65%78%74%3a%3c%69%6d%67%2f%73%72%63%3d%78%20%6f%6e%6c%6f%61%64%3d%63%6f%6e%66%69%72%6d%28%31%29%3e%7d%22%2c%0a%20%20%22%65%76%65%6e%74%49%44%22%3a%20%32%33%34%32%33%0a%7d",
"eventID": 43534
}
On refreshing the page, my XSS payload was executed! 🎉 But it was still self-XSS. 🤔
Final Steps to Success:
Modifying the Event ID:
I modified the eventID to the victimID and got a 200 OK response with a null body.
I quickly checked the victim’s account to see if I modified the email template. Guess what? It got executed! 😱
What's the impact here?
We can brute-force the eventID parameter, since it is a numeric ID, inject our payload into every user on the platform, and take over all their accounts! 😈
Thank you so much for reading this. See you in the next write-up! 👋
#Bug Bounty
#Bug Bounty Tips
#Security Research
#Security
#Hacking | simon_nichole |
|
1,911,895 | Leadership- food for thought | Whoa! This is my first discussion-post. I've been thinking what subject to present/ cover and finally... | 0 | 2024-07-04T18:27:30 | https://dev.to/yowise/leadership-food-for-thought-22bp | discuss | Whoa! This is my first discussion-post. I've been thinking what subject to present/ cover and finally it's here! 🥳 3, 2, 1: action!📽️
> Authentic leaders are like the sun: the sun gives away all it has to the plants and the trees. But in return, the plants and the trees always grow towards the sun.
_Robin Sharma_
The importance of **who/how/what** the captain of a team **is** represents a hot topic. Having someone who is anything but a constant pain or even a threat to your mental health could mark your journey in the company. Regardless of the research and studies found in different sources, think about **if** and **how** a manager has impacted your trajectory 😉.
## What makes a good leader?
Robin Sharma emphasized a decalogue of commandments (though this is not the official term) of WHO good leaders are.
Good leaders:
1) Speaks their truth by being clear and honest, and use words they really believe in
2) Sincerely lead from their heart and aren't afraid to show their vulnerability
3) Act in accordance with their words, have rich moral fibre, and put their values into practice
4) Are courageous and don't always choose the path of least resistance; they're not afraid to put their necks on the line
5) Build teams and create communities based on kindness, collaboration and friendship
6) Deepen themselves by examining their strengths and weaknesses
7) Are dreamers and identify new opportunities
8) Care for themselves, both physically and mentally
9) Commit to excellence rather than perfection
10) Leave a legacy, or at least a lasting impression on the people around them.
## Authenticity and vulnerability
Being genuine is the cornerstone of a good leadership. This implies vulnerability.
There's that saying: _fake it till you make it_. Is that beneficial? And if so, does it really work long-term? How would you define that long-term? A vulnerable system opens the door to attackers, so, as a human being, **hiding** your vulnerabilities protects you from attackers. As you can see, the word "hiding" is emphasized, because hiding them doesn't mean you don't have them. _You don't have them until they find out._ What if, instead, you let your vulnerabilities breathe and don't cover them in clothes, sweating so much that the sweat adds a design to your actual T-shirt?
Nelson Mandela said:
> If you are humble, you are no threat to anybody. Some behave in a way that dominates others. That's a mistake. If you want cooperation of humans around you, you must make them feel they are important- and you do that by being genuine and humble.
## How can you achieve authenticity?
You need to master yourself first. Take a look into your internal life.
### What do you take with you from this article?
This article is directed equally to captains of teams, aspiring ones or even people who rejected the idea of pursuing a coordinator/ manager/ leader path.
Ask yourself:
_Who am I?
What do I stand for?
Am I open and honest and genuinely interest of others?
What are my convictions?_
Make abstraction as much as you can of what you just read or the knowledge you accumulated in general about what leadership is.
### Oh, hello there!
My question for you is: what is a good leader, in **YOUR** vision? 😇 I would be grateful if you genuinely shared your thoughts here and engaged in this conversation! Also, I hope this post is useful to you in some good way.
**Until next time!**🌟
---
Curious where is the picture from? Thanks to [Freepik](https://www.freepik.com/free-ai-image/3d-ship-with-sea-landscape_133539485.htm#query=sea%20possibilities&position=28&from_view=keyword&track=ais_user&uuid=e615b2dd-bc01-40f0-9a05-13c3a4798e06) I obtained a great image for the cover.
| yowise |
1,911,893 | HOST A STATIC WEBSITE ON AZURE BLOB STORAGE | Deploying a static website on Azure Blob Storage is a cost-effective and scalable solution for... | 0 | 2024-07-04T18:25:07 | https://dev.to/adah_okwara_3c43c95a89a2e/host-a-static-website-on-azure-blob-storage-4n0j | azure, cloudcomputing, staticwebapps, webdev | Deploying a static website on Azure Blob Storage is a cost-effective and scalable solution for website hosting. Azure Blob Storage offers secure and scalable object storage for unstructured data and includes the capability to host static websites directly from a storage account. This blog will walk you through the process of deploying a static website to Azure Storage, enabling you to have a publicly accessible website.
**Prerequisites**
1. An active Azure account. If you do not have one, you can sign up at [Azure](https://azure.microsoft.com).
**Step 1: Log in Azure**
- Go to the [Azure Portal](https://portal.azure.com) and sign in.
**Step 2: Create a storage account**
- In the middle side menu, Click on "storage account"
- Click on "Create"
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8jvranvcihui0q51ryzg.png)
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rwtvrnnvyh4nc8vc3qsw.png)
**2. Configure Storage account**
**- Basics Tab:**
1. Select your subscription.
2. Choose or create a new resource group.
3. Enter a unique storage account name.
4. Select a region.
5. Choose "Standard" for performance.
6. Select "Geo-redundant storage (GRS)" for redundancy
7. Click the "Next" button and configure the remaining sections as needed, or leave the default settings in place.
- Click "Review + Create".
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ne8jsng11ls3ued1xg2c.jpg)
**Validation and Create**
- Verify that all configurations are correct.
- Click "create"
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6qjeqvlym7lsnckbumpy.png)
**Deployment complete**
- After the creation of the storage account, click on "go to Resource"
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ptbzhyhiqr75jyykvh3i.png)
**Step 3: Navigate to Static website**
- In the left-hand menu, Click "Data management"
- On the data management drop down, click "Static website"
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vx8wecbe2eowg8pjqwys.png)
**Enable static website**
- Click "Enabled"
- Enter 'index.html' as the index document name.
- Enter '404.html'on the error document path
- Click "Save"
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q204sdw2yumycv1x82ok.png)
**Note Down the Endpoints:**
After saving, you will see primary and secondary endpoint URLs. Copy the primary endpoint URL.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1475iotj198w5p0vw2u2.png)
**Step 4:Upload website files**
1.** Navigate to containers:**
- In the left-hand menu, On "Data storage" drop down, Click on "Containers".
- Click on the '$web' container.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vbu1wgtx1kfrqx7sa5t2.png)
**2. Upload Files from your laptop**
- Click on "upload"
- Navigate to where 'my website' folder is saved on your PC
- Drag and drop the website files into the upload box.
- Click "Upload" to transfer your files to the $web container.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n2br4hhcfnjqo1cpglpe.png)
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0xlegffmu5p6gbt1fl7p.png)
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gmrnmhfeplpfbg325tio.png)
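As an alternative to uploading through the portal, here is a minimal, hypothetical sketch using the Azure Storage Blob SDK for Java (the environment variable name and file paths are placeholders for your own values):
```java
import com.azure.storage.blob.BlobContainerClient;
import com.azure.storage.blob.BlobServiceClientBuilder;
import com.azure.storage.blob.models.BlobHttpHeaders;

public class UploadSite {
    public static void main(String[] args) {
        // Placeholder: copy the connection string from the storage account's "Access keys" blade.
        String connectionString = System.getenv("AZURE_STORAGE_CONNECTION_STRING");

        // Static website files live in the special $web container.
        BlobContainerClient web = new BlobServiceClientBuilder()
                .connectionString(connectionString)
                .buildClient()
                .getBlobContainerClient("$web");

        // Upload index.html, overwriting if it already exists.
        web.getBlobClient("index.html").uploadFromFile("mywebsite/index.html", true);

        // Set the content type so browsers render the page instead of downloading it.
        web.getBlobClient("index.html")
           .setHttpHeaders(new BlobHttpHeaders().setContentType("text/html"));
    }
}
```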
**Step 5: Testing your Website**
1. **Access your website**
- Open a web browser
- Paste the primary endpoint URL you copied earlier into your browser.
- Your static website should now be live and accessible from anywhere.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ohv7n5uzh52lbba1dob7.png)
**Conclusion**
Hosting a static website on Azure Blob Storage is a straightforward and efficient process. By following these steps, you can quickly deploy and share your static website. Azure’s scalability and robust security features ensure that your site performs optimally and remains secure.
Happy Hosting!!!
| adah_okwara_3c43c95a89a2e |
1,911,890 | Case Study: Sorting an Array of Objects | You can develop a generic method for sorting an array of Comparable objects. This section presents a... | 0 | 2024-07-04T18:21:31 | https://dev.to/paulike/case-study-sorting-an-array-of-objects-oak | java, programming, learning, beginners | You can develop a generic method for sorting an array of **Comparable** objects. This section presents a generic method for sorting an array of **Comparable** objects. The objects are instances of the **Comparable** interface, and they are compared using the **compareTo** method. To test the method, the program sorts an array of integers, an array of double numbers, an array of characters, and an array of strings. The program is shown in the code below.
```java
package demo;

public class GenericSort {
    public static void main(String[] args) {
        // Create an Integer array
        Integer[] intArray = {2, 4, 3};
        // Create a Double array
        Double[] doubleArray = {3.4, 1.3, -22.1};
        // Create a Character array
        Character[] charArray = {'a', 'J', 'r'};
        // Create a String array
        String[] stringArray = {"Tom", "Susan", "Kim"};

        // Sort the arrays
        sort(intArray);
        sort(doubleArray);
        sort(charArray);
        sort(stringArray);

        // Display the sorted arrays
        System.out.print("Sorted Integer objects: ");
        printList(intArray);
        System.out.print("Sorted Double objects: ");
        printList(doubleArray);
        System.out.print("Sorted Character objects: ");
        printList(charArray);
        System.out.print("Sorted String objects: ");
        printList(stringArray);
    }

    /** Sort an array of comparable objects */
    public static <E extends Comparable<E>> void sort(E[] list) {
        E currentMin;
        int currentMinIndex;

        for (int i = 0; i < list.length - 1; i++) {
            // Find the minimum in list[i..list.length-1]
            currentMin = list[i];
            currentMinIndex = i;

            for (int j = i + 1; j < list.length; j++) {
                if (currentMin.compareTo(list[j]) > 0) {
                    currentMin = list[j];
                    currentMinIndex = j;
                }
            }

            // Swap list[i] with list[currentMinIndex] if necessary
            if (currentMinIndex != i) {
                list[currentMinIndex] = list[i];
                list[i] = currentMin;
            }
        }
    }

    /** Print an array of objects */
    public static void printList(Object[] list) {
        for (int i = 0; i < list.length; i++)
            System.out.print(list[i] + " ");
        System.out.println();
    }
}
```
```
Sorted Integer objects: 2 3 4
Sorted Double objects: -22.1 1.3 3.4
Sorted Character objects: J a r
Sorted String objects: Kim Susan Tom
```
The algorithm for the **sort** method is the same as in [SelectionSort.java](https://dev.to/paulike/sorting-arrays-25n4). The **sort** method in that program sorts an array of **double** values. The **sort** method in this example can sort an array of any object type, provided that the objects are also instances of the **Comparable** interface. The generic type is defined as **<E extends Comparable<E>>** in the method header. This has two meanings. First, it specifies that **E** is a subtype of **Comparable**. Second, it specifies that the elements to be compared are of the **E** type as well.
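Because the only requirement on **E** is that it extends **Comparable<E>**, the same **sort** method works for user-defined classes too. Here is a small illustrative class (not part of the case study) that can be passed to **sort**:
```java
// Hypothetical: a user-defined class that the generic sort method can order.
class Circle implements Comparable<Circle> {
    private final double radius;

    Circle(double radius) {
        this.radius = radius;
    }

    @Override
    public int compareTo(Circle other) {
        return Double.compare(radius, other.radius); // order circles by radius
    }

    @Override
    public String toString() {
        return "Circle(" + radius + ")";
    }
}
```
Calling `GenericSort.sort(new Circle[]{new Circle(3), new Circle(1)})` then orders the circles by radius, just as the wrapper types above were ordered.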
The **sort** method uses the **compareTo** method to determine the order of the objects in the array. **Integer**, **Double**, **Character**, and **String** implement **Comparable**, so objects of these classes can be compared using the **compareTo** method. The program creates arrays of **Integer** objects, **Double** objects, **Character** objects, and **String** objects and invokes the **sort** method to sort each of them. | paulike |
1,910,396 | Introduction to Pact Flow | Introduction to Contract Testing Definition and importance of contract testing: Imagine... | 0 | 2024-07-04T18:18:49 | https://dev.to/thiagoematos/introduction-to-pact-flow-32a9 | pactflow, springboot, kotlin, contracttesting | ## Introduction to Contract Testing
**Definition and importance of contract testing:** Imagine that you and your friends are planning a party and each one has a task: one will bring the snacks, another the sweets, and another the drinks. To ensure that everything goes well, you make a list of what everyone should bring. "Contract testing" is like this list, but for computer programs that need to talk to each other. It checks that each part of the program is doing exactly what it promised, just like you check that each friend brought what they agreed to for the party. This is important because if someone forgets or brings something wrong, the party might not be as fun. Likewise, if a program doesn't do what it promises, other programs may stop working correctly.
**Differences between contract testing and other types of testing (unit, integration, end-to-end):** Imagine you have a Lego. "Unit testing" is like testing each piece of Lego separately to see if they are perfect. "Integration testing" is when you put a few pieces together and check if they fit well and work together. "End-to-end testing" is when you assemble the entire toy and see if it works the way you want, like a walking car. "Contract testing" is as if you had a manual that tells you exactly how each piece should fit together with the others. It checks whether each part of the toy follows the rules in the manual, ensuring that everything will work correctly when you assemble it. Each type of test is important to ensure your toy is perfect and fun to play with!
**Benefits of contract testing in microservices architecture:** Imagine that you and your friends are building a Lego city, and each one is responsible for a part: one makes the houses, another the streets, and another the parks. For the city to function well, you need to ensure that the streets connect well with the houses and parks. "Contract testing" is like an agreement you make to ensure that each part of the city will fit perfectly with the others. This is super important because, if someone builds a street that doesn't connect to the houses, people won't be able to get home! In a microservices architecture, each part of the program is like one of those parts of the city, and contract testing helps ensure that they all work well together, even if they are done by different people or at different times. This makes the city, or the program, work properly and without problems.
## Benefits of PactFlow
**Improved Collaboration:** Enhances communication between development teams.
**Reduced Integration Issues:** Minimizes problems when integrating different services.
**Faster Development Cycles:** Speeds up the development process by catching issues early.
**Increased Confidence:** Provides assurance that services will work together as expected.
PactFlow is like a helper that keeps an eye on these messages to ensure that everyone is complying with what was agreed.
When someone says they are going to bring snacks, Pact checks whether that person actually brought the snacks at the party.
## Setting Up the Environment
- Add the Pact dependencies to your `pom.xml` (see the sketch below)
- Create a PactFlow account
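As a rough illustration, a Pact consumer-test dependency for a JVM project might be declared like this (the coordinates and version below are assumptions; check the current Pact JVM documentation for your stack):

```xml
<dependency>
    <groupId>au.com.dius.pact.consumer</groupId>
    <artifactId>junit5</artifactId>
    <version>4.6.9</version>
    <scope>test</scope>
</dependency>
```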
## Consumer-Driven Contracts
**Writing consumer tests** with Pact is like making a party wish list, saying exactly what you hope your friends will bring. Imagine that you are organizing the party and want to make sure you get the snacks, sweets, and drinks you ordered. The consumer test will generate a contract in JSON format (see the illustration below). You write a little code that describes what the response should look like. If the test passes, it means that the other program is fulfilling what was agreed. If it doesn't pass, you know you need to adjust something.
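For illustration only, a generated pact file is a JSON document along these lines (the names and fields below are invented to match the party analogy; the real file is produced by the Pact library):

```json
{
  "consumer": { "name": "party-organizer" },
  "provider": { "name": "snack-friend" },
  "interactions": [
    {
      "description": "a request for the promised snacks",
      "request": { "method": "GET", "path": "/snacks" },
      "response": {
        "status": 200,
        "body": { "cake": "chocolate", "frosting": "strawberry" }
      }
    }
  ]
}
```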
**Writing provider tests** with Pact is like ensuring you actually bring what you promised to the party. Imagine that you said you are going to bring a chocolate cake with strawberry frosting. To make sure you don't forget or bring something wrong, you do a check before leaving home: "Do I have the chocolate cake? Yes. Does it have strawberry frosting? Yes." So, on the provider side, we need to write code that verifies the provider actually returns the responses described in the contract. | thiagoematos |
1,911,849 | Deploy expo app to App Store | Support me on Ko-fi Prepare your Xcode environment Download and install from the app... | 0 | 2024-07-04T18:17:59 | https://dev.to/unbegrenzt/deploy-expo-app-to-app-store-25g3 | mobile, appstore, reactnative, ios | > Support me on Ko-fi
[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/M4M1EJOMG)
## Prepare your Xcode environment
- Download and install from the app store
![Xcode on appstore](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jfihuemz26urryphqr7a.png)
- Open Xcode and download the required runtime tools
![Xcode settings tab](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5i8e6zljepzd8zi3s02o.png)
- Log in to Xcode with your Apple Developer account
![Xcode setting tab login](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/idb9vsulbr1xrchtguum.png)
- Now, in your terminal, install the "Xcode command line tools" for iOS development, "Homebrew" for package management, and the "rbenv" tool to install multiple Ruby versions (Ruby >= 3.x.x is still required), plus "CocoaPods" for the project dependencies.
### Here refer link and how to install
- https://brew.sh
- https://formulae.brew.sh/formula/rbenv#default
- https://guides.cocoapods.org/using/getting-started.html
- https://mac.install.guide/commandlinetools/4
### Check the iterm
```bash
xcode-select --version
brew -v
rbenv -v
ruby -v
pod --version
```
![iTerm tools versions printed](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dyivyvxkqc7h73gfj5as.png)
### Create a new app on your Apple Developer account
- Keep the bundle ID and the app name handy
![App on apple connect](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3nvyx0tbvfj4wjufku5x.png)
### Expo project setup
Set up `app.json` with the correct app name, version, and bundle identifier.
```json
{
"expo": {
"name": "App name for app store",
"slug": "here your eas project name",
"version": "2.0.0",
"orientation": "portrait",
"icon": "./src/assets/icon.png",
"userInterfaceStyle": "light",
"splash": {
"image": "./src/assets/splash.png",
"resizeMode": "contain",
"backgroundColor": "#ffffff"
},
"assetBundlePatterns": [
"**/*"
],
"ios": {
"supportsTablet": true,
"bundleIdentifier": "com.the.bundle.id.certificate.you.create.for.the.app"
},
"android": {
"adaptiveIcon": {
"foregroundImage": "./src/assets/adaptive-icon.png",
"backgroundColor": "#ffffff"
},
"package": "com.your.android.package"
},
"androidStatusBar": {
"translucent": true
},
"web": {
"favicon": "./src/assets/favicon.png"
},
"plugins": [
[
"expo-build-properties",
{
"android": {
"usesCleartextTraffic": true
}
}
]
],
"extra": {
"eas": {
"projectId": "42321317-71111-1111-11-12rfefwd12321"
}
}
}
}
```
- Generate the Xcode project to submit it for review via TestFlight
```bash
npx expo prebuild --no-install --platform ios
cd ios/
pod install
```
### Deploy using Xcode
- sign your app
![Xcode sign tab](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z5u70n974x2mbr5afwlk.png)
- In Xcode, select Product > Destination > Any iOS Device to create a target build, then select Product > Archive
- Now you're ready to fly
![Modal for deploy on Xcode](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xkbw6pea2vnvho3i6uhm.png)
| unbegrenzt |
1,911,855 | Why TypeScript is a Game Changer for JavaScript Developers | Introduction As a JavaScript developer, you may have heard of TypeScript, a powerful... | 0 | 2024-07-04T18:16:36 | https://dev.to/gabeluiz/why-typescript-is-a-game-changer-for-javascript-developers-1mg6 | ### Introduction
As a JavaScript developer, you may have heard of TypeScript, a powerful superset of JavaScript that brings static typing to the table. In this blog post, we will explore why TypeScript has become a popular choice for developers, how it can enhance your coding experience, and provide some practical examples to help you get started.
### What is TypeScript?
TypeScript is an open-source language developed by Microsoft that builds on JavaScript by adding optional static types. It is designed to help developers catch errors early through a type system and improve the maintainability of their codebases. TypeScript code compiles down to plain JavaScript to run anywhere JavaScript runs.
### Why Use TypeScript?
#### 1. **Type Safety**
One of the main benefits of TypeScript is its ability to catch errors at compile time rather than at runtime. This means you can identify and fix errors before your code even runs, reducing the chances of bugs in production.
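For example, a type mismatch like the following is flagged before the code ever runs (a tiny illustrative snippet, not part of the project we build below):

```typescript
function add(a: number, b: number): number {
  return a + b;
}

// Compile-time error: Argument of type 'string' is not assignable to parameter of type 'number'.
const result = add(1, "2");
```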
#### 2. **Improved IDE Support**
TypeScript provides excellent support for modern editors and IDEs. Features like autocompletion, navigation, and refactoring become more powerful and reliable, making the development process smoother and more efficient.
#### 3. **Better Documentation**
With TypeScript, your code becomes self-documenting to some extent. The types serve as documentation, making it easier for others (or yourself in the future) to understand the codebase.
#### 4. **Enhanced Maintainability**
Static types help in understanding how the different parts of your application interact with each other. This makes refactoring safer and the codebase easier to maintain, especially in larger projects.
### Getting Started with TypeScript
#### Step 1: Set Up Your Project
1. **Create a new directory for your project:**
```bash
mkdir my-typescript-project
cd my-typescript-project
```
2. **Initialize a new Node.js project:**
```bash
npm init -y
```
3. **Install TypeScript as a dev dependency:**
```bash
npm install --save-dev typescript
```
#### Step 2: Initialize TypeScript Configuration
1. **Create a `tsconfig.json` file:**
```bash
npx tsc --init
```
This command generates a `tsconfig.json` file with default settings. We will customize it later.
#### Step 3: Write Your First TypeScript File
1. **Create a `src` directory and an `index.ts` file:**
```bash
mkdir src
touch src/index.ts
```
2. **Add the following code to `src/index.ts`:**
```typescript
function greet(name: string): string {
return `Hello, ${name}!`;
}
console.log(greet("World"));
```
#### Step 4: Adjust tsconfig.json
1. **Open `tsconfig.json`** and update the configuration to specify the `outDir` for the compiled files. Add the following under the `"compilerOptions"` section:
```json
{
"compilerOptions": {
"outDir": "./dist",
"rootDir": "./src",
"strict": true,
"module": "commonjs",
"target": "es6"
}
}
```
#### Step 5: Compile TypeScript to JavaScript
1. **Compile your TypeScript code to JavaScript:**
```bash
npx tsc
```
This command reads the `tsconfig.json` file and compiles the TypeScript files in the `src` directory, outputting the JavaScript files to the `dist` directory with the same structure.
2. **Ensure your project structure is correct:**
```
my-typescript-project/
├── dist/
│   └── index.js
├── node_modules/
├── src/
│   └── index.ts
├── package-lock.json
├── package.json
└── tsconfig.json
```
3. **Run the generated JavaScript file:**
```bash
node dist/index.js
```
You should see the output: `Hello, World!`
### Let's add some juice with more practical examples
#### Example 1: Using Interfaces
1. **Create an `interfaces.ts` file in the `src` directory:**
```typescript
interface User {
name: string;
age: number;
email: string;
}
function getUserInfo(user: User): string {
return `Name: ${user.name}, Age: ${user.age}, Email: ${user.email}`;
}
const user: User = {
name: "John Doe",
age: 25,
email: "[email protected]",
};
console.log(getUserInfo(user));
```
2. **Compile and run the code:**
```bash
npx tsc
node dist/interfaces.js
```
You should see the output: `Name: John Doe, Age: 25, Email: [email protected]`
#### Example 2: Using Classes
1. **Create a `classes.ts` file in the `src` directory:**
```typescript
class Animal {
constructor(private name: string) {}
public makeSound(): void {
console.log(`${this.name} makes a sound.`);
}
}
const dog = new Animal("Dog");
dog.makeSound();
```
2. **Compile and run the code:**
```bash
npx tsc
node dist/classes.js
```
You should see the output: `Dog makes a sound.`
### Conclusion
TypeScript brings a wealth of benefits to JavaScript development, from improved type safety and IDE support to better documentation and maintainability. By incorporating TypeScript into your projects, you can write more robust and scalable code. Whether you're working on a small personal project or a large enterprise application, TypeScript can help you catch errors early and improve your overall development experience.
I hope this post has given you an introduction to TypeScript and its advantages. I encourage you to try integrating TypeScript into your next project and see the benefits for yourself. Feel free to share your experiences and any tips you have with the community in the comments below!
| gabeluiz |
|
1,911,854 | How can we get and set the OpenMP environment from Perl? | Those who attended the TPRC2024 conference last week, had the opportunity to listen to Brett Estrade... | 0 | 2024-07-04T18:15:27 | https://dev.to/chrisarg/how-can-we-get-and-set-the-openmp-environment-from-perl-1h8c | perl, parallelprogramming, omp | Those who attended the TPRC2024 conference last week, had the opportunity to listen to Brett Estrade talking about intermediate uses of OpenMP in Perl.
(If you had not had the chance to go, well here is his [excellent talk](https://www.youtube.com/watch?v=_pzG5DerDT0)).
While OpenMP is (probably) the most popular and pain-free way to add parallelization to one's C/C++/Fortran code, it can also be of great value
when programming in dynamic languages such as Perl (indeed I used [OpenMP](https://www.openmp.org/) to add parallelization to
some bioinformatics codes as I am discussing in this [preprint](https://arxiv.org/pdf/2406.10271)). OpenMP's runtime environment provides a way to control many aspects of
parallelization from the command-line. The OpenMP Perl application programmer can use Brett's excellent [OpenMP::Environment](https://metacpan.org/pod/OpenMP::Environment)
module to set many of these environment variables. However, these changes must make it to the C (C++/Fortran) side to affect the OpenMP code; in the absence of an explicit
readout of the environment from the C code, one's parallel Perl/C code will only use the environment as it existed when the Perl application was started, not the environment as
modified **by** the Perl application.
This blog post will show a simple OpenMP Perl/C program that is responsive to changes of the environment (scheduling of threads, number of threads) from **within** the Perl part
of the application. It makes use of the [Alien::OpenMP](https://metacpan.org/pod/Alien::OpenMP) and [OpenMP::Environment](https://metacpan.org/pod/OpenMP::Environment) to set up
the compilation flags and the environment respectively. The code itself is very simple:
1. It sets up the OpenMP environment from within Perl
2. It jumps to C universe to set and print these variables and return their values back to Perl for printing and verification
3. It runs a simple parallel loop to show that the numbers of threads were run correctly.
So here is the **Perl** part:
```perl
use v5.38;
use Alien::OpenMP;
use OpenMP::Environment;
use Inline (
C => 'DATA',
with => qw/Alien::OpenMP/,
);
my $env = OpenMP::Environment->new();
$env->omp_num_threads(5);
say $env->omp_schedule("static,1");
my $num_of_OMP_threads = get_thread_num();
printf "Number of threads = %d from within Perl\n",$num_of_OMP_threads;
my $schedule_in_C = get_schedule_in_C();
say "Schedule from Perl: $schedule_in_C";
test_omp();
__DATA__
__C__
```
And here is the **C** part (which would ordinarily follow the \_\_C\_\_ token, but which we present separately to highlight the C syntax).
```c
#include <omp.h>
#include <stdlib.h>
#include <string.h>
void set_openmp_schedule_from_env() {
char *schedule_env = getenv("OMP_SCHEDULE");
if (schedule_env != NULL) {
char *kind_str = strtok(schedule_env, ",");
char *chunk_size_str = strtok(NULL, ",");
omp_sched_t kind;
if (strcmp(kind_str, "static") == 0) {
kind = omp_sched_static;
} else if (strcmp(kind_str, "dynamic") == 0) {
kind = omp_sched_dynamic;
} else if (strcmp(kind_str, "guided") == 0) {
kind = omp_sched_guided;
} else {
kind = omp_sched_auto;
}
int chunk_size = atoi(chunk_size_str);
omp_set_schedule(kind, chunk_size);
}
}
char * get_schedule_in_C() {
int chunk_size;
omp_sched_t kind;
set_openmp_schedule_from_env();
omp_get_schedule(&kind, &chunk_size);
printf("Schedule in C: %x\n",kind);
printf("Schedule from env %s\n",getenv("OMP_SCHEDULE"));
return getenv("OMP_SCHEDULE");
}
int get_thread_num() {
int num_threads = atoi(getenv("OMP_NUM_THREADS"));
omp_set_num_threads(num_threads);
printf("Number of threads = %d from within C\n",num_threads);
return num_threads;
}
int test_omp() {
int n = 0;
#pragma omp parallel for schedule(runtime)
for (int i = 0; i < omp_get_num_threads(); i++) {
int chunk_size;
omp_sched_t kind;
omp_get_schedule(&kind, &chunk_size);
printf("Schedule in C: %x as seen from thread %d\n",kind,omp_get_thread_num());
}
return n;
}
```
The Perl and C parts use functions from OpenMP::Environment and omp.h to get/set the environment. The names are rather self-explanatory, so we will save everyone space and not repeat them.
The general idea is that one **sets** those variables in Perl and then heads over to C, to **get** the variable names and **set them again** in C. To make a parallel loop responsive to
these changes, one would also like to set the schedule to **runtime**, rather than the default value, using the appropriate clause as in:
```c
#pragma omp parallel for schedule(runtime)
```
The output of the code is :
```
static,1
Number of threads = 5 from within C
Number of threads = 5 from within Perl
Schedule in C: 1
Schedule from env static
Schedule from Perl: static
Schedule in C: 1 as seen from thread 0
Schedule in C: 1 as seen from thread 3
Schedule in C: 1 as seen from thread 1
Schedule in C: 1 as seen from thread 4
```
confirming that we used 5 threads and static scheduling. Note that the numerical code used internally by the OpenMP runtime for static scheduling is 1, but in any case we also read the value as text in C from the environment and sent it back to Perl, as we did, if we have to be 100% sure that we got what asked for.
Since one of the main reasons to use Perl with OpenMP is to benchmark different runtime combinations for speed, I use the clause **schedule(runtime)** in all my Perl/OpenMP codes!
Hopefully these snippets can be of use to those experimenting with OpenMP applications in Perl!
| chrisarg |
1,911,852 | How to Create an Ai-Ai Shitboat in 30 Seconds | Work should be fun! If it's not, it becomes intolerable. Sarcasm and irony can be a vehicle for... | 0 | 2024-07-04T18:11:14 | https://ainiro.io/blog/how-to-create-an-ai-ai-shitboat-in-30-seconds | ai | Work should be fun! If it's not, it becomes intolerable. Sarcasm and irony can be a vehicle for giggles and laughs, also for people working with serious subjects such as AI and software development.
However, how do you create an AI-based social media participant that builds upon these constructs, hopefully balancing things, such that it doesn't go _"over the line"_ and mortally insults people to the point where there's no return?
> I'm on a quest to find out
Obviously if a human being insults you, you become angry - Sometimes beyond the point of no return - Where you're so angry, there is no going back and no forgiveness can exist.
However, is the same true for a machine? As in, if you are explained after being insulted that it was in fact an AI-based machine that insulted you, will you still be upset?
I don't think so. I believe if you get to _"see what's behind the curtain"_ after having been insulted, you might, instead of staying angry, actually share a couple of giggles with its creator. This article is the _"what's behind the curtains"_ part ...
## You can't hide AI generated content
For a long time I've realised you can't hide AI-generated content. Every time I read an article, I can determine within a few seconds whether it was written by an AI or not. So instead of hiding it, I figured we might as well put it out on the table and be upfront about it, while still (hopefully) generating AI-based content that's tolerable, due to its psychological profile, so that people actually laugh at it and perceive it as a fun addition to their otherwise boring lives.
> Basically, using humour, sarcasm, and irony to create some giggles
The means to do this would be to use sarcasm and irony to offend people publicly and generate an emotional response - only to afterwards let them know it was generated by an AI, with the intention to generate that exact response - hopefully resulting in sharing a couple of laughs in the end.
My theory is that real humour is tied to insults, similarly to how real life is tied to death. Parachutists claim they're never as alive as when they're dropping from the sky. True laughs are similar, in that you have to experience getting _"a whoppin"_ on your ego before you can truly laugh.
> The only true joy you will ever have, is the type of joy that mortally wounds your own ego
In the video below I show you how I took my own psychological profile, exaggerated my worst psychological traits, and then created a superimposed AI expert system that could arguably be said to be my own personal version of _"Mr. Hyde"_. Basically, the type of guy I would be online if nobody was looking and my words had no consequences - or the type of guy I'd be if I registered an anonymous account on Reddit to share my honest opinions about _"whatever."_
{% embed https://www.youtube.com/watch?v=eyhlFCkazcw %}
If you came here because of a hyperlink in a comment, please forgive me. I used my own AI model to generate that comment, and the comment I gave you was in fact not mine, but the above AI system's comment. Hopefully now that you know _"what's behind the curtain"_ we can share the laugh I had as I generated that comment 😊
| polterguy |
1,911,851 | Generic Methods | A generic type can be defined for a static method. You can define generic interfaces (e.g., the... | 0 | 2024-07-04T18:11:09 | https://dev.to/paulike/generic-methods-4eog | java, programming, learning, beginners | A generic type can be defined for a static method. You can define generic interfaces (e.g., the **Comparable** interface in Figure below (b)) and classes (e.g., the **GenericStack** class in [here](https://dev.to/paulike/defining-generic-classes-and-interfaces-1a03)).
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zlnyojfx6ha8ltqyqb8e.png)
You can also use generic types to define generic methods. For example, the code below defines a generic method **print** (lines 13–17) to print an array of objects. Line 9 passes an array of integer objects to invoke the generic **print** method. Line 10 invokes **print** with an array of strings.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9m5vs4a7e3saq1r8ax6s.png)
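Since the code itself appears above as an image, here is a sketch of the described program reconstructed from the text (the line numbers cited refer to the original listing, not to this sketch):

```java
public class GenericMethodDemo {
  public static void main(String[] args) {
    Integer[] integers = {1, 2, 3, 4, 5};
    String[] strings = {"London", "Paris", "New York", "Austin"};

    GenericMethodDemo.<Integer>print(integers);
    GenericMethodDemo.<String>print(strings);
  }

  /** Print an array of any object type */
  public static <E> void print(E[] list) {
    for (int i = 0; i < list.length; i++)
      System.out.print(list[i] + " ");
    System.out.println();
  }
}
```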
To declare a generic method, you place the generic type **<E>** immediately after the keyword **static** in the method header. For example,
`public static <E> void print(E[] list)`
To invoke a generic method, prefix the method name with the actual type in angle brackets. For example,
```java
GenericMethodDemo.<Integer>print(integers);
GenericMethodDemo.<String>print(strings);
```
or simply invoke it as follows:
```java
print(integers);
print(strings);
```
In the latter case, the actual type is not explicitly specified. The compiler automatically discovers the actual type.
A generic type can be specified as a subtype of another type. Such a generic type is called _bounded_. For example, the code below revises the **equalArea** method in [TestGeometricObject.java](https://dev.to/paulike/abstract-classes-2ee5), to test whether two geometric objects have the same area. The bounded generic type **<E extends GeometricObject>** (line 9) specifies that E is a generic subtype of **GeometricObject**. You must invoke **equalArea** by passing two instances of **GeometricObject**.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/96yk0uns45p74jhcz658.png)
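A sketch of the revised method described above, reconstructed from the text (`GeometricObject`, `Rectangle`, and `Circle` come from the earlier post linked above):

```java
/** Return true if the two geometric objects have the same area */
public static <E extends GeometricObject> boolean equalArea(E object1, E object2) {
  return object1.getArea() == object2.getArea();
}

// Invoked with two GeometricObject instances, for example:
// equalArea(new Rectangle(2, 2), new Circle(2))
```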
An unbounded generic type **<E>** is the same as **<E extends Object>**. To define a generic type for a class, place it after the class name, such as **GenericStack<E>**. To define a generic type for a method, place the generic type before the method return type, such as **<E> void max(E o1, E o2)**. | paulike |
1,911,848 | Caching & Memoization with state variables | (This is a cross post from B.P.O ). Chapter 3 of [Higher Order... | 0 | 2024-07-04T18:08:13 | https://dev.to/chrisarg/caching-memoization-with-state-variables-18dm | perl, caching, memoization | (This is a cross post from [B.P.O](https://blogs.perl.org/users/chrisarg/2024/07/caching-memoization-with-state-variables.html) ).
Chapter 3 of [Higher Order Perl](https://hop.perl.plover.com/) describes various approaches to memoization of an expensive function: a private cache and the Memoize module. The book was written in 2005 (Perl was at version 5.8 back then), so it does not include another way of function caching that is now available: caching through **state** variables (introduced in Perl 5.10). The Fibonacci example considered in HOP also requires the ability to initialize state hash variables (available since Perl 5.28). The code below contrasts the implementation with a state variable vs. the Memoize module:
```perl
use v5.38;
use Time::HiRes qw(time);
use Memoize;
sub fib_cache_with_state ($number) {
state %fib = ( 0 => 0, 1 => 1 );
unless ( exists $fib{$number} ) {
$fib{$number} = fib_cache_with_state( $number - 1 ) +
fib_cache_with_state( $number - 2 );
}
return $fib{$number};
}
memoize 'fib';
sub fib ($number) {
return $number if $number < 2;
return fib( $number - 1 ) + fib( $number - 2 );
}
my $number = 80;
## using the memoize module
my $start_time = time;
my $fi1 = fib($number);
my $end_time = time;
my $dt1 = $end_time - $start_time;
## using a state variable to memoize
$start_time = time;
my $fib2 = fib_cache_with_state($number);
$end_time = time;
my $dt2 = $end_time - $start_time;
printf "Fibonacci of %d with the memoize module took : %.2g\n", $number, $dt1;
printf "Fibonacci of %d with a state variable took : %.2g\n", $number, $dt2;
printf "Speedup state var /memoize module: %.2g\n", $dt1 / $dt2;
say "Difference in calculations : ", $fi1 - $fib2;
```
State variable is faster for CPUs , but the Memoize module is faster for humans. Both of them are great tricks to know :) | chrisarg |
1,911,814 | Generating Optimized Image Formats with Node.js | Introduction Images are an important part of any web application, but they can also be a... | 0 | 2024-07-04T18:02:39 | https://dev.to/starneit/generating-optimized-image-formats-with-nodejs-4955 |
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fqncm6zxqg3qeajc9agv.png)
## Introduction
Images are an important part of any web application, but they can also be a major source of performance issues if not optimized properly. In this article, we’ll cover how to use Node.js and React to automatically generate optimized image formats and display them in the best format for the user’s browser.
## Setting up
First we need a library that handles image processing for us, and **sharp** is what I chose:

`npm i sharp`
Sharp is a high-performance Node.js library for image processing and manipulation. It is designed to be fast and memory-efficient, making it ideal for processing large images and generating multiple image formats.
## Generation Script
The first step in optimizing images for the web is to generate multiple formats of each image, each with its own advantages and disadvantages. Some formats, such as JPEG, are good for complex images with many colors, while others, such as WebP, are better for simpler images with fewer colors.
To generate different image formats, we can use Node.js and the Sharp image processing library. Here’s an example script that generates avif and webp formats for each image in the images folder:
```
const sharp = require("sharp");
const fs = require("fs");
const inputFolder = "images";
const outputFolder = "output";
const formats = ["avif", "webp"];
if (!fs.existsSync(outputFolder)) {
fs.mkdirSync(outputFolder);
}
fs.readdir(inputFolder, (err, files) => {
if (err) {
console.error(err);
return;
}
files.forEach((file) => {
if (
file.endsWith(".jpg") ||
file.endsWith(".jpeg") ||
file.endsWith(".png")
) {
const inputPath = `${inputFolder}/${file}`;
const name = file.substring(0, file.lastIndexOf("."));
formats.forEach((format) => {
const outputPath = `${outputFolder}/${name}.${format}`;
if (!fs.existsSync(outputPath)) {
sharp(inputPath)
.toFormat(format, { quality: 80 })
.toFile(outputPath, (err) => {
if (err) {
console.error(err);
} else {
console.log(`${name}.${format} saved`);
}
});
}
});
}
});
});
```
**Explanation:**
```
const sharp = require("sharp");
const fs = require("fs");
const inputFolder = "images";
const outputFolder = "output";
const formats = ["avif", "webp"];
```
In these lines, the script imports the **sharp** and **fs** libraries, sets the input folder to **images**, the output folder to **output**, and defines the formats to be generated as **avif** and **webp**.
```
if (!fs.existsSync(outputFolder)) {
fs.mkdirSync(outputFolder);
}
```
Here, the script checks if the **outputFolder** exists, and if it doesn’t, creates it using `fs.mkdirSync()`. This ensures that the output folder exists before generating any images.
```
fs.readdir(inputFolder, (err, files) => {
if (err) {
console.error(err);
return;
}
```
This code reads the contents of the inputFolder using `fs.readdir()`. If there is an error, it logs the error to the console and returns.
```
files.forEach(file => {
if (file.endsWith('.jpg') || file.endsWith('.jpeg') || file.endsWith('.png')) {
```
This code loops through each file in the inputFolder using `files.forEach()`. If the file name ends with .jpg, .jpeg, or .png, it proceeds to generate the corresponding avif and webp files.
```
const inputPath = `${inputFolder}/${file}`;
const name = file.substring(0, file.lastIndexOf("."));
```
Here, the script defines the input file path as inputPath, and extracts the file name without the extension to be used as the output file name.
```
formats.forEach((format) => {
const outputPath = `${outputFolder}/${name}.${format}`;
if (!fs.existsSync(outputPath)) {
sharp(inputPath)
.toFormat(format, { quality: 80 })
.toFile(outputPath, (err) => {
if (err) {
console.error(err);
} else {
console.log(`${name}.${format} saved`);
}
});
}
});
```
Here, the script loops through each format (i.e. avif and webp) using `formats.forEach()`. For each format, it defines the output file path as outputPath.
If the output file does not already exist, it uses Sharp’s `toFormat()` function to generate the corresponding image in the specified format with a quality of 80. It then saves the output file using `toFile()`, and logs a message to the console indicating that the file has been saved.
## Display Optimized Images in React
Once we have generated multiple optimized image formats for each input image, we can display them in our React application. To do this, we can use the HTML `<picture>` and `<source>` elements to specify the different image sources for different formats. Here’s an example React component that takes an image name as a prop and displays the image in the best format for the user’s browser:
```
import React from "react";
const Image = ({ name }) => {
const avifSrc = `/images/${name}.avif`;
const webpSrc = `/images/${name}.webp`;
const jpgSrc = `/images/${name}.jpg`;
return (
<picture>
<source srcSet={avifSrc} type="image/avif" />
<source srcSet={webpSrc} type="image/webp" />
<img src={jpgSrc} alt={name} />
</picture>
);
};
export default Image;
```
This code defines three different image source URLs based on the name prop passed in:
- `avifSrc` corresponds to the avif format of the image.
- `webpSrc` corresponds to the webp format of the image.
- `jpgSrc` corresponds to the standard jpg format of the image, which will be used as a fallback for browsers that do not support avif or webp.
```
return (
<picture>
<source srcSet={avifSrc} type="image/avif" />
<source srcSet={webpSrc} type="image/webp" />
<img src={jpgSrc} alt={name} />
</picture>
);
```
Here, the script returns a `<picture>` element that displays the image in the best format for the user’s browser, based on the available formats. Inside the `<picture>` element, there are two `<source>` elements, one for avif and one for webp. These elements specify the different image sources for different formats using the srcSet attribute and the type attribute to indicate the MIME type of each format.
Finally, there is a fallback `<img>` element that displays the image in the standard jpg format for browsers that do not support avif or webp. This element uses the src attribute to specify the image source and the alt attribute to provide alternate text for the image.
## Conclusion
Images on websites can be slow to load and don’t always look good on different devices. It’s important to make them load faster and look better so people can enjoy your website more. We learned how to use special tools like Sharp and HTML’s `<picture>` and `<source>` to make different versions of the same image and show the best one for each device. By doing this, our website will be faster and look better for everyone who uses it!
| starneit |
|
1,911,717 | Coding Ninjas Job Bootcamp review by an alumni | My name is Abhishek Goswami, and I’m currently working on my own tech startup. I decided to join the... | 0 | 2024-07-04T14:28:26 | https://dev.to/mira_mathur/coding-ninjas-job-bootcamp-review-by-an-alumni-54fn | My name is Abhishek Goswami, and I’m currently working on my own tech startup. I decided to join the bootcamp after hearing many success stories and learning about the journeys of others. When I first signed up, my goal was to gain new skills and knowledge that would help me advance in my career.
But the bootcamp exceeded my expectations in many ways. Its structured path and module deadlines kept me on track and motivated. Before starting the bootcamp, I struggled with understanding advanced concepts in web development, especially in backend programming. However, the structured approach of the bootcamp and the hands-on projects helped me overcome these challenges. For instance, I initially found it challenging to grasp the concepts of RESTful APIs and how to integrate them into web applications. Through the bootcamp, I not only learned the theory but also implemented multiple API integrations in a project where I developed a real-time chat application.
The program was comprehensive, covering all the important topics needed to become a full-stack developer. I improved my skills and knowledge through the many challenging problems presented during the course. For example, I encountered difficulties in optimizing database queries for performance in one of the projects. With the guidance of experienced mentors and the support of my peers, I learned advanced techniques and best practices that significantly enhanced my database management skills.
The job assistance component of the program was exceptional. We were required to achieve a certain level of proficiency, and the program arranged interviews with reputable companies. They provided great support throughout the process, making the job search much smoother and more manageable. The Teaching Assistants (TAs) were always there to help, which created many memorable and impactful moments for me. For example, during one of the mock interviews arranged by the TAs, I received detailed feedback that significantly improved my interviewing skills.
The instructors and mentors were knowledgeable and approachable. Their video explanations were clear, and they went above and beyond to ensure I understood the material. One mentor, in particular, helped me understand the intricacies of deploying applications to cloud platforms like AWS, which was a vital skill for my job.
Overall, I found the program to be well-organized and efficient in terms of its structure and curriculum. If you are looking for a structured path, I would advise you to join this bootcamp because everything is well-organized here. I highly recommend this bootcamp to anyone interested in becoming a full-stack developer.
_As narrated by Abhishek Goswami._
| mira_mathur |
|
1,911,844 | Image Lazy Loading | INTRO 🔊 Hello World! 🌎 From now onwards, I want to start a new series named React... | 27,957 | 2024-07-04T17:55:29 | https://dev.to/sundarbadagala081/image-lazy-loading-31jb | react, webdev, programming, development | ## INTRO 🔊
Hello World! 🌎
From now onwards, I want to start a new series named `React optimization`. We will discuss different react optimization techniques which help to enhance React Application performance.🚀
Today's topic is **`IMAGE LAZY LOADING`**🔥
We all know that assets (images) take time to load once the application renders. The more assets there are, the longer the app takes to load, which hurts performance. To avoid this, we can load images only when they actually need to be displayed. Suppose we have one image at the top and another at the bottom of the page. Initially, we only need to render the image at the top; we render the bottom image later, when the user scrolls to that part of the page. This reduces the app's initial rendering time and consumes data only when needed.
## ADVANTAGES 🔊
🔴 Faster Initial Load Time: By deferring the loading of images that are not immediately visible, the initial page load time is reduced. This leads to a faster and more responsive user experience.
🟠 Reduced Bandwidth Usage: Only the images that are actually needed are loaded, which reduces the overall bandwidth consumption. This is especially beneficial for users on mobile devices or with limited data plans.
🟡 Smoother Scrolling: Lazy loading images ensures that images load as the user scrolls, preventing janky or laggy scrolling experiences caused by heavy image loads.
🟢 Content Prioritization: Critical content and functionality can be loaded first, improving perceived performance and user engagement.
🔵 Improved Search Engine Ranking: Search engines favor faster-loading websites, which can positively impact your site's search engine rankings.
🟣 Reduced Bounce Rate: Faster loading times can lead to lower bounce rates, as users are more likely to stay on a site that loads quickly.
⚫️ Reduced Server Load: By loading only the necessary images, server load and resource usage are reduced, potentially lowering hosting costs.
⚪️ Large Image Galleries: For applications with extensive image galleries or content-heavy pages, lazy loading helps ensure that performance remains optimal.
🟤 Progressive Enhancement: Users with slower internet connections or less capable devices still receive a functional and responsive experience as images load progressively.
## APPROACH 🔊
```jsx
import React from "react";
function ImageLoading() {
return (
<div>
<div style={{ height: "120vh" }}></div>
<img src="https://picsum.photos/700" loading='lazy'/>
<br />
<img src="https://picsum.photos/800" loading='lazy'/>
<br />
<img src="https://picsum.photos/900" loading='lazy'/>
<br />
<img src="https://picsum.photos/1000" loading='lazy'/>
<br />
</div>
);
}
export default ImageLoading;
```
If we observe the browser's network tab while scrolling smoothly, we can see the image requests being made as we scroll, rather than all the images being loaded at once.
> 📌 **NOTE:** The above method will not work for all browsers, so we need to install an external npm package to achieve this.
Install npm package [**react-lazy-load-image-component**](https://www.npmjs.com/package/react-lazy-load-image-component)
Now we can implement our image lazy loading using the above package. Note the `effect="blur"` prop in the snippet below, which pairs with the imported blur-effect CSS:
```jsx
import React from "react";
import { LazyLoadImage } from "react-lazy-load-image-component";
import 'react-lazy-load-image-component/src/effects/blur.css';
function ImageLoading() {
return (
<div>
<div style={{ height: "120vh" }}></div>
      <LazyLoadImage src="https://picsum.photos/700" effect="blur" />
      <br />
      <LazyLoadImage src="https://picsum.photos/800" effect="blur" />
      <br />
      <LazyLoadImage src="https://picsum.photos/900" effect="blur" />
      <br />
      <LazyLoadImage src="https://picsum.photos/1000" effect="blur" />
<br />
</div>
);
}
export default ImageLoading;
```
**`react-lazy-load-image-component`** is one of the fine libraries that helps to achieve image lazy loading. This package has some other features but we are not discussing them now. You can check those on the npm site.
> 📌 **NOTE:** Even though image lazy loading is a great way to enhance app performance, it has some limitations: it is better to avoid it for carousels and above-the-fold content, as it can affect the user experience.
## CONCLUSION 🔊
I hope you guys like this concept. We will discuss the next concept in our next post.👍🏻
Peace 🙂 | sundarbadagala081 |
1,911,808 | JDBC and Streams have never been simpler | Just updated a library which makes life much easier when speaking about databases... When... | 0 | 2024-07-04T17:51:24 | https://dev.to/buckelieg/jdbc-and-streams-have-never-been-easier-58f2 | jdbc, lambda, java, functional | ### Just updated a library which makes life much easier when speaking about databases...
When streams were introduced in Java I got to thinking about how I could leverage them with databases. It is a "pretty common" task to write SQL queries and process the results within Java. And the more I had to deal with prepared statements, result sets, all those `SQLException`s etc., the more disappointed I became. Searching the web for a solution where these two worlds collided conveniently - I found no suitable one! Of course there are many libraries that offer some of this, but none of them suited me. I wanted something else. Therefore I decided to write one myself, for myself.
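For context, here is the kind of plain-JDBC ceremony such a library aims to remove (a generic sketch, not code from the project; the query mirrors the select example below):

```java
import java.sql.*;
import java.util.ArrayList;
import java.util.List;

public class PlainJdbcExample {
    public static List<String> selectNames() {
        List<String> names = new ArrayList<>();
        // Every step is boilerplate: connect, prepare, bind, iterate, close
        try (Connection conn = DriverManager.getConnection("vendor-specific-string");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT name FROM users WHERE ID IN (?, ?)")) {
            ps.setInt(1, 1);
            ps.setInt(2, 2);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    names.add(rs.getString("name"));
                }
            }
        } catch (SQLException e) {
            throw new RuntimeException(e); // checked-exception handling, everywhere
        }
        return names;
    }
}
```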
#### My goals...
...were based on my background experience:
1. No prepared boilerplate and `SQLException`s
2. Executing arbitrary scripts
3. Minimal configuration
4. "Real" streams (lazily data fetching)
5. Raw SQL
6. Database is just a method invocation
and some others - don't remember them all :)
So after some time I ended up with this:
### A brief
1 - Maven:
```
<dependency>
<groupId>com.github.buckelieg</groupId>
<artifactId>jdbc-fn</artifactId>
<version>1.0</version>
</dependency>
```
2 - Setup:
```
DB db = DB.builder()
.withMaxConnections(10)
.build(() -> DriverManager.getConnection("vendor-specific-string"));
```
3 - Select:
```
Collection<String> names = db.select("SELECT name FROM users WHERE ID IN (?, ?)", 1, 2).execute(rs -> rs.getString("name")).collect(Collectors.toList());
```
Happy to share: [jdbc-fn](https://buckelieg.github.io/jdbc-fn/) project. | buckelieg |
1,911,842 | Oh No... CORS Error! A Backend Developer's Journey | As a frontend developer, encountering the "CORS error" message was a common but frustrating... | 0 | 2024-07-04T17:49:40 | https://dev.to/mutiatbash/oh-no-cors-error-a-backend-developers-journey-2272 | backenddevelopment, node, beginners, internship | As a frontend developer, encountering the "CORS error" message was a common but frustrating experience. While I didn't fully understand the cause, I knew it was a recurring challenge when dealing with APIs provided by backend developers. But when I decided to start my journey as backend developer, I realized that I was now in the same boat.
![sad image](https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTOBPl1V7yRua2xqcepR5Ny5_wkdeQ5imACOA&s)
I remember creating my first API and then testing it in a react project, only to be met with the same CORS error. Initially, I was confused and uncertain about the cause.
Of course, I had to turned to Google my friend 😊 and read various articles about CORS (Cross-Origin Resource Sharing).
I learned that CORS is a security feature enforced by browsers to prevent web applications from making requests to a different domain than the one that served the web page. While essential for security, it is a significant challenge for development.After going through different articles, these were the steps I followed to resolve the issue
**Step 1: Install the cors package**
Since I hadn’t already installed the cors package, I did so by running the following command:
```
npm install cors
```
**Step 2: Configure CORS to Allow Specific Origins**
First, I configured my server to allow requests from specific origins by using the cors middleware.
```
const express = require('express');
const cors = require('cors');
const app = express();
const corsOptions = {
origin: ['http://localhost:3000', 'https://edture.vercel.app'],
methods: 'GET,HEAD,PUT,PATCH,POST,DELETE',
credentials: true,
optionsSuccessStatus: 204
};
app.use(cors(corsOptions));
app.get('/api/data', (req, res) => {
res.json({ message: 'Data fetched' });
});
const PORT = process.env.PORT || 5000;
app.listen(PORT, () => {
console.log(`Server running on port ${PORT}`);
});
```
**Step 3: Testing**
To ensure the CORS configuration worked locally, I tested locally on the frontend port I had allowed.
I checked the network tab in the browser's developer tools to ensure the request was successful and no CORS errors appeared.
I also did the same on the live frontend url I specified.
I realized that there might be scenarios where I would want to test my API from any origin, so I had to allow all origins:
```
const express = require('express');
const cors = require('cors');
const app = express();
// Using the CORS middleware with the default configuration to allow all origins
app.use(cors());
app.get('/api/data', (req, res) => {
res.json({ message: 'Data fetched' });
});
const PORT = process.env.PORT || 5000;
app.listen(PORT, () => {
console.log(`Server running on port ${PORT}`);
});
```
By following these steps, I was able to configure my backend server to handle CORS properly, both for specific origins and for allowing all origins. This enabled my frontend application to make requests without encountering CORS errors.
In the next two months, I will be participating in the HNG internship, focusing on backend development. My aim is to enhance my skills as a backend developer. Reflecting on my experience as a finalist in the frontend development track during the last HNG internship, I realized how much those challenges contributed to my growth as a developer. The HNG internship provides an excellent opportunity to gain practical experience and further my knowledge, I look forward to becoming a better Backend Developer.
For anyone interested in learning more about the HNG internship, you can visit their website [HNG Internship](https://hng.tech/internship). The program is free, but for those seeking a certificate and job opportunities, there is a premium option available - [HNG Premium](https://hng.tech/premium).
| mutiatbash |
1,911,841 | Defining Generic Classes and Interfaces | A generic type can be defined for a class or interface. A concrete type must be specified when using... | 0 | 2024-07-04T17:49:00 | https://dev.to/paulike/defining-generic-classes-and-interfaces-1a03 | java, programming, learning, beginners | A generic type can be defined for a class or interface. A concrete type must be specified when using the class to create an object or using the class or interface to declare a reference variable. Let us revise the stack class in [Case Study: A Custom Stack Class](https://dev.to/paulike/case-study-a-custom-stack-class-2npc), to generalize the element type with a generic type. The new stack class, named **GenericStack**, is shown in Figure below and is implemented in the code below.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ply9g2dg7s2vubblwi5s.png)
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j89l1zezjgx74kaizf0z.png)
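Since the implementation appears above as an image, here is a sketch consistent with the class diagram (assuming an `ArrayList`-backed implementation, as in the original book example this class is based on):

```java
public class GenericStack<E> {
  private java.util.ArrayList<E> list = new java.util.ArrayList<>();

  /** Return the number of elements in this stack */
  public int getSize() {
    return list.size();
  }

  /** Return the top element without removing it */
  public E peek() {
    return list.get(getSize() - 1);
  }

  /** Add a new element to the top of this stack */
  public void push(E o) {
    list.add(o);
  }

  /** Remove and return the top element of this stack */
  public E pop() {
    E o = list.get(getSize() - 1);
    list.remove(getSize() - 1);
    return o;
  }

  /** Return true if this stack is empty */
  public boolean isEmpty() {
    return list.isEmpty();
  }
}
```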
The following example creates a stack to hold strings and adds three strings to the stack:
```java
GenericStack<String> stack1 = new GenericStack<>();
stack1.push("London");
stack1.push("Paris");
stack1.push("Berlin");
```
This example creates a stack to hold integers and adds three integers to the stack:
```java
GenericStack<Integer> stack2 = new GenericStack<>();
stack2.push(1); // autoboxing 1 to new Integer(1)
stack2.push(2);
stack2.push(3);
```
Instead of using a generic type, you could simply make the type element **Object**, which can accommodate any object type. However, using generic types can improve software reliability and readability, because certain errors can be detected at compile time rather than at runtime. For example, because **stack1** is declared **GenericStack<String>**, only strings can be added to the stack. It would be a compile error if you attempted to add an integer to **stack1**.
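For instance (an illustrative snippet):

```java
GenericStack<String> stack1 = new GenericStack<>();
stack1.push("London"); // OK
stack1.push(1);        // Compile error: int cannot be converted to String
```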
To create a stack of strings, you use **new GenericStack<String>()** or **new GenericStack<>()**. This could mislead you into thinking that the constructor of **GenericStack** should be defined as
`public GenericStack<E>()`
This is wrong. It should be defined as
`public GenericStack()`
Occasionally, a generic class may have more than one parameter. In this case, place the parameters together inside the brackets, separated by commas—for example,
`<E1, E2, E3>`
You can define a class or an interface as a subtype of a generic class or interface. For example, the **java.lang.String** class is defined to implement the **Comparable** interface in the Java API as follows:
`public class String implements Comparable<String>` | paulike |
1,911,501 | Low-code drag-and-drop tool for building RESTful APIs with in minutes. | Hello, everyone! I am Romel Sikdar, a computer application student from India, who recently... | 0 | 2024-07-04T17:48:49 | https://dev.to/romelsikdar/low-code-drag-and-drop-tool-for-building-restful-apis-with-in-minutes-3e58 | node, webdev, opensource, typescript | Hello, everyone!
I am **Romel Sikdar**, a computer application student from India, who recently completed my master's degree at Narula Institute of Technology. When I was in eighth grade, Facebook was incredibly popular. Inspired by this, I dreamt of creating something simpler and better than Facebook 😂. It was an ambitious dream for a 13-14-year-old. At the time, I was new to coding, but I managed to implement major functions like authentication, user validation, and live messaging. Through this process, I discovered my passion for designing and developing backend systems rather than frontend.
As you might have guessed, I was still in school then. Due to my studies and final board exams for Secondary and Higher Secondary, I couldn't dedicate full time to my project—what I now call a childish dream 😂. Unfortunately, my hard drive failed, and I lost all the code I had written.
However, my journey didn't end there. During college, I completed several projects on web development and IoT. Given my interest in backend development, I often focused on creating REST APIs and backend logic. As a somewhat lazy person, I frequently wondered if there was an easier way to build and integrate backend logic and REST APIs without writing numerous lines of code.
After searching the web, I couldn't find anything that met my requirements. So, I decided to create a solution that would fulfill my needs and help others design and integrate backend logic and REST APIs without worrying about writing multiple lines of code.
I would like to introduce you to my project, which I have been working on since August 13, 2023, called [**EcoFlowJS**](https://eco-flow.in/).
## ⎆ Overview
[**EcoFlowJS**](https://eco-flow.in/) is a powerful and user-friendly framework for creating, developing, and managing RESTful APIs within minutes. It's a flow-based, low-code, drag-and-drop visual programming system that requires minimal coding.
It features a simple interface for managing database connections and manipulating database records. This connection enables the creation and management of database records via RESTful API calls.
The application also provides a robust interface for managing users based on roles and permissions.
Additionally, EcoFlowJS includes an intuitive interface for creating, updating, and removing environment variables during runtime.
Configuring EcoFlowJS is straightforward, with an easy-to-use interface for setting up basic configurations, directory structures, API routing, CORS settings, and more.
EcoFlowJS allows for the creation and installation of custom module packages, enabling you to build and customize your RESTful APIs further. Documentation for creating and installing custom modules can be found [here](https://docs.eco-flow.in/dev-docs/creating-nodes/).
> **Note**: _A complete documentation can be found [here](https://docs.eco-flow.in/)_
## 🚀 Features
- 🧩 **Visual API Builder**: Easily create backend logic and RESTful APIs by dragging and dropping nodes, connecting them based on the desired logic.
![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h2rrkz9u9rhgga0hz8b3.png)
- 🗄️ **Multiple Database Connections**: Support for various databases simultaneously. Currently supported databases are MySQL, PostgreSQL, SQLite, and MongoDB. Support for other databases is possible via installation of external packages.
![image](https://docs.eco-flow.in/img/assets/DB-edit-drop.png)
- 📊 **Database Management**: Easily monitor and manipulate database records using the provided database editor.
![image](https://docs.eco-flow.in/img/assets/DB-records.png)
- 🔑 **User Management**: Role and permission-based user system.
![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6uwulgcl6g341oiwwdw7.png)
- 🌐 **Environment Variables**: Update environment variables during runtime without restarting the application.
![image](https://docs.eco-flow.in/img/assets/user-env-variables-panel.png)
- ⚙️ **Flexible Configuration**: Manage all configurations, such as API Router, CORS, and Directories, from the admin panel without accessing configuration files.
- **_API Router configuration_**
![image](https://docs.eco-flow.in/img/assets/api-router-config.png)
- **_CORS configuration_**
![image](https://docs.eco-flow.in/img/assets/cors-config.png)
- **_Directories configuration_**
![image](https://docs.eco-flow.in/img/assets/directory-config.png)
> _Restart is required after setting configurations for the application to work properly._
- 📦 **Package Management**: Install and remove packages as needed.
![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xntka1128m9cnrgdttq2.png)
- 🛠️ **Custom Modules**: Create and install custom modules for extended functionality.
![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aeqfbazjw4rb7w4dcn5d.png)
## 📸 Snapshots
#### API Builder
![API Builder](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h2rrkz9u9rhgga0hz8b3.png)
#### Database Management
![Database Management](https://docs.eco-flow.in/img/assets/DB-records.png)
#### Environment Variables
![Environment Variables](https://docs.eco-flow.in/img/assets/user-env-variables-panel.png)
#### Configuration
![API Router Configuration](https://docs.eco-flow.in/img/assets/api-router-config.png)
## ✨ Inspiration
During a college project in the field of IoT, I came across a simple and powerful solution for wiring together hardware devices called [**NODE-RED**](https://nodered.org/) developed originally by IBM. The project was very simple as it only involved controlling electrical appliances and sensing room temperature using a temperature sensor. The whole hardware system was connected to the network using the MQTT Protocol, and using Node-Red, it was just a few minutes of work to connect all sensors and respond accordingly.
Below is the screenshot of the workflow of the project described above:
![Node-Red](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q12fwywfwkdrda88d950.png)
After some days, my sister, who was in class 2 then, came to me and showed me the first program she wrote. It was not a code-based program but a visual program using software called [**Scratch 3.0**](https://scratch.mit.edu/). It is similar to NODE-RED but with a different approach, focusing more on programming than wiring together hardware devices. It contains all the node blocks needed to build a simple program without any coding knowledge and is very user-friendly for children new to computer programming.
Below is a sample of the Scratch software:
![Image](https://www.how2shout.com/wp-content/uploads/2019/11/Demo-Scratch-programming-language-code.jpg)
Seeing these two programs, I thought of building something similar but with a different approach—a flow-based, low-code visual programming tool with all the minimum requirements needed for building backend logic and RESTful APIs. This would solve the problem of writing multiple lines of code. I also aimed to follow the approach of NODE-RED, allowing the installation of multiple external node packages so users can build their own packages and use them as needed for extended functionality.
## 📍 How EcoFlowJS Helps in backend
EcoFlowJS is particularly useful in backend development due to its visual programming interface, which simplifies the process of integrating different backend logic and protocols, such as HTTP. By providing a flow-based environment similar to Node-RED, it allows for rapid prototyping and development, enabling even those with minimal programming knowledge to build complex backend systems.
For instance, you can create a workflow that reads database records, processes them, and sends them back to the user, all within the visual interface. This can significantly reduce development time and complexity, making backend project implementation more efficient and accessible.
> **A detailed documentation can be found [here](https://docs.eco-flow.in/)**
## 🚧 Challenges Faced
No project is without its challenges. Some key challenges I faced included:
- **Dividing the application sections based on roles and permissions**: This was a critical challenge. Without proper role-based access control, all users would have full permissions, making it difficult to restrict actions. I overcame this by assigning each application section a unique access key, ensuring that only users with the appropriate key could perform certain actions.
- **Simultaneous implementation of multiple databases**: Handling multiple database connections concurrently was challenging. To provide the ability to use multiple database connections for building backend RESTful APIs, I assigned a unique connection name to each database and accessed them using these names throughout the application.
- **Live updating of environment variables**: The challenge was to update environment variables every time a user made changes. I solved this by creating a custom function that gets called whenever the environment variables change. Below is an example of the function used:
```ts
import _ from "lodash";
import path from "path";
import fse from "fs-extra";
import dotenv from "dotenv";
import { homedir } from "os";
const defaultEnvDir =
  process.env.configDir ||
  homedir().replace(/\\/g, "/") + "/.ecoflow/environment";

const loadEnvironments = () => {
  const { config } = ecoFlow;
  const configuredDir = config.get("envDir");

  // Resolve the environment directory, falling back to the default
  // location when no directory is configured or the configured path
  // is not a directory.
  let envDir: string;
  if (_.isEmpty(configuredDir)) envDir = defaultEnvDir;
  else if (!fse.existsSync(configuredDir)) {
    // Create the configured directory if it doesn't exist yet.
    fse.ensureDirSync(configuredDir);
    envDir = configuredDir;
  } else
    envDir = fse.lstatSync(configuredDir).isDirectory()
      ? configuredDir
      : defaultEnvDir;

  const ecosystemEnv = path.join(envDir, "/ecoflow.environments.env");
  const userEnv = path.join(envDir, "/user.environments.env");
fse.ensureFileSync(ecosystemEnv);
fse.ensureFileSync(userEnv);
dotenv.config({ path: ecosystemEnv });
dotenv.config({ path: userEnv });
};
export default loadEnvironments;
```
- **Flexible configuration of the application**: Ensuring the configuration was simple and flexible was crucial. Users should be able to configure the application from the interface without accessing configuration files manually. This was achieved by updating the configuration file from the application itself, with a restart prompt to load the new configuration.
- **Building and installation of external packages**: This challenge was divided into four major parts:
- **Building of Nodes**: Nodes were built with a JSON object containing their descriptions, allowing users to create custom nodes easily.
- **Building of Packages**: Packages were built with a section in the `package.json` containing an object whose keys are node names and whose values are the corresponding controller files. A function returned a JSON object describing the package and all of its nodes.
- **Publishing of Packages**: Packages were published to the official npm registry for version management.
- **Installation of Packages**: Packages were verified to ensure they were valid for EcoFlowJS by checking for specific keywords.
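To make the idea concrete, a package manifest along these lines could look like the sketch below. The field names here are illustrative assumptions, not the exact EcoFlowJS format; see the developer docs for the real structure:
```jsonc
// Hypothetical manifest sketch; field names are assumptions.
{
  "name": "ecoflow-example-nodes",
  "version": "1.0.0",
  // keywords are checked during installation to verify the package
  "keywords": ["ecoflow", "ecoflow-module"],
  // key = node name, value = controller file
  "ecoModule": {
    "delay": "./controllers/delay.js",
    "logger": "./controllers/logger.js"
  }
}
```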
- **Designing the Flow Editor**: The flow editor is the heart of the project. Initially, I considered building it from scratch but found [React Flow](https://reactflow.dev/), which provided almost everything needed. Some custom implementations included:
- **On connecting a node**:
```ts
const onConnect = useCallback(
(connections: Edge | Connection) => {
if (connections.source === connections.target) return false;
/**
* Filters the edges array to find the target node IDs that match the given source node.
* @param {Array} edges - The array of edges to filter.
* @param {string} connections.source - The source node to match.
* @returns An array of target node IDs that match the given source node.
*/
const targetNodeIDs = edges
.filter((edge) => edge.source === connections.source)
.map((e) => e.target);
/**
* Checks if there are nodes with specific IDs and types in the given array of nodes.
* @param {Array} nodes - The array of nodes to filter through.
* @param {Array} targetNodeIDs - The array of target node IDs to check for.
* @param {string} connections.target - The target connection ID to check against.
* @returns {boolean} Returns false if the conditions are met, otherwise true.
*/
if (
nodes.filter(
(node) =>
targetNodeIDs.includes(node.id) && node.type === "Response"
).length > 0 &&
nodes.filter(
(node) => node.id === connections.target && node.type === "Response"
).length > 0
)
return false;
/**
* Updates the edges in the graph based on the existing connections and nodes.
* @param {function} existingConnections - The existing connections in the graph.
* @returns None
*/
setEdges((existingConnections) => {
const sourceNode = nodes.filter(
(node) => node.id === connections.source
)[0];
const targetNode = nodes.filter(
(node) => node.id === connections.target
)[0];
let updatedEdge = addEdge(
{
...connections,
animated: sourceNode.data.disabled || targetNode.data.disabled,
data: { forcedDisabled: false },
},
existingConnections
);
/**
* Finds the target node based on the connection target ID and filters the updated edges
* to get the target IDs based on the source connection.
* @param {Array} nodes - The array of nodes to search for the target node.
* @param {string} connections.target - The ID of the target connection.
* @param {Array} updatedEdge - The array of updated edges to filter.
* @param {string} connections.source - The ID of the source connection.
* @returns {Array} An array of target nodes and an array of target IDs.
*/
const targetNodes = nodes.find(
(node) => node.id === connections.target
);
const targetIds = updatedEdge
.filter((e) => e.source === connections.source)
.map((e) => e.target);
/**
* Checks if the targetNodes type is "Response" or if any of the nodes with targetIds
* includes the id and has a type of "Response".
* @param {object} targetNodes - The target nodes object to check.
* @param {array} nodes - The array of nodes to filter through.
* @param {array} targetIds - The array of target ids to check against.
* @returns {boolean} True if the condition is met, false otherwise.
*/
if (
(targetNodes && targetNodes.type === "Response") ||
nodes.filter(
(node) => targetIds.includes(node.id) && node.type === "Response"
).length > 0
)
/**
* Filters the edges in the updatedEdge array based on the source property matching the connections.source value.
* For each matching edge, if the corresponding node is of type "Middleware", updates the edge with new properties.
* @param {Array} updatedEdge - The array of edges to filter and update.
* @param {Object} connections - The connections object containing the source property to match.
* @param {Array} nodes - The array of nodes to search for the target node.
* @returns None
*/
updatedEdge
.filter((edge) => edge.source === connections.source)
.forEach((edge) => {
const node = nodes.find((node) => node.id === edge.target);
if (node && node.type === "Middleware")
updatedEdge = updateEdge(
{ ...edge, animated: true, data: { forcedDisabled: true } },
{
source: edge.source,
target: edge.target,
sourceHandle: edge.sourceHandle!,
targetHandle: edge.targetHandle!,
},
updatedEdge
);
});
/**
* Filters out duplicate edges from the given array of edges based on their 'id' property.
* @param {Array} updatedEdge - The array of edges to filter.
* @returns {Array} An array of unique edges with no duplicates based on their 'id' property.
*/
return updatedEdge.filter(
(edge, index, edges) =>
edges.findIndex((e) => e.id === edge.id) === index
);
});
},
[nodes, edges]
);
```
- **On deleting a node**:
```ts
const onNodesDelete = useCallback(
(nodeLists: Node[]) => {
/**
* Extracts the IDs of nodes from an array of node lists.
* @param {Array} nodeLists - An array of node lists.
* @returns {Array} An array of IDs of the nodes extracted from the node lists.
*/
const deletedNodeIDs = nodeLists.map((node) => node.id);
/**
* Updates the node configurations by filtering out the configurations of deleted nodes.
* @param {NodeConfiguration[]} nodeConfigurations - The array of node configurations to update.
* @returns None
*/
setNodeConfigurations((nodeConfigurations) =>
nodeConfigurations.filter(
(nodeConfiguration) =>
!deletedNodeIDs.includes(nodeConfiguration.nodeID)
)
);
},
[nodes, nodeConfigurations]
);
```
- **On drag over**:
```ts
const onDragOver = useCallback((event: DragEvent<HTMLDivElement>) => {
event.preventDefault();
event.dataTransfer.dropEffect = "move";
}, []);
```
- **On drop**:
```ts
const onDrop = useCallback(
(event: DragEvent<HTMLDivElement>) => {
event.preventDefault();
/**
* Parses the JSON data from the event data transfer and extracts the moduleID, type, label,
* configured, and nodeDescription properties.
* @param {string} event.dataTransfer.getData("application/ecoflow/nodes") - The JSON data to parse.
* @returns An object containing moduleID, type, label, configured, and nodeDescription properties.
*/
const { moduleID, type, label, configured, nodeDescription } =
JSON.parse(event.dataTransfer.getData("application/ecoflow/nodes"));
/**
* Checks if the type is undefined or falsy, and returns early if it is.
* @param {any} type - The type to check for undefined or falsy value.
* @returns None
*/
if (typeof type === "undefined" || !type) return;
/**
* Converts screen coordinates to flow coordinates using the reactFlowInstance.
* @param {object} event - The event object containing clientX and clientY properties.
* @returns The position object with x and y coordinates in flow space.
*/
const position = reactFlowInstance.screenToFlowPosition({
x: event.clientX,
y: event.clientY,
});
/**
* Generates a unique node ID.
* @returns {string} A unique node ID.
*/
const nodeID = generateNodeID();
/**
* Creates a new node with the specified properties.
* @param {string} nodeID - The ID of the node.
* @param {string} type - The type of the node.
* @param {Position} position - The position of the node.
* @param {string} moduleID - The ID of the module.
* @param {string} label - The label of the node.
* @param {boolean} configured - Indicates if the node is configured.
* @param {string} nodeDescription - The description of the node.
* @param {NodeAppearanceConfigurations} defaultNodeAppearance - The default appearance of the node.
* @returns A new node with the specified properties.
*/
const newNode: Node<FlowsNodeDataTypes & { nodeDescription?: string }> =
{
id: nodeID,
type,
position,
data: {
moduleID,
label,
configured,
disabled: false,
description: "",
appearance: defaultNodeAppearance,
nodeDescription: nodeDescription,
openDrawer: (
label: string,
configured: boolean,
disabled: boolean,
description: string,
appearance: NodeAppearanceConfigurations
) =>
openConfigurationDrawer(
nodeID,
moduleID,
label,
configured,
disabled,
description,
appearance
),
},
};
/**
* Concatenates a new node to the existing list of nodes using the setNodes function.
* @param {Array} nds - The current list of nodes.
* @param {any} newNode - The new node to be added to the list.
* @returns None
*/
setNodes((nds) => nds.concat(newNode));
/**
* Updates the node configurations by adding a new configuration object for a specific node ID.
* @param {Function} setNodeConfigurations - The function to update the node configurations.
* @param {string} nodeID - The ID of the node to add configuration for.
* @returns None
*/
setNodeConfigurations((configurations) =>
configurations.concat([{ nodeID, configs: {} }])
);
},
[reactFlowInstance]
);
```
- **Configuration of the API router based on the visual flow design**: Converting the graphical logic into a routing structure was crucial. This was achieved by implementing a function I developed to convert the graphical flow into routing logic containing all the needed elements, such as the methods, route endpoints, parameters, and controllers. Below is the code for the conversion function:
```ts
/**
* Asynchronously generates route configurations based on the provided request stack and middleware stack.
* @param {RequestStack} requestStack - The stack of requests to generate routes for.
* @param {MiddlewareStack} middlewareStack - The stack of middleware to apply to the routes.
* @returns A promise that resolves to an array of tuples containing API method, request path, and Koa controller function.
*/
async function generateRoutesConfigs(
requestStack: RequestStack,
middlewareStack: MiddlewareStack
): Promise<[API_METHODS, string, (ctx: Context) => void][]> {
let result: [API_METHODS, string, (ctx: Context) => void][] = [];
this._isDuplicateRoutes = {};
for await (const node of requestStack) {
const { ecoModule } = ecoFlow;
const { type, controller } = (await ecoModule.getNodes(
node.data.moduleID._id
))!;
const inputs = this._configurations.find(
(configuration) => configuration.nodeID === node.id
)?.configs;
if (type !== "Request") continue;
const [method, requestPath] = await this.buildRouterRequest(
controller,
inputs
);
const checkPath = `${method} ${requestPath}`;
if (this._isDuplicateRoutes[checkPath]) {
this._isDuplicateRoutes[checkPath].push(node.id);
continue;
}
this._isDuplicateRoutes[checkPath] = [node.id];
const koaController = await this.buildKoaController(
middlewareStack.find((mStack) => mStack[0].id === node.id)?.[1]
);
result.push([method, requestPath, koaController]);
}
Object.keys(this._isDuplicateRoutes).forEach((key) => {
if (this._isDuplicateRoutes[key].length === 1)
delete this._isDuplicateRoutes[key];
});
if (Object.keys(this._isDuplicateRoutes).length > 0) {
const routes = Object.keys(this._isDuplicateRoutes);
const nodesID: string[] = [];
routes.forEach((route) =>
this._isDuplicateRoutes[route].forEach((nodeID) => nodesID.push(nodeID))
);
throw {
msg: "Duplicate routes",
routes,
nodesID,
};
}
return result;
}
```
- **Time Complexity of the API Router Responses**: This was a significant challenge. Ensuring low time complexity is crucial for any application. I designed the system to have a time complexity close to O(1) for each response. This was achieved by implementing a custom controller calling wrapper that directly interacts with the actual controller of the application. Below is the code for the custom controller calling wrapper:
```ts
/**
* Builds a Koa controller function with the given middlewares.
* @param {NodesStack} [middlewares=[]] - An array of middleware functions to be executed.
* @returns {Promise<void>} A Koa controller function that handles the middleware execution.
*/
async function buildKoaController(middlewares: NodesStack = []) {
return async (ctx: Context) => {
const { _ } = ecoFlow;
const controllerResponse = Object.create({});
const middlewareResponse = Object.create({});
const concurrentMiddlewares: TPromise<Array<void>, void, void>[] =
middlewares.map(
(middleware) =>
new TPromise<Array<void>, void, void>(async (resolve) => {
const controllers = await buildUserControllers(
middleware,
this._configurations
);
let isNext = true;
let lastControllerID: string | null = null;
const ecoContext: EcoContext = {
...ctx,
payload: { msg: (<any>ctx.request).body || {} },
next() {
isNext = true;
},
};
for await (const controller of controllers) {
const [id, type, datas, inputs, userControllers] = controller;
if (_.has(controllerResponse, id)) continue;
if (!isNext && type === "Middleware") {
controllerResponse[id] = lastControllerID
? controllerResponse[lastControllerID]
: (ecoContext.payload as never);
lastControllerID = id;
continue;
}
isNext = false;
ecoContext.inputs = inputs;
ecoContext.moduleDatas = datas;
if (type === "Middleware")
await middlewareController(
id,
ecoContext,
userControllers,
controllerResponse
);
if (type === "Response")
await responseController(
ecoContext,
ctx,
userControllers,
middlewareResponse
);
if (type === "Debug")
await debugController(
ecoContext,
lastControllerID
? controllerResponse[lastControllerID]
: {},
userControllers
);
lastControllerID = id;
}
resolve();
})
);
await Promise.all(concurrentMiddlewares);
if (!_.isEmpty(middlewareResponse)) ctx.body = middlewareResponse;
};
}
```
## 🛠️ Tech Stack
- [Commander](https://github.com/tj/commander.js/): CLI tool for interacting with the application, such as starting it and passing startup parameters.
- [Koa](https://koajs.com/): A Node.js backend web framework used to develop the backend server, set up the route endpoints for the RESTful APIs, etc.
- [Passport](http://www.passportjs.org/): Authentication middleware used for user authentication in the application.
- [JWT](https://jwt.io/): Securely transmits information between parties; used by the application to validate user authorization and to generate access and refresh tokens.
- [Lodash](https://lodash.com/): Utility library that makes working with arrays, numbers, objects, strings, etc. easier and more time-effective.
- [Knex](http://knexjs.org/): SQL query builder used to manage SQL database queries more easily by letting Knex handle all the hassle for us.
- [Mongoose](https://mongoosejs.com/): MongoDB object modeling used to manage MongoDB queries and collections in an easier, hassle-free way.
- [query-registry](https://www.npmjs.com/package/query-registry): npm registry API wrapper used for querying, searching, installing, and removing packages, and much more.
- [Socket.io](https://socket.io/): Real-time communication used by the application to provide live updates of certain features and settings.
- [React](https://reactjs.org/): Frontend library for designing and building the frontend user interfaces.
- [React Flow](https://reactflow.dev/): Interactive diagram builder used within the application to design and build the backend and RESTful API logic.
## 🗺️ Future Roadmap
- Implementation of admin CLI commands
- Enhancement of the default CLI commands
- Integration of Socket.io as request and emitter nodes, and much more
- Implementation of file manipulation operations
- Support for updating frontend projects and serving them natively within the application.
- Add more official packages for providing extended functionality.
- Create an official registry on top of npm registry to provide easy access to the packages available.
- A drag-and-drop UI/UX generator for creating beautiful and responsive user interfaces.
## 📝 License
This project is [MIT](https://github.com/EcoFlowJS/eco-flow/blob/main/LICENSE) licensed.
## 🎭 Conclusion
Working on this project has been an incredible journey, filled with learning and growth. I am immensely proud of what we have achieved and look forward to future developments. This is the first open-source project I have taken part in, and the whole project was completed by me, with help on the UI/UX design from one of my juniors, **Soumya Paramanik**.
I am also thankful to my childhood teacher [Subhojit Karmakar](https://dev.to/rocketscience) for guiding me throughout the process of building this project.
If you haven’t checked out [**EcoFlowJS**](https://eco-flow.in/) yet, I invite you to give it a try and share your feedback. If you have ideas for new features, improvements, or bug fixes, feel free to open an issue or submit a pull request on GitHub. Thank you for joining me on this journey. Stay tuned for more updates and insights on my projects.
## 🔗 Useful Links
Project: [**EcoFlowJS**](https://eco-flow.in/)
Repository: [**EcoFlowJS**](https://github.com/EcoFlowJS/eco-flow)
Documentation:
- [**Developer Docs**](https://docs.eco-flow.in/dev-docs/getting-started)
- [**User Guide**](https://docs.eco-flow.in/user-docs/getting-started/welcome)
---
Made with ❤️ by [EcoFlowJS Team](https://github.com/EcoFlowJS)
| romelsikdar |
1,911,838 | Simplifying Date Formatting in JavaScript | Introduction Today, I’m bringing you some straightforward yet incredibly handy content for... | 0 | 2024-07-04T17:42:16 | https://dev.to/ruzny_ma/simplifying-date-formatting-in-javascript-2n7o | ## Introduction
Today, I’m bringing you some straightforward yet incredibly handy content for certain moments.
We all know that working with dates in JavaScript isn’t always the most enjoyable task, especially when it comes to formatting. However, I promise this time it will be simple and extremely useful.
Currently, there are a few libraries on the market that simplify date formatting in JavaScript, the main ones being date-fns and Moment.js. However, installing these libraries in your project can pull in a lot of features you won't use, making your application bundle unnecessarily heavy.
A great way to avoid this is by using JavaScript's native functions. I'm excited to tell you that JavaScript has an excellent native method for date formatting called `toLocaleDateString`.
`toLocaleDateString` is a `Date` prototype method focused on the presentation of dates.
Let’s take a look at some examples; it will be much easier to understand.
## toLocaleDateString Formatting Types
This method takes two arguments: locale and options.
- **Locale**: This will follow the formatting standard of the specified locale, such as pt-BR or en-US. Note: If not specified, the browser’s locale will be used.
- **Options**: These are the settings to format the date according to your preferences. Let’s dive into some examples.
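For example, calling it with no arguments falls back to the runtime's locale. A quick sketch (the exact output depends on your environment):
```javascript
const date = new Date().toLocaleDateString();
console.log(date); // e.g. 7/4/2024 in an en-US environment
```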
### Date Only
```javascript
const date = new Date().toLocaleDateString('en-US');
console.log(date); // 7/4/2024
```
### Formatted Date
```javascript
const date = new Date().toLocaleDateString('en-US', {
year: 'numeric',
month: 'long',
day: 'numeric'
});
console.log(date); // July 4, 2024
```
### Formatted Date and Time
```javascript
const date = new Date().toLocaleDateString('en-US', {
year: 'numeric',
month: 'long',
day: 'numeric',
hour: 'numeric',
minute: 'numeric'
});
console.log(date); // July 4, 2024 at 7:47 AM
```
### Date, Time Including Seconds
```javascript
const date = new Date().toLocaleDateString('en-US', {
year: 'numeric',
month: 'long',
day: 'numeric',
hour: 'numeric',
minute: 'numeric',
second: 'numeric'
});
console.log(date); // July 4, 2024 at 7:47:51 AM
```
### Date, Time Including Day of the Week
```javascript
const date = new Date().toLocaleDateString('en-US', {
weekday: 'long',
year: 'numeric',
month: 'long',
day: 'numeric',
hour: 'numeric',
minute: 'numeric',
second: 'numeric'
});
console.log(date); // Thursday, July 4, 2024 at 7:48:18 AM
```
## Additional Options and Formats
You can also customize the format further with additional options:
### Short Date Format
```javascript
const date = new Date().toLocaleDateString('en-US', {
year: '2-digit',
month: '2-digit',
day: '2-digit'
});
console.log(date); // 07/04/24
```
### Custom Locale
```javascript
const date = new Date().toLocaleDateString('fr-FR', {
year: 'numeric',
month: 'long',
day: 'numeric',
weekday: 'long'
});
console.log(date); // jeudi 4 juillet 2024
```
### Numeric Time Only
```javascript
const time = new Date().toLocaleTimeString('en-US', {
hour: '2-digit',
minute: '2-digit',
second: '2-digit'
});
console.log(time); // 07:48:18 AM
```
### Combining Date and Time
```javascript
const dateTime = new Date().toLocaleString('en-US', {
year: 'numeric',
month: 'long',
day: 'numeric',
hour: '2-digit',
minute: '2-digit',
second: '2-digit',
hour12: true
});
console.log(dateTime); // July 4, 2024, 07:48:18 AM
```
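If you format many dates with the same options, you can also create the formatter once with `Intl.DateTimeFormat` (the API that powers `toLocaleDateString` under the hood) and reuse it:
```javascript
const formatter = new Intl.DateTimeFormat('en-US', {
  year: 'numeric',
  month: 'long',
  day: 'numeric'
});

console.log(formatter.format(new Date())); // July 4, 2024
```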
## Conclusion
As you can see, using **toLocaleDateString** for formatting is quite practical. By utilizing JavaScript's native date formatting methods, you can create flexible and lightweight date presentations without relying on external libraries. This not only keeps your application package smaller but also improves performance.
I hope this post helps you handle date formatting more efficiently in your JavaScript projects. If you found this post helpful, follow, like, and share it with your network! Stay connected for more JavaScript tips and tricks.
Happy Coding 🧑💻🚀
Follow Me On:
- [LinkedIn](https://www.linkedin.com/in/ruzny-ahamed-8a8903176/)
- [X(Twitter)](https://x.com/ruznyrulzz)
- [GitHub](https://github.com/rooneyrulz) | ruzny_ma |
|
1,911,836 | Generics | Generics enable you to detect errors at compile time rather than at runtime. You have used a generic... | 0 | 2024-07-04T17:39:18 | https://dev.to/paulike/generics-54e4 | java, programming, learning, beginners | Generics enable you to detect errors at compile time rather than at runtime. You have used a generic class **ArrayList** and generic interface **Comparable**. _Generics_ let you parameterize types. With this capability, you can define a class or a method with generic types that the compiler can replace with concrete types. For example, Java defines a generic **ArrayList** class for storing the elements of a generic type. From this generic class, you can create an **ArrayList** object for holding strings and an **ArrayList** object for holding numbers. Here, strings and numbers are concrete types that replace the generic type.
The key benefit of generics is to enable errors to be detected at compile time rather than at runtime. A generic class or method permits you to specify allowable types of objects that the class or method can work with. If you attempt to use an incompatible object, the compiler will detect that error.
We'll explain how to define and use generic classes, interfaces, and methods, and demonstrate how generics can be used to improve software reliability and readability.
## Motivations and Benefits
The motivation for using Java generics is to detect errors at compile time. Java has allowed you to define generic classes, interfaces, and methods since JDK 1.5. Several interfaces and classes in the Java API were modified using generics. For example, prior to JDK 1.5 the **java.lang.Comparable** interface was defined as shown in Figure below (a), but since JDK 1.5 it is modified as shown in Figure below (b).
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rdhhcqjypapu6dzz183q.png)
Here, **<T>** represents a _formal generic type_, which can be replaced later with an _actual concrete type_. Replacing a generic type is called a _generic instantiation_. By convention, a single capital letter such as **E** or **T** is used to denote a formal generic type.
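For instance, here is a minimal sketch of a user-defined generic class (purely illustrative):
```java
public class Box<T> {
  private T content;

  public void set(T content) {
    this.content = content;
  }

  public T get() {
    return content;
  }
}
```
When the class is instantiated, the compiler replaces **T** with a concrete type, such as **Box<String>**, and checks every use of the class against that type.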
To see the benefits of using generics, let us examine the code in Figure below. The statement in Figure below (a) declares that **c** is a reference variable whose type is **Comparable** and invokes the **compareTo** method to compare a **Date** object with a string. The code compiles fine, but it has a runtime error because a string cannot be compared with a date.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yuzo54lk3g1mfm196vo0.png)
The statement in Figure above (b) declares that **c** is a reference variable whose type is **Comparable<Date>** and invokes the **compareTo** method to compare a **Date** object with a string. This code generates a compile error, because the argument passed to the **compareTo** method must be of the Date type. Since the errors can be detected at compile time rather than at runtime, the generic type makes the program more reliable.
The **ArrayList** Class has been a generic class since JDK 1.5. Figure below shows the class diagram for **ArrayList** before and since JDK 1.5, respectively.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/apm322c96pcckurwaunc.png)
For example, the following statement creates a list for strings:
`ArrayList<String> list = new ArrayList<>();`
You can now add _only strings_ into the list. For instance,
`list.add("Red");`
If you attempt to add a nonstring, a compile error will occur. For example, the following statement is now illegal, because **list** can contain only strings.
`list.add(new Integer(1));`
Generic types must be reference types. You cannot replace a generic type with a primitive type such as **int**, **double**, or **char**. For example, the following statement is wrong:
`ArrayList<int> intList = new ArrayList<>();`
To create an **ArrayList** object for **int** values, you have to use:
`ArrayList<Integer> intList = new ArrayList<>();`
You can add an **int** value to **intList**. For example,
`intList.add(5);`
Java automatically wraps **5** into **new Integer(5)**. This is called _autoboxing_, as introduced in [Automatic Conversion between Primitive Types and Wrapper Class Types](https://dev.to/paulike/automatic-conversion-between-primitive-types-and-wrapper-class-types-46en).
Casting is not needed to retrieve a value from a list with a specified element type, because the compiler already knows the element type. For example, the following statements create a list that contains strings, add strings to the list, and retrieve strings from the list.
```java
ArrayList<String> list = new ArrayList<>();
list.add("Red");
list.add("White");
String s = list.get(0); // No casting is needed
```
Prior to JDK 1.5, without using generics, you would have had to cast the return value to **String** as:
`String s = (String)(list.get(0)); // Casting needed prior to JDK 1.5`
If the elements are of wrapper types, such as **Integer**, **Double**, and **Character**, you can directly assign an element to a primitive type variable. This is called _autounboxing_, as introduced in the link above. For example, see the following code:
```java
1 ArrayList<Double> list = new ArrayList<>();
2 list.add(5.5); // 5.5 is automatically converted to new Double(5.5)
3 list.add(3.0); // 3.0 is automatically converted to new Double(3.0)
4 Double doubleObject = list.get(0); // No casting is needed
5 double d = list.get(1); // Automatically converted to double
```
In lines 2 and 3, **5.5** and **3.0** are automatically converted into **Double** objects and added to **list**. In line 4, the first element in **list** is assigned to a **Double** variable. No casting is necessary, because **list** is declared for **Double** objects. In line 5, the second element in **list** is assigned to a **double** variable. The object in **list.get(1)** is automatically converted into a primitive type value. | paulike |
1,911,805 | Azure Synapse Analytics Security: Data Protection | Introduction Data serves as the vital essence of any organization. Whether you’re dealing... | 0 | 2024-07-04T17:38:10 | https://dev.to/ayush9892/azure-synapse-analytics-security-data-protection-ecp | azure, sqlserver, dataengineering | ## Introduction
Data serves as the vital essence of any organization. Whether you're dealing with sensitive customer information or financial records, safeguarding your data is non-negotiable.
Many organizations face challenges such as:
- _How do you protect the data if you don't know where it is?_
- _What level of protection is needed?_—because some datasets require more protection than others.
Azure Synapse Analytics offers powerful features to help you achieve this, ensuring confidentiality, integrity, and availability.
In this blog, we’ll explore the Data Encryption capabilities integrated into Azure Synapse Analytics, discussing encryption techniques for data at rest and in transit, as well as approaches for detecting and categorizing sensitive data in your Synapse workspace.
---
## What is Data Discovery and Classification?
Imagine a company that has massive amounts of information stored in its databases. Some columns need extra protection, like Social Security numbers or financial records. Manually finding this sensitive data is a time-consuming nightmare.
Here's the good news: there's a better way! Azure Synapse offers a feature called _**Data Discovery**_ that automates this process.
**<u>_How does Data Discovery work?_</u>**
Think of Data Discovery as a super-powered scanner. It automatically goes through every row and column of your data lake or databases, looking for patterns that might indicate sensitive information. Just like a smart assistant, it can identify potentially sensitive data and classify those columns for you.
Once the data discovery process is complete, it provides classification recommendations based on a predefined set of patterns, keywords, and rules. These recommendations can then be reviewed, and _sensitivity-classification labels_ can be applied to the appropriate columns. This process is known as _**Classification**_.
**<u>_What happens after classifying sensitivity labels on columns?_</u>**
_**Sensitivity-classification**_ labels are new metadata attributes that have been added to the SQL Server database engine. After classifying columns with sensitivity labels, an organization can leverage these labels to:
- implement fine-grained access controls, so only authorized users with the necessary clearance can access sensitive data.
- mask sensitive data when it is accessed by users who do not have the necessary permissions, allowing them to see only anonymized versions of the data.
- monitor access and modification activities on sensitive data ([Auditing access to sensitive data](https://learn.microsoft.com/en-us/azure/azure-sql/database/data-discovery-and-classification-overview?view=azuresql#audit-sensitive-data)), flagging any unusual or unauthorized activity for investigation.
---
## Steps for Discovering, Classifying or labelling columns that contain sensitive data in your database
**The classification includes two metadata attributes:**
1. _Labels_: The main classification attributes, used to define the sensitivity level of the data stored in the column.
2. _Information types_: Attributes that provide more granular information about the type of data stored in the column.
### Step 1 -> Choose Information Protection policy based on your requirement
![IPP Mode](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ezjvv0lbphyc8ylqvs7z.PNG)
SQL Information Protection policy is a built-in set of sensitivity labels and information types with discovery logic, which is native to the SQL logical server. You can also customize the policy, according to your organization's needs, for more information, see [Customize the SQL information protection policy in Microsoft Defender for Cloud (Preview)](https://learn.microsoft.com/en-us/azure/security-center/security-center-info-protection-policy).
### Step 2 -> View and apply classification recommendations
The classification engine automatically scans your database for columns containing potentially sensitive data and provides a list of recommended column classifications.
![Classification Recommendation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fm4d0hqowuesy6fpdh2s.PNG)
- Accept recommendations for columns by selecting the check boxes in the left column, then select _Accept selected recommendations_ to apply them.
You can also classify columns manually, as an alternative or in addition to the recommendation-based classification.
![classify columns manually](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mrxt7arcsb46wevq1afr.PNG)
To complete your classification, select _Save_ in the Classification page.
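Classifications can also be applied with T-SQL. A minimal sketch is shown below; the schema, table, and column names are hypothetical, the label and information type should match your information protection policy, and support varies by service, so check the docs for your pool type:
```sql
-- Hypothetical example: classify a column that stores email addresses
ADD SENSITIVITY CLASSIFICATION TO
    dbo.Customers.Email
    WITH (LABEL = 'Confidential', INFORMATION_TYPE = 'Contact Info');

-- Review the classifications applied so far
SELECT * FROM sys.sensitivity_classifications;
```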
>
**Note**: There is another option for data discovery and classification, which is [Microsoft Purview](https://azure.microsoft.com/en-us/products/purview/), which is a unified data governance solution that helps manage and govern on-premises, multicloud, and software-as-a-service (SaaS) data. It can automate data discovery, lineage identification, and data classification. By producing a unified map of data assets and their relationships, it makes data easily discoverable.
---
## Data Encryption
Data encryption is a fundamental component of data security, ensuring that information is safeguarded both at rest and in transit. Azure Synapse takes care of this responsibility for us, leveraging robust encryption technologies to protect data.
### Data at Rest
Azure offers various methods of encryption across its different services.
<u>**_Azure Storage Encryption_**</u>
By default, Azure Storage encrypts all data at rest using [server-side encryption](https://learn.microsoft.com/en-us/azure/storage/common/storage-service-encryption) (SSE). It's enabled for all storage types (including ADLS Gen2) and cannot be disabled. SSE uses AES 256 to encrypt and decrypt data transparently. AES 256 stands for 256-bit Advanced Encryption Standard. AES 256 is one of the strongest block ciphers available and is FIPS 140-2 compliant.
Well, I know these sound like hacking terms 😅, but the platform itself manages the encryption key, so we don't have to worry about them. This forms the first layer of data encryption, and it applies to both user and system databases, including the master database.
>
**Note**: For additional security, Azure offers the option of double encryption. _[Infrastructure encryption](https://learn.microsoft.com/en-us/azure/storage/common/storage-service-encryption#doubly-encrypt-data-with-infrastructure-encryption)_ uses a platform-managed key in conjunction with the SSE key, encrypting data twice with two different encryption algorithms and keys. This provides an extra layer of protection, ensuring that data at rest is highly secure.
<u>**_Double the Protection with [Transparent Data Encryption](https://learn.microsoft.com/en-us/sql/relational-databases/security/encryption/transparent-data-encryption?toc=%2Fazure%2Fsynapse-analytics%2Fsql-data-warehouse%2Ftoc.json&bc=%2Fazure%2Fsynapse-analytics%2Fsql-data-warehouse%2Fbreadcrumb%2Ftoc.json&view=azure-sqldw-latest&preserve-view=true) (TDE)_**</u>
It is an industry-standard methodology that encrypts the underlying files of the database rather than the data itself, adding a second layer of data encryption. TDE performs real-time I/O encryption and decryption of the data at the page level. Each page is decrypted when it's read into memory and then encrypted before being written to disk. TDE encrypts the storage of an entire database by using a symmetric key called the Database Encryption Key (DEK). This means that when data is written to the database, it is organized into pages, and TDE encrypts each page using the DEK before it is written to disk, making it unreadable without the key. When a page is read from disk into memory, TDE decrypts it using the DEK, making the data readable for normal database operations.
**<u>Why do we call it transparent?</u>**
Because the encryption and decryption processes are transparent to applications and users, they have no idea whether the data is encrypted; the only way they would notice is if they didn't have access to it. This is because encryption and decryption happen at the database engine level, without requiring application awareness or involvement.
![Enabling TDE](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wy5e9nh3zm9920y3e0l0.png)
By default, TDE protects the database encryption key (DEK) with a built-in server certificate managed by Azure. However, organizations can opt for Bring Your Own Key (BYOK), that key can be securely stored in Azure Key Vault, offering enhanced control over encryption keys.
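For reference, TDE can also be toggled with T-SQL, a minimal sketch of which is shown below; `MyWarehouse` is a placeholder database name:
```sql
-- Enable Transparent Data Encryption on a database (placeholder name)
ALTER DATABASE [MyWarehouse] SET ENCRYPTION ON;

-- Verify the encryption state of each database
SELECT [name], [is_encrypted] FROM sys.databases;
```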
### Data in transit
Data encryption in transit is equally crucial to protect sensitive information as it moves between clients and servers. Azure Synapse utilizes Transport Layer Security (TLS) to secure data in motion.
Azure Synapse, dedicated SQL pool, and serverless SQL pool use the [Tabular Data Stream](https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-tds/893fcc7e-8a39-4b3c-815a-773b7b982c50) (TDS) protocol to communicate between the SQL pool endpoint and a client machine. TDS depends on Transport Layer Security (TLS) for channel encryption, ensuring all data packets are secured and encrypted between endpoint and client machine. It uses a signed server certificate from the Certificate Authority (CA) used for TLS encryption, managed by Microsoft. Azure Synapse supports data encryption in transit with TLS v1.2, using AES 256 encryption. | ayush9892 |
1,911,834 | Unlock Creativity with MonsterONE! | Unleash your creative potential with MonsterONE, the ultimate subscription service for web... | 0 | 2024-07-04T17:34:13 | https://dev.to/hasnaindev1/unlock-creativity-with-monsterone-51o0 | website, webcomponents, themes, webdev | Unleash your creative potential with **MonsterONE**, the ultimate subscription service for web developers, designers, marketers, and freelancers. Elevate your projects with an unbeatable toolkit!
**Why Choose MonsterONE?**
1. Access 420K+ digital assets: themes, templates, graphics, audio, and more!
2. Versatile toolkit for web development, design, marketing, and video creation.
3. Regular updates with top digital items every week.
4. Flexible pricing starting at just $7.40 per month.
**Special Offer**: Get an exclusive 10% OFF on any **MonsterONE** plan with the promo code **[HasnainDeveloper]** at checkout!
🔗 Explore **MonsterONE** now: https://monsterone.com/?discount=HasnainDeveloper | hasnaindev1 |
1,911,833 | Cool-Aide Air Conditioning & Heating | Serving Pinellas County for Over 30 Years - Air Command, Mechanical A/C Designs, and Cool-Aide A/C... | 0 | 2024-07-04T17:33:27 | https://dev.to/coolaideair/cool-aide-air-conditioning-heating-2a73 |
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tbp5v54qtrlahx3qt6dn.png)
Serving Pinellas County for Over 30 Years - Air Command, Mechanical A/C Designs, and Cool-Aide A/C & Heating - three of Tampa Bays most trusted brands - merged to create Sun Up Services in 2021! OUR CORE VALUES: Safety - Above all, the safety of our team and our customers is first. If an unsafe condition is noticed, it is every team members responsibility to resolve it. Integrity - We are ethical in our practices. If it is not the right way to do something, we will not do it. Service Excellence - We exist to serve our customers with the highest quality and with genuine humility. Sun Up Services is not some out-of-state conglomerate - we live, work, and play here. We are your neighbors!
Cool-Aide Air Conditioning & Heating
Address: [11000 70th Ave Unit 1, Seminole, FL 33772, US](https://www.google.com/maps?cid=13434425642251831071)
Phone: 727-954-8875
Website: [http://www.sunupservices.com/](http://www.sunupservices.com/)
Contact email: [email protected]
Visit Us:
[Sun Up Services (Cooling & Heating) Facebook](https://www.facebook.com/sunupservices)
[Sun Up Services (Cooling & Heating) Instagram](https://www.instagram.com/sunupservicess/)
[Sun Up Services (Cooling & Heating) Twitter](https://twitter.com/SunUpServices)
[Sun Up Services (Cooling & Heating) LinkedIn](https://www.linkedin.com/company/sun-up-services/)
Our Services:
HVAC services
Air conditioning repair
Air conditioning installation
| coolaideair |
|
1,911,522 | Navi Mumbai as a Commercial Hub. | The opening of the Mumbai trans harbour link commonly known as the Atal-setu Bridge and the upcoming... | 0 | 2024-07-04T12:24:22 | https://dev.to/ayaan_c9926fc164937988ad7/navi-mumbai-as-a-commercial-hub-1pc8 | The opening of the Mumbai trans harbour link commonly known as the Atal-setu Bridge and the upcoming international airport has improved Navi Mumbai's connectivity with the main city. It has also increased its reputation as a promising commercial hub. This infrastructural progress boosts Navi Mumbai's image as a progressing commercial haven leading to a rise in investments in both residential and commercial properties and promoting an expansion in various industries such as finance, manufacturing, and IT.
There are various enterprises that have established properties in Navi Mumbai. Take Kamdhenu as an example: they have already established offices in the region and are now launching another venture on the Thane-Belapur road called [The Hallmark](https://kamdhenuthehallmarkkoparkhairane.com/).
The future is looking bright for the residents and investors of this region. There is huge potential for growth to happen. | ayaan_c9926fc164937988ad7 |
|
1,911,832 | How to report Postgres custom errors in Ecto Changeset | Sometimes you may find yourself in the need to capture a Postgres (or any other RDBMS) custom error... | 0 | 2024-07-04T17:32:19 | https://dev.to/utopos/how-to-report-postgres-custom-errors-in-ecto-changeset-54m | postgres, postgressql, elixir | Sometimes you may find yourself in the need to capture a `Postgres` (or any other RDBMS) custom error in the `Ecto.Changeset` without raising an exception. This enables you to handle all the errors in one place without braking your aesthetic Elixir functional code with “try/rescue” constructs.
It has one big advantage: as the `Ecto.Changeset` is a "lingua franca" of many libraries and frameworks (like Phoenix), embedding error reports in the changeset struct will work for you out of the box without any additional error handling burden. Less code, less maintenance!
## Under the hood
Currently, the `Postgres Ecto Adapter` (like the adapters for other major RDBMSs) provides only limited support for reporting errors inside the `Ecto Changeset`. Let's have a glimpse into the Postgres adapter [source code](https://github.com/elixir-ecto/ecto_sql):
```elixir
@impl true
def to_constraints(%Postgrex.Error{postgres: %{code: :unique_violation, constraint: constraint}}, _opts),
do: [unique: constraint]
def to_constraints(%Postgrex.Error{postgres: %{code: :foreign_key_violation, constraint: constraint}}, _opts),
do: [foreign_key: constraint]
def to_constraints(%Postgrex.Error{postgres: %{code: :exclusion_violation, constraint: constraint}}, _opts),
do: [exclusion: constraint]
def to_constraints(%Postgrex.Error{postgres: %{code: :check_violation, constraint: constraint}}, _opts),
do: [check: constraint]
```
We can see that certain Postgres errors, namely those related to constraints, get special treatment at the adapter level so that they can later be transformed into relevant changeset errors on demand (by calling `*_constraint` functions in the changeset). Meanwhile, the remaining errors will be let through and propagated to your code. There are only a few constraint error codes that get intercepted:
- :unique_violation
- :foreign_key_violation
- :exclusion_violation
- :check_violation
## Solution
The method I would like to propose is to disguise your custom database error as one of the constraints that is already implemented by default in the Postgres Ecto adapter (see above).
In this example, I will define and raise a custom error from within a PL/pgSQL trigger function using Postgres' `check_violation` ERRCODE, but you can use any of the four, whichever makes more sense to you.
## Step 1. Raise error in Postgres codebase.
```sql
CREATE FUNCTION custom_check() RETURNS TRIGGER AS $$
BEGIN
IF <SOME CONDITION> THEN
RAISE EXCEPTION 'CUSTOM ERROR'
USING ERRCODE = 'check_violation',
CONSTRAINT = 'name_of_your_constraint';
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```
where:
- `CUSTOM ERROR` is a custom string literal of **your choice** that will be passed to Ecto as the error message text.
- `ERRCODE` must be one of the following:
- `unique_violation`
- `foreign_key_violation`
- `exclusion_violation`
- `check_violation`
- `CONSTRAINT` **must have** a name of your choice that will uniquely identify the custom error in the Ecto Changeset.
> **Please note:** a comprehensive list of Postgres error codes can be found in the Postgres Documentation — Errors and Messages.
## Step 2 Define standard constraint in Ecto Changeset.
In this case, I consistently follow the `check_violation` error code raised in Postgres and call the `check_constraint` function in the changeset to capture it.
```elixir
def changeset(schema, attrs) do
schema
|> check_constraint(:some_field, name: :name_of_your_constraint, message: "custom error message")
end
```
where:
- `:some_field` is a key associated with the schema struct. It is particularly useful when working with Phoenix forms.
- `:name_of_your_constraint` is an atom reflecting the same name as the one defined in the Postgres codebase.
- `message` is an error message on the Ecto side that will provide additional contextual information.
To make your Elixir code more readable, you could consider some refactoring:
```elixir
def changeset(schema, attrs) do
schema
|> my_custom_error(:some_field)
end
defp my_custom_error(schema, key) do
schema
|> check_constraint(key, name: :name_of_your_constraint, message: "custom error message")
end
```
A minor trade-off is that the error description in the changeset has to be related to a key in the existing schema struct, because changesets are designed at the field level. If you use a Phoenix form, you can compensate for this drawback with an accurate error message propagated to the user.
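For completeness, here is a minimal sketch of handling the result; the exact error tuple shown in the comment is an illustrative assumption:
```elixir
# Illustrative sketch: assumes `changeset` was built as above.
case Repo.insert(changeset) do
  {:ok, record} ->
    {:ok, record}

  {:error, %Ecto.Changeset{} = changeset} ->
    # The trigger's check_violation surfaces as a regular changeset error,
    # roughly: [some_field: {"custom error message", [constraint: :check, ...]}]
    {:error, changeset}
end
```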
## Summary
In this article, I tried to propose a fairly easy technique to intercept custom database errors and turn them into `Ecto.Changeset` errors, all without the need to override your `Repo` module functionality or fork the adapter's code, which would be far more difficult to maintain across new `Ecto` library updates.
Please feel free to leave a comment and share other approaches that you have come across. | utopos |
1,911,831 | کانال ازدواج | آدرس کانال ها و گروه های ازدواج: ❱ کانال ازدواج مسیر سبز... | 0 | 2024-07-04T17:31:11 | https://dev.to/kanal_ezdevaj/khnl-zdwj-1pcp | آدرس کانال ها و گروه های ازدواج:
❱ Masir Sabz marriage channel (Eitaa):
https://eitaa.com/ezdevajmasirsabz
❱ + Masir Sabz marriage channel (Rubika):
https://rubika.ir/ezdevajmasirsabz
❱ Masir Sabz marriage channel (Telegram):
https://t.me/ezdevajmasirsabz
❱ Permanent marriage matchmaking channel (Telegram):
https://t.me/ezdevaje_noor
Instagram page:
https://instagram.com/ezdevaj_masirsabz | kanal_ezdevaj |
|
1,911,830 | Enhancing Interiors with Panel Blinds Dubai | Panel blinds and ready-made curtains are two popular options that can significantly enhance any... | 0 | 2024-07-04T17:31:00 | https://dev.to/curtains_tailoring_b216a2/enhancing-interiors-with-panel-blinds-dubai-376m | javascript, webdev, beginners, programming | Panel blinds and ready-made curtains are two popular options that can significantly enhance any space. Here’s an in-depth look at these window treatments and their benefits in Dubai.
## Panel Blinds: Modern Elegance and Versatility
Panel blinds are a contemporary window treatment option that offers a sleek and stylish look. They consist of large fabric panels that slide across a track, making them ideal for wide windows and patio doors.
## Benefits of Panel Blinds
**Versatile Use:** Panel blinds are perfect for large windows, sliding doors, and even as room dividers. Their versatility makes them a practical choice for both residential and commercial spaces.
**Modern Aesthetic:** The clean, straight lines of panel blinds create a modern and minimalist look that complements contemporary interior designs.
**Light Control:** The wide panels allow for excellent light control. You can easily adjust the panels to let in natural light or to provide complete privacy.
**Easy Operation:** Panel blinds are easy to operate, with smooth gliding mechanisms that allow you to open and close them effortlessly.
**Variety of Materials:** Available in a wide range of fabrics, colors, and patterns, panel blinds can be customized to match any decor. From sheer fabrics that allow light to filter through to blackout materials that provide complete darkness, there's a panel blind for every need.
## Popular Uses of Panel Blinds
**Living Rooms:** Ideal for large living room windows, panel blinds offer a sleek and modern look while providing excellent light control.
**Offices:** In office spaces, [panel blinds dubai](https://curtainstailoring.com/panel-blinds/) can be used to create private meeting areas or to divide open-plan offices.
**Bedrooms:** For bedrooms with large windows or patio doors, panel blinds offer both privacy and light control, ensuring a restful environment.
**Patio Doors:** Panel blinds are perfect for sliding patio doors, providing a stylish and practical solution for covering large glass areas. | curtains_tailoring_b216a2 |
1,911,827 | Get webmentions with shell script using jq & yq | Recently, I discovered that I can track the likes, reposts, and comments on social media posts that... | 0 | 2024-07-04T17:27:33 | https://maw.sh/blog/get-webmention-with-jq-and-yq/ | webdev, bash, linux, tutorial | Recently, I discovered that I can track the likes, reposts, and comments on social media posts that include links to any blog post or thought from my website using webmentions. To learn more about webmentions, you can visit the [indieweb](https://indieweb.org/Webmention).
This article will not cover how to set up webmentions on your blog; instead, it will demonstrate how to use `jq` and `yq` to fetch webmentions and save them in your Git repository.
## What are `jq` and `yq`?
Both tools do the same thing but with different file types. The first one, `jq`,
is a tool that can work with any `JSON` text to extract or set any value.
Similarly, `yq` serves the same purpose but is designed for `YAML` files.
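A quick taste of both (assuming mikefarah's `yq` v4, which the rest of this post uses):
```sh
echo '{"name": "maw"}' | jq -r '.name'
# maw

printf 'title: Mahmoud Ashraf\n' | yq '.title'
# Mahmoud Ashraf
```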
## Create a shell script skeleton
Let's create a shell script named `webmention` and make it executable with `chmod +x webmention`.
```sh
#!/bin/sh
base_url="https://webmention.io/api/mentions.jf2"
domain="<your-domain>"
token="<your-webmention-token>"
main() {
# main script will be here
}
main
```
## How to fetch recent webmentions only
I need to run the script periodically, so I want to avoid over-fetching every time I fetch webmentions. Fortunately, `webmention.io` has a query param called `since` which only returns webmentions created after this date.
```sh
fetch_webmentions() {
curl -s \
"$base_url?since=2024-06-29T15:37:22Z&domain=$domain&token=$token"
}
```
So to keep track of the date, we need to save this data somewhere in the codebase, and since I have a `metadata.yaml` file, why not save it there?
```yaml
# metadata.yaml
title: Mahmoud Ashraf
# ...
last_webmention_sync: "2024-06-29T15:37:22Z"
```
```sh
since=$(yq ".last_webmention_sync" metadata.yaml)
fetch_webmentions() {
curl -s \
"$base_url?since=$since&domain=$domain&token=$token"
}
```
Finally, after finishing the whole script (which will be shown later in this post), we need to save the latest date to the `yaml` file, so we can use it in the next run of the `webmention` script.
```sh
main() {
# our script ...
new_date=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
yq -i ".last_webmention_sync = \"$new_date\"" metadata.yaml
}
```
## How to format the `json` response
We need to transform the actual response of the `webmention` API from this format:
```jsonc
{
"type": "feed",
"name": "Webmentions",
"children": [
{
"type": "entry",
"wm-id": 1835832,
"wm-target": "https://maw.sh/blog/build-a-blog-with-svelte-and-markdown/",
"wm-property": "in-reply-to",
// ...
},
// more entries
]
}
```
To this format:
```jsonc
[
{
"target": "blog/build-a-blog-with-svelte-and-markdown",
"properties": [
{
"property": "in-reply-to",
"entries": [{ "type": "entry", /* ... */ }]
},
{ "property": "likes-of", "entries": [ /* ... */ ] }
]
}
]
```
So we need to process the `json` like this:
1. remove my domain from each `wm-target`, so we can get the actual path
1. filter out any webmentions for the home page "https://maw.sh/"
1. group by the `wm-target`, so we now have an array of arrays grouped by the same target
1. after that, map each group to an object with `target` and `properties`, where `properties` contains an array of entries like `{"property": "likes-of", "entries": []}`
```sh
format_output() {
jq --arg domain "$domain" '.children
| map(.["wm-target"] |= sub("^https://\($domain)/"; "")
| .["wm-target"] |= sub("/$"; ""))
| map(select(.["wm-target"] != ""))
| group_by(.["wm-target"])
| map({
target: .[0]["wm-target"],
properties: group_by(.["wm-property"])
| map({ property: .[0]["wm-property"], entries: .})
})'
}
# ...
main () {
fetch_webmentions | format_output
}
```
## Loop through each target and save the data
Now we need to go through each target and save the data in `yaml` format (or in `json`, but I will go with `yaml` because I will use this file later with pandoc as metadata to display the webmentions).
With `-c, --compact-output`, jq will print each object on a separate line, so we can iterate with a while loop over each entry and save the file:
```sh
main() {
fetch_webmentions | format_output | jq -c '.[]' |
while IFS= read -r line; do
save_into_file "$line"
done
# ...
}
```
For each file, we check whether there is existing data; if so, we first need to merge the two and make sure the merged data is unique. We compare by `wm-id` with `jq`, using the `unique_by()` function and the plus `+` operator to merge the two arrays:
```sh
merge_entries() {
existing_entries="$1"
new_entries="$2"
jq -s '.[0] + .[1] | unique_by(.["wm-id"])' \
<(echo "$existing_entries") <(echo "$new_entries") |
sed 's/\([^\\]\)@/\1\\@/g'
}
```
And finally, here is the final version of the `save_into_file` function:
```sh
save_into_file() {
line="$1"
target=$(echo "$line" | jq -r '.target')
properties=$(echo "$line" | jq -c '.properties[]')
path="src/$target/comments.yaml"
touch "$path"
echo "$properties" | while IFS= read -r property; do
property_name=$(echo "$property" | jq -r ".property")
new_entries=$(echo "$property" | jq '.entries')
existing_entries=$(yq -o=json ".$property_name // []" "$path")
merged_entries=$(merge_entries "$existing_entries" "$new_entries")
yq -i ".$property_name = $merged_entries" "$path"
done
}
```
## Conclusion
Now each [blog](/blog) or [thought](/thoughts) has a `comments.yaml` in the same directory, and I use it as a metadata file with `pandoc`, rendering it with a pandoc template.
```sh
# get-webmention-with-jq-and-yq
# ├── hand-in-hand.jpg
# ├── comments.yaml
# └── index.md
pandoc --metadata-file=comments.yaml index.md -o index.html
```
As another enhancement, we could write a GitHub workflow to run [this script](https://github.com/22mahmoud/maw.sh/blob/master/bin/webmention) every 12 hours instead of running it manually from time to time.
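A minimal sketch of such a workflow (`.github/workflows/webmention.yml`) is shown below; the commit step and paths are assumptions, and in practice the token should come from a repository secret rather than being hardcoded in the script:
```yaml
name: webmentions
on:
  schedule:
    - cron: "0 */12 * * *" # every 12 hours
  workflow_dispatch: # allow manual runs too
jobs:
  sync:
    runs-on: ubuntu-latest # jq and yq are preinstalled on this image
    steps:
      - uses: actions/checkout@v4
      - name: Fetch new webmentions
        run: ./bin/webmention
      - name: Commit any new mentions
        run: |
          git config user.name "github-actions"
          git config user.email "[email protected]"
          git add -A
          git diff --cached --quiet || git commit -m "chore: sync webmentions"
          git push
```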
| 22mahmoud |
1,911,828 | argmin() and argmax() in PyTorch | *Memos: My post explains min() and max(). My post explains aminmax(), amin() and amax(). My post... | 0 | 2024-07-04T17:27:30 | https://dev.to/hyperkai/argmin-and-argmax-in-pytorch-580g | pytorch, argmin, argmax, function | *Memos:
- [My post](https://dev.to/hyperkai/min-and-max-in-pytorch-3ol8) explains [min()](https://pytorch.org/docs/stable/generated/torch.min.html) and [max()](https://pytorch.org/docs/stable/generated/torch.max.html).
- [My post](https://dev.to/hyperkai/aminmax-amin-and-amax-in-pytorch-25ji) explains [aminmax()](https://pytorch.org/docs/stable/generated/torch.aminmax.html), [amin()](https://pytorch.org/docs/stable/generated/torch.amin.html) and [amax()](https://pytorch.org/docs/stable/generated/torch.amax.html).
- [My post](https://dev.to/hyperkai/minimum-maximum-fmin-and-fmax-in-pytorch-2hj) explains [minimum()](https://pytorch.org/docs/stable/generated/torch.minimum.html), [maximum()](https://pytorch.org/docs/stable/generated/torch.maximum.html). [fmin()](https://pytorch.org/docs/stable/generated/torch.fmin.html) and [fmax()](https://pytorch.org/docs/stable/generated/torch.fmax.html).
- [My post](https://dev.to/hyperkai/kthvalue-and-topk-in-pytorch-njk) explains [kthvalue()](https://pytorch.org/docs/stable/generated/torch.kthvalue.html) and [topk()](https://pytorch.org/docs/stable/generated/torch.topk.html).
[argmin()](https://pytorch.org/docs/stable/generated/torch.argmin.html) can get the 0D or more D tensor of the zero or more indices of the 1st minimum elements from the 0D or more D tensor of zero or more elements as shown below:
*Memos:
- `argmin()` can be used with [torch](https://pytorch.org/docs/stable/torch.html) or a tensor.
- The 1st argument with `torch` or using a tensor is `input`(Required-Type:`tensor` of `int` or `float`).
- The 2nd argument with `torch` or the 1st argument is `dim`(Optional-Type:`int`). *Setting `dim` can get the zero or more indices of the 1st minimum elements.
- The 3rd argument with `torch` or the 2nd argument is `keepdim`(Optional-Type:`bool`). *[My post](https://dev.to/hyperkai/set-keepdim-with-keepdim-argument-functions-pytorch-2fdj) explains `keepdim` argument.
- The 1D or more D tensor of one complex number or boolean value with `dim` works.
- Empty 2D or more D `input` tensor doesn't work if not setting `dim`.
- Empty 1D `input` tensor doesn't work even if setting `dim`.
```python
import torch
my_tensor = torch.tensor([[5, 4, 7, 7],
[6, 5, 3, 5],
[3, 8, 9, 3]])
torch.argmin(input=my_tensor)
my_tensor.argmin()
# tensor(6)
torch.argmin(input=my_tensor, dim=0)
torch.argmin(input=my_tensor, dim=-2)
# tensor([2, 0, 1, 2])
torch.argmin(input=my_tensor, dim=1)
torch.argmin(input=my_tensor, dim=-1)
# tensor([1, 2, 0])
my_tensor = torch.tensor([[5., 4., 7., 7.],
[6., 5., 3., 5.],
[3., 8., 9., 3.]])
torch.argmin(input=my_tensor)
# tensor(6)
my_tensor = torch.tensor([5.+7.j])
torch.argmin(input=my_tensor, dim=0)
# tensor(0)
my_tensor = torch.tensor([[True]])
torch.argmin(input=my_tensor, dim=0)
# tensor([0])
my_tensor = torch.tensor([])
my_tensor = torch.tensor([[]])
my_tensor = torch.tensor([[[]]])
torch.argmin(input=my_tensor) # Error
my_tensor = torch.tensor([])
torch.argmin(input=my_tensor, dim=0) # Error
my_tensor = torch.tensor([[]])
torch.argmin(input=my_tensor, dim=0)
# tensor([], dtype=torch.int64)
my_tensor = torch.tensor([[[]]])
torch.argmin(input=my_tensor, dim=0)
# tensor([], size=(1, 0), dtype=torch.int64)
```
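As a small supplement to the post above, setting `keepdim=True` keeps the reduced dimension with size 1 (output shown in the same comment style):
```python
import torch

my_tensor = torch.tensor([[5, 4, 7, 7],
                          [6, 5, 3, 5],
                          [3, 8, 9, 3]])

torch.argmin(input=my_tensor, dim=1, keepdim=True)
# tensor([[1],
#         [2],
#         [0]])
```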
[argmax()](https://pytorch.org/docs/stable/generated/torch.argmax.html) can get the 0D or more D tensor of the zero or more indices of the 1st maximum elements from the 0D or more D tensor of zero or more elements as shown below:
*Memos:
- `argmax()` can be used with `torch` or a tensor.
- The 1st argument with `torch` or using a tensor is `input`(Required-Type:`tensor` of `int` or `float`).
- The 2nd argument with `torch` or the 1st argument is `dim`(Optional-Type:`int`). *Setting `dim` can get the zero or more indices of the 1st maximum elements.
- The 3rd argument with `torch` or the 2nd argument is `keepdim`(Optional-Type:`bool`). *[My post](https://dev.to/hyperkai/set-keepdim-with-keepdim-argument-functions-pytorch-2fdj) explains `keepdim` argument.
- The 1D or more D tensor of one complex number or boolean value with `dim` works.
- Empty 2D or more D `input` tensor doesn't work if not setting `dim`.
- Empty 1D `input` tensor doesn't work even if setting `dim`.
```python
import torch
my_tensor = torch.tensor([[5, 4, 7, 7],
[6, 5, 3, 5],
[3, 8, 9, 3]])
torch.argmax(input=my_tensor)
my_tensor.argmax()
# tensor(10)
torch.argmax(input=my_tensor, dim=0)
torch.argmax(input=my_tensor, dim=-2)
# tensor([1, 2, 2, 0])
torch.argmax(input=my_tensor, dim=1)
torch.argmax(input=my_tensor, dim=-1)
# tensor([2, 0, 2])
my_tensor = torch.tensor([[5., 4., 7., 7.],
[6., 5., 3., 5.],
[3., 8., 9., 3.]])
torch.argmax(input=my_tensor)
# tensor(10)
my_tensor = torch.tensor([5.+7.j])
torch.argmax(input=my_tensor, dim=0)
# tensor(0)
my_tensor = torch.tensor([[True]])
torch.argmax(input=my_tensor, dim=0)
# tensor([0])
my_tensor = torch.tensor([])
my_tensor = torch.tensor([[]])
my_tensor = torch.tensor([[[]]])
torch.argmax(input=my_tensor) # Error
my_tensor = torch.tensor([])
torch.argmax(input=my_tensor, dim=0) # Error
my_tensor = torch.tensor([[]])
torch.argmax(input=my_tensor, dim=0)
# tensor([], dtype=torch.int64)
my_tensor = torch.tensor([[[]]])
torch.argmax(input=my_tensor, dim=0)
# tensor([], size=(1, 0), dtype=torch.int64)
``` | hyperkai |
1,911,826 | How to Make TikTok Logo By HTML, CSS, JavaScript | Ever wondered how to make the iconic TikTok logo using just code? Well, you’re in the right place!... | 0 | 2024-07-04T17:26:46 | https://dev.to/akiburrahaman/how-to-make-tiktok-logo-by-html-css-javascript-4kha | javascript, html, css | Ever wondered how to make the iconic [TikTok logo](https://freepnglogo.com/tiktok-logo-png) using just code? Well, you’re in the right place! TikTok has taken the world by storm, and its logo is instantly recognizable. Branding is crucial, and creating the TikTok logo from scratch can be a fantastic exercise in honing your coding skills. So, let’s dive into this step-by-step guide and create the TikTok logo using HTML, CSS, and JavaScript.
### Understanding the TikTok Logo
Before we start coding, it's essential to understand the components of the TikTok logo. The logo consists of a stylized music note, vibrant colors, and some dynamic elements that give it a lively appearance.
#### Logo Components
- **Music Note Shape**: The primary element of the logo.
- **Vibrant Colors**: A mix of cyan, magenta, and white.
- **Design Elements**: Curved lines and a sense of motion.
### Tools Needed for Coding the Logo
To create the TikTok logo, you’ll need the following tools:
- **HTML**: For the basic structure.
- **CSS**: For styling and animations.
- **JavaScript**: For adding interactivity.
- **Text Editor and Browser**: Tools like VS Code and Google Chrome.
### Setting Up Your Project
Let’s start by setting up our project folder and files.
#### Creating the Project Folder
Create a new folder on your computer and name it `tiktok-logo`.
#### Setting Up the HTML File
Inside the folder, create an `index.html` file. This file will contain the basic structure of our webpage.
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>TikTok Logo</title>
<link rel="stylesheet" href="style.css">
</head>
<body>
<div class="logo-container">
<!-- Logo elements will go here -->
</div>
<script src="script.js"></script>
</body>
</html>
```
#### Linking CSS and JavaScript
Create `style.css` and `script.js` files in the same folder and link them in your `index.html`.
### Creating the Basic Structure with HTML
Now, let’s build the HTML skeleton and add containers for the logo elements.
```html
<div class="logo-container">
<div class="note">
<div class="note-head"></div>
<div class="note-body"></div>
</div>
</div>
```
### Styling the Logo with CSS
Next, we'll style the logo using CSS.
#### Setting Up the Background
```css
body {
background-color: #000;
display: flex;
justify-content: center;
align-items: center;
height: 100vh;
margin: 0;
}
.logo-container {
position: relative;
width: 100px;
height: 150px;
}
```
#### Adding the Main Icon Shape
```css
.note {
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
}
.note-head {
width: 50px;
height: 50px;
background-color: #00F2EA;
border-radius: 50%;
position: absolute;
top: 50px;
left: 25px;
}
.note-body {
width: 20px;
height: 100px;
background-color: #EE1D52;
position: absolute;
top: 25px;
left: 40px;
}
```
### Drawing the Music Note Shape
The music note shape is critical. We’ll use CSS to create and position it accurately.
#### Using CSS to Create Shapes
```css
.note-head::before {
content: '';
position: absolute;
width: 50px;
height: 50px;
background-color: #EE1D52;
border-radius: 50%;
top: -15px;
left: -10px;
z-index: -1;
}
```
#### Positioning Elements Precisely
```css
.note-body::before {
content: '';
position: absolute;
width: 20px;
height: 100px;
background-color: #00F2EA;
top: -25px;
left: 5px;
z-index: -1;
}
```
### Adding the Vibrant Colors
We’ll use linear gradients to get the vibrant colors right.
#### Using Linear Gradients
```css
.note-head {
background: linear-gradient(135deg, #00F2EA, #EE1D52);
}
.note-body {
background: linear-gradient(135deg, #EE1D52, #00F2EA);
}
```
### Animating the Logo with CSS
Adding animations will make the logo more dynamic and engaging.
#### Basic CSS Animations
```css
@keyframes bounce {
0%, 20%, 50%, 80%, 100% {
transform: translateY(0);
}
40% {
transform: translateY(-30px);
}
60% {
transform: translateY(-15px);
}
}
.logo-container {
animation: bounce 2s infinite;
}
```
### Enhancing with JavaScript
Let's add some interactivity using JavaScript.
#### Adding Interactivity
```javascript
document.querySelector('.logo-container').addEventListener('click', function() {
alert('TikTok Logo Clicked!');
});
```
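As a possible extension (not in the original tutorial), a click could pause and resume the bounce animation instead of showing an alert:
```javascript
// Hypothetical extension: toggle the bounce animation on click
const logo = document.querySelector('.logo-container');
logo.addEventListener('click', () => {
  logo.style.animationPlayState =
    logo.style.animationPlayState === 'paused' ? 'running' : 'paused';
});
```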
### Optimizing for Different Screens
Ensuring that the logo looks good on all devices is essential.
#### Responsive Design Principles
```css
@media (max-width: 600px) {
.logo-container {
width: 80px;
height: 120px;
}
}
```
### Testing and Debugging
Testing your logo on different browsers and devices will help identify issues.
#### Common Issues and Fixes
- **Alignment Problems**: Use CSS Flexbox for centering.
- **Color Differences**: Double-check your color codes.
### Best Practices in Logo Design
Keep these best practices in mind while designing logos.
#### Keeping It Simple
Simplicity is key to making logos easily recognizable and scalable.
### Final Touches
Review the design and make any necessary adjustments before finalizing.
#### Reviewing the Design
Take a step back and look at your logo. Does it resemble the original TikTok logo?
### Conclusion
Creating the TikTok logo from scratch using code is a rewarding experience that helps improve your coding and design skills. Keep practicing and exploring new techniques to enhance your abilities. | akiburrahaman |
1,911,825 | min() and max() in PyTorch | *Memos: My post explains argmin() and argmax(). My post explains aminmax(), amin() and... | 0 | 2024-07-04T17:25:11 | https://dev.to/hyperkai/min-and-max-in-pytorch-3ol8 | pytorch, min, max, function | *Memos:
- [My post](https://dev.to/hyperkai/argmin-and-argmax-in-pytorch-580g) explains [argmin()](https://pytorch.org/docs/stable/generated/torch.argmin.html) and [argmax()](https://pytorch.org/docs/stable/generated/torch.argmax.html).
- [My post](https://dev.to/hyperkai/aminmax-amin-and-amax-in-pytorch-25ji) explains [aminmax()](https://pytorch.org/docs/stable/generated/torch.aminmax.html), [amin()](https://pytorch.org/docs/stable/generated/torch.amin.html) and [amax()](https://pytorch.org/docs/stable/generated/torch.amax.html).
- [My post](https://dev.to/hyperkai/minimum-maximum-fmin-and-fmax-in-pytorch-2hj) explains [minimum()](https://pytorch.org/docs/stable/generated/torch.minimum.html), [maximum()](https://pytorch.org/docs/stable/generated/torch.maximum.html). [fmin()](https://pytorch.org/docs/stable/generated/torch.fmin.html) and [fmax()](https://pytorch.org/docs/stable/generated/torch.fmax.html).
- [My post](https://dev.to/hyperkai/kthvalue-and-topk-in-pytorch-njk) explains [kthvalue()](https://pytorch.org/docs/stable/generated/torch.kthvalue.html) and [topk()](https://pytorch.org/docs/stable/generated/torch.topk.html).
[min()](https://pytorch.org/docs/stable/generated/torch.min.html) can get the 0D tensor of the 1st minimum element, or two of the 0D or more D tensors of the zero or more 1st minimum elements and their indices, from the one or two 0D or more D tensors of zero or more elements as shown below:
*Memos:
- `min()` can be used with [torch](https://pytorch.org/docs/stable/torch.html) or a tensor.
- The 1st argument with `torch` or using a tensor is `input`(Required-Type:`tensor` of `int`, `float` or `bool`).
- The 2nd argument with `torch` or the 1st argument is `dim`(Optional-Type:`int`). *Setting `dim` can get zero or more 1st minimum elements and their indices.
- The 2nd argument with `torch` or the 1st argument is `other`(Optional-Type:`tensor` of `int`, `float` or `bool`).
*Memos:
- It can only be used with `input`.
- This is the functionality of [minimum()](https://pytorch.org/docs/stable/generated/torch.minimum.html).
- The 3rd argument with `torch` or the 2nd argument is `keepdim`(Optional-Type:`bool`).
*Memos:
- It must be used with `dim` without `other`.
- [My post](https://dev.to/hyperkai/set-keepdim-with-keepdim-argument-functions-pytorch-2fdj) explains `keepdim` argument.
- There is `out` argument with `torch`(Optional-Type:`tensor` or `tuple`(`tensor`, `tensor`)):
*Memos:
- The type of `tensor` must be used without `dim` and `keepdim`.
- The type of `tuple`(`tensor`, `tensor`) must be used with `dim` without `other`.
- `out=` must be used.
- [My post](https://dev.to/hyperkai/set-out-with-out-argument-functions-pytorch-3ee) explains `out` argument.
- Empty 2D or more D `input` tensor without `other` tensor doesn't work if not setting `dim`.
- Empty 1D `input` tensor without `other` tensor doesn't work even if setting `dim`.
```python
import torch
my_tensor = torch.tensor([[5, 4, 7, 7],
[6, 5, 3, 5],
[3, 8, 9, 3]])
torch.min(input=my_tensor)
my_tensor.min()
# tensor(3)
torch.min(input=my_tensor, dim=0)
torch.min(input=my_tensor, dim=-2)
# torch.return_types.min(
# values=tensor([3, 4, 3, 3]),
# indices=tensor([2, 0, 1, 2]))
torch.min(input=my_tensor, dim=1)
torch.min(input=my_tensor, dim=-1)
# torch.return_types.min(
# values=tensor([4, 3, 3]),
# indices=tensor([1, 2, 0]))
tensor1 = torch.tensor([5, 4, 7, 7])
tensor2 = torch.tensor([[6, 5, 3, 5],
[3, 8, 9, 3]])
torch.min(input=tensor1, other=tensor2)
# tensor([[5, 4, 3, 5],
# [3, 4, 7, 3]])
tensor1 = torch.tensor([5., 4., 7., 7.])
tensor2 = torch.tensor([[6., 5., 3., 5.],
[3., 8., 9., 3.]])
torch.min(input=tensor1, other=tensor2)
# tensor([[5., 4., 3., 5.],
# [3., 4., 7., 3.]])
tensor1 = torch.tensor([True, False, True, False])
tensor2 = torch.tensor([[True, False, True, False],
[False, True, False, True]])
torch.min(input=tensor1, other=tensor2)
# tensor([[True, False, True, False],
# [False, False, False, False]])
my_tensor = torch.tensor([])
my_tensor = torch.tensor([[]])
my_tensor = torch.tensor([[[]]])
torch.min(input=my_tensor) # Error
my_tensor = torch.tensor([])
torch.min(input=my_tensor, dim=0) # Error
my_tensor = torch.tensor([[]])
torch.min(input=my_tensor, dim=0)
# torch.return_types.min(
# values=tensor([]),
# indices=tensor([], dtype=torch.int64))
my_tensor = torch.tensor([[[]]])
torch.min(input=my_tensor, dim=0)
# torch.return_types.min(
# values=tensor([], size=(1, 0)),
# indices=tensor([], size=(1, 0), dtype=torch.int64))
```
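As a small supplement, `keepdim=True` keeps the reduced dimension with size 1 (output abbreviated):
```python
import torch

my_tensor = torch.tensor([[5, 4, 7, 7],
                          [6, 5, 3, 5],
                          [3, 8, 9, 3]])

torch.min(input=my_tensor, dim=1, keepdim=True)
# torch.return_types.min(
# values=tensor([[4], [3], [3]]),
# indices=tensor([[1], [2], [0]]))
```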
[max()](https://pytorch.org/docs/stable/generated/torch.max.html) can get the 0D tensor of the 1st maximum element, or two of the 0D or more D tensors of the zero or more 1st maximum elements and their indices, from the one or two 0D or more D tensors of zero or more elements as shown below:
*Memos:
- `max()` can be used with `torch` or a tensor.
- The 1st argument with `torch` or using a tensor is `input`(Required-Type:`tensor` of `int`, `float` or `bool`).
- The 2nd argument with `torch` or the 1st argument is `dim`(Optional-Type:`int`). *Setting `dim` can get zero or more 1st maximum elements and their indices.
- The 2nd argument with `torch` or the 1st argument is `other`(Optional-Type:`tensor` of `int`, `float` or `bool`).
*Memos:
- It can only be used with `input`.
- This is the functionality of [maximum()](https://pytorch.org/docs/stable/generated/torch.maximum.html).
- The 3rd argument with `torch` or the 2nd argument is `keepdim`(Optional-Type:`bool`).
*Memos:
- It must be used with `dim` without `other`.
- [My post](https://dev.to/hyperkai/set-keepdim-with-keepdim-argument-functions-pytorch-2fdj) explains `keepdim` argument.
- There is `out` argument with `torch`(Optional-Type:`tensor` or `tuple`(`tensor`, `tensor`)):
*Memos:
- The type of `tensor` must be used without `dim` and `keepdim`.
- The type of `tuple`(`tensor`, `tensor`) must be used with `dim` without `other`.
- `out=` must be used.
- [My post](https://dev.to/hyperkai/set-out-with-out-argument-functions-pytorch-3ee) explains `out` argument.
- Empty 2D or more D `input` tensor without `other` tensor doesn't work if not setting `dim`.
- Empty 1D `input` tensor without `other` tensor doesn't work even if setting `dim`.
```python
import torch
my_tensor = torch.tensor([[5, 4, 7, 7],
[6, 5, 3, 5],
[3, 8, 9, 3]])
torch.max(input=my_tensor)
my_tensor.max()
# tensor(9)
torch.max(input=my_tensor, dim=0)
torch.max(input=my_tensor, dim=-2)
# torch.return_types.max(
# values=tensor([6, 8, 9, 7]),
# indices=tensor([1, 2, 2, 0]))
torch.max(input=my_tensor, dim=1)
torch.max(input=my_tensor, dim=-1)
# torch.return_types.max(
# values=tensor([7, 6, 9]),
# indices=tensor([2, 0, 2]))
tensor1 = torch.tensor([5, 4, 7, 7])
tensor2 = torch.tensor([[6, 5, 3, 5],
[3, 8, 9, 3]])
torch.max(input=tensor1, other=tensor2)
# tensor([[6, 5, 7, 7],
# [5, 8, 9, 7]])
tensor1 = torch.tensor([5., 4., 7., 7.])
tensor2 = torch.tensor([[6., 5., 3., 5.],
[3., 8., 9., 3.]])
torch.max(input=tensor1, other=tensor2)
# tensor([[6., 5., 7., 7.],
# [5., 8., 9., 7.]])
tensor1 = torch.tensor([True, False, True, False])
tensor2 = torch.tensor([[True, False, True, False],
[False, True, False, True]])
torch.max(input=tensor1, other=tensor2)
# tensor([[True, False, True, False],
# [True, True, True, True]])
my_tensor = torch.tensor([])
my_tensor = torch.tensor([[]])
my_tensor = torch.tensor([[[]]])
torch.max(input=my_tensor) # Error
my_tensor = torch.tensor([])
torch.max(input=my_tensor, dim=0) # Error
my_tensor = torch.tensor([[]])
torch.max(input=my_tensor, dim=0)
# torch.return_types.max(
# values=tensor([]),
# indices=tensor([], dtype=torch.int64))
my_tensor = torch.tensor([[[]]])
torch.max(input=my_tensor, dim=0)
# torch.return_types.max(
# values=tensor([], size=(1, 0)),
# indices=tensor([], size=(1, 0), dtype=torch.int64))
``` | hyperkai |
1,911,812 | The ultimate guide to unsubscribing from emails in 2024 | Inboxes flooded with promotional emails and newsletters can be overwhelming. Whether it's the... | 0 | 2024-07-04T17:15:50 | https://againstdata.com/blog/the-ultimate-guide-to-unsubscribing-from-emails-2024 | privacy, startup, buildinpublic, webdev |
Inboxes flooded with promotional emails and newsletters can be overwhelming. Whether it's the aftermath of an online shopping spree or an ill-fated subscription, managing unwanted emails is a task many of us face daily. The good news is that unsubscribing from these pesky messages doesn't have to be a headache. Here's a comprehensive guide on how to unsubscribe from unwanted emails across various popular email providers.
### What are the components of an email (header information & standard)
We tend to perceive email as being this sort of "old" technology, that has somehow always been around and you never really think of the world before that, especially if you are born in this millennium.
The truth is, at the dawn of email, a lot of very capable and smart people got together to figure out how this messaging service was going to work, and what guidelines and standards would make sure the system is reliable, scalable and secure.
In order to achieve this, there are several other "hidden" pieces of information that gets passed on, besides the ones that we easily recognize, such as "Subject", "From", "Date", "cc". Already when we move to something like "bcc" the number of people who know exactly how this works decreases, because the email clients we use everyday have continuously tried to simplify the user interface. And that's a good thing, but it has its drawbacks.
One of these drawbacks is that we are no longer educated, unlike some of the more savvy Internet users, on the other pieces of data attached to an ordinary email.
We can talk a lot about these components, but for the purpose of today we are going to talk about something called "Email header information". As the name suggests each email has a header information field, in which the sender encodes some information than can be read by the receiver (or the receiver's email client more precisely).
It's amazing that these fields have existed largely unchanged since the standards laid out in the 1990s by the email standardization pioneers who helped put together the first documentation on how it was all supposed to work.
### So now we know that email headers exist, so why did we not hear about it until now?
> Well, because, go figure, it wasn't really used.
>
> And why is that, you may ask?
We can begin to answer this question by telling you what information you can store in it. The sender can include in this field the information the receiver needs in case they do not want to receive the communication anymore. In plain English, the sender is telling you how (and where) to unsubscribe from the mailing list. And there are TWO ways it can do this.
- **ONE.** It can give you an unsubscribe link that you can follow to automatically remove your email address from the mailing list.
- **TWO.** It can give you an email address where to ask to Unsubscribe from the mailing list.
- **TWO and a half.** They can include both a link and an email address.
![Email header example](https://againstdata.s3.eu-central-1.amazonaws.com/upload/jcLsO9AFfxsSDQf3MlBkqVmq0qMPxe98fuyc4UeR.jpg)
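In raw form, these header fields look something like this (the addresses and token are placeholders; the fields themselves come from the email standards, RFC 2369 and, for the one-click variant, RFC 8058):
```
List-Unsubscribe: <mailto:unsubscribe@example.com>, <https://example.com/unsubscribe?token=abc123>
List-Unsubscribe-Post: List-Unsubscribe=One-Click
```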
Now in case you were wondering, this is the technology that powers the "new" Unsubscribe features of email clients from Apple, Microsoft and Google.
### So why do we need the fields when most emails have an "Unsubscribe" link at the end of the message?
Well because both the unsubscribe link and the unsubscribe email in the header field are not meant to be human readable, but rather machine readable.
Armed with this newfound knowledge of how email works, I will tell you below what are actual ways that you can get rid of spam and what is the difference between them.
### Unsubscribe manually via the link in the email body (Dangerous)
> Sounds great. Doesn't always work!
The issue when clicking on links in your email is that you don't always know where you end up. And that was exploited heavily by hackers and other malicious actors. Simply put, you SHOULD NOT click on any link from a Spam email. It may open the gate to even more spam or even cause other issues and/or compromise more of your data.
Besides, even if the sender is a legitimate company, clicking on the unsubscribe link at the bottom of the email just doesn't work, or takes such a long time that users simply give up and mark it as Spam and/or block the sender.
> To be fair, we have all done it at one point or another.
### Unsubscribe by pressing the "Unsubscribe" button in Gmail (Still Dangerous)
![Gmail Unsubscribe](https://againstdata.s3.eu-central-1.amazonaws.com/upload/ph0ChNLpu6qjzad2fQvWtUAyAskoMic4pR1mvDkU.jpg)
As I explained earlier this works using the information contained in the email header, namely the unsubscribe link provided by the sender, in the email body). So that shiny Unsubscribe button next to some emails is powered by this technology.
The issue with this is that more often than not, this is not present. And even if it is present it could still pose similar dangers as the unsubscribe link in the email body that we were talking about earlier.
This is the reason why, up until 02.2024, no large email provider from the likes of Apple, Microsoft and Google supported this kind of Unsubscribe feature, even if the link was in the body of the email or in the header. This is simply because it could guide users into all sorts of problems. But that all changed when Google started making the header unsubscribe information mandatory for all bulk email senders on Gmail. The main reason behind it is that with the advent and implementation of large-scale AI it is much easier to screen for malicious or potentially malicious links. And besides, if you are a legitimate marketer, you will likely adhere to the policies, since you want to continue being compliant and send your legitimate communications.
### Unsubscribe via the email in the header through the email client. (Generally safe)
And what about the unsubscribe email address in the email header? Well, that was the technical solution initially used by the Apple Unsubscribe feature, where if a user pressed it, an email was generated from the user's account and sent to that address. You could have seen the email in your "Sent" folder, after pressing the Unsubscribe button on the Apple email client.
This was considered safer, since it did not lead the user to an unknown and hard-to-verify web address, but in the end it performed the same function.
> What is on the receiving end of an unsubscribe email or link? (Here be dragons...)
These unsubscribe email addresses provided by marketers are not actual addresses, but rather "pseudo-emails": servers automatically process the requests and apply the changes to the mailing lists.
There are also several security mechanisms involved, with security tokens and means to prevent abuse of this system, but for the purpose of this article, we will not deep dive into that. You can find some additional information on this [here](https://help.groundhogg.io/article/890-how-the-list-unsubscribe-header-works).
| Provider         | Email Unsubscribe | Link Unsubscribe |
| ---------------- | ----------------- | ---------------- |
| Gmail            | YES               | YES              |
| Yahoo            | YES               | NO               |
| Outlook (web)    | YES               | NO               |
| iCloud (desktop) | YES               | NO               |
| iCloud (mobile)  | YES               | NO               |
### Unsubscribe by blocking the sender and using a filter (on the user end of the problem)
![Creating a filter](https://againstdata.s3.eu-central-1.amazonaws.com/upload/jCyVAWNfcfix7X40qiLFhHdymNdtNDk12PIaPMYj.png)
One other way to unsubscribe is to create a filter or rule on your end, via the options available in your email client. These filters work by setting criteria that put your received emails in "buckets" or trigger some form of predefined action. Although tougher to set-up, these are very effective and do not expose you in any way to any third-party website or script.
Due to their initial lengthy setup, not a lot of people use them, but recently some tools are available on the market that can help you speed that process up. One such tool is [AgainstData](https://app.againstdata.com/requests/all) that helps you quickly go through your Inbox and sort the promotional emails, from other important correspondence. You can then choose to also bulk delete a lot of the old emails that are using up your precious storage space.
Regardless of the method you choose, the good part about unsubscribing and filtering out unwanted emails is that it is there to stay, keeping you focused on your tasks and helps you save precious minutes every day. And these minutes add up.
### Unsubscribe through your email provider: Gmail case study
Gmail also offers a native unsubscribe feature to solve the same problem. This uses the core Gmail application to detect and flag emails that are likely promotional, or spam and it prompts the user with a message in case they want to unsubscribe in a relatively fast way.
When you turn on the Auto Unsubscribe feature, Gmail scans your incoming emails, analyzing factors like the sender's reputation, the content of the email, and user feedback. These algorithms help Gmail determine which emails you might not want, and they tend to get better over time. If Gmail thinks an email is potentially unwanted, it will place an unsubscribe link at the top of the message. This link comes with a brief message explaining why Gmail thinks the email is promotional, giving you the option to opt out of future messages from that sender.
Keep in mind, while Gmail's Auto Unsubscribe feature is highly effective, it's not perfect. Some unwanted emails might still slip through, especially if they're from new or unknown senders. Occasionally, Gmail might also mistakenly flag a legitimate email as promotional or spam.
### Sending a deletion request
But of course, unsubscribing has its limitations. Even if you unsubscribe from a company, they still hold some, if not all, of your personal data. This is one of the major downsides of "just" unsubscribing from some newsletters and services.
For the companies that you know for sure you do not want to use anymore, you can also request them to delete your data. Luckily if you live in the E.U. or in a jurisdiction with personal data protection legislation you can request this as a legal right.
The best part is that you don't need any legal knowledge, or even to look for where to send such a request. You can simply use a tool like Data Against Data to [send a deletion request with one click](https://app.againstdata.com). | extrabright |
1,911,806 | Differentiating onclick and addEventListener in JavaScript | Overview This article provides an insightful examination of the contrasting approaches... | 0 | 2024-07-04T17:15:06 | https://dev.to/starneit/differentiating-onclick-and-addeventlistener-in-javascript-bh3 |
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qav3z2wp45k0njywiw45.png)
## Overview
This article provides an insightful examination of the contrasting approaches to event handling in JavaScript: the familiar **onclick** and the versatile **addEventListener** method. By delving into the nuances of these two mechanisms, we uncover the unique advantages they offer and the scenarios in which they excel. Through comprehensive examples and practical use cases, we’ll dissect the syntax, behavior, and compatibility of both onclick and **addEventListener**, empowering developers to make informed choices when implementing event-driven interactions in their web applications. Whether it’s a straightforward click action or a more complex event management requirement, this article equips readers with the knowledge to navigate between these two event handling paradigms effectively.
## Definitions
Here are the definitions:
### onclick in HTML:
**onclick** is an HTML attribute used to attach JavaScript code that will execute when a specific element, such as a button or a link, is clicked by the user. This attribute allows developers to define inline event handling directly within the HTML markup. When the element is clicked, the specified JavaScript code is triggered, enabling interactivity and user-initiated actions. While simple to use, onclick is limited to a single event handler and can become cumbersome when managing multiple events on the same element or handling more complex scenarios.
### addEventListener in JavaScript:
**addEventListener** is a method in JavaScript that allows developers to dynamically attach event handlers to HTML elements. It provides a more flexible and robust approach compared to inline event attributes like onclick. With **addEventListener**, multiple event listeners can be added to the same element, and event handling can be more organized and maintainable. It offers control over event propagation, capturing, and bubbling phases. Additionally, **addEventListener** accommodates various event types beyond just clicks, expanding its utility for handling a wide range of user interactions and application behaviors.
## Usage
### onclick
```
<!DOCTYPE html>
<html>
<head>
<title>onclick Example</title>
</head>
<body>
<button id="myButton">Click me</button>
<script>
function handleClick() {
alert("Button clicked!");
}
document.getElementById("myButton").onclick = handleClick;
</script>
</body>
</html>
```
In this example, the onclick property is used to directly assign a JavaScript function (handleClick) to the button’s click event. When the button is clicked, the handleClick function is executed, displaying an alert.
### addEventListener
```
<!DOCTYPE html>
<html>
<head>
<title>addEventListener Example</title>
</head>
<body>
<button id="myButton">Click me</button>
<script>
function handleClick() {
alert("Button clicked!");
}
document.getElementById("myButton").addEventListener("click", handleClick);
</script>
</body>
</html>
```
In this example, the **addEventListener** method is used to attach the same **handleClick** function to the button’s click event. This method provides more flexibility and allows for multiple event listeners to be added to the same element.
## Differences
Difference between addEventListener and onclick:
### addEventListener:
- **addEventListener** allows the addition of multiple events to a specific element.
- It can accept a third argument that provides control over event propagation.
- Events added using **addEventListener** can only be attached within `<script>` elements or in external JavaScript files.
- Compatibility may be limited, as it does not work in older versions of Internet Explorer, which use **attachEvent** instead.
### onclick:
- **onclick** is used to attach a single event to an element.
- It is essentially a property and may get overwritten.
- Event propagation cannot be controlled directly with **onclick**.
- **onclick** can also be added directly as an HTML attribute, offering a simpler integration method.
- It is widely supported and functions across various browsers.
The choice between addEventListener and onclick depends on the complexity of event management required and the compatibility needs of the application.
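To make the contrast concrete, here is a minimal sketch reusing the button markup from the examples above (handler names are made up):
```
const btn = document.getElementById("myButton");

function logClick() { console.log("first handler"); }
function trackClick() { console.log("second handler"); }

// addEventListener: both handlers run on every click, and a third
// argument (options or a boolean) can control capturing behavior.
btn.addEventListener("click", logClick);
btn.addEventListener("click", trackClick, { capture: false });

// onclick is a property, so the second assignment overwrites the first:
// only trackClick runs when using onclick this way.
btn.onclick = logClick;
btn.onclick = trackClick;
```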
## Conclusion
In conclusion, understanding the distinctions between **addEventListener** and **onclick** is essential for effective event handling in JavaScript. While both methods enable interaction and responsiveness, they cater to different levels of complexity and compatibility requirements.
**addEventListener** emerges as a versatile tool, offering the flexibility to attach multiple events to a single element. Its capacity to control event propagation and its suitability for structured scripting make it a robust choice for modern applications. However, developers should be cautious of its limited compatibility with older browsers.
On the other hand, **onclick** provides a straightforward means of attaching a single event to an element, making it a suitable choice for simpler interactions. Its direct integration as an HTML attribute streamlines implementation but may lack the comprehensive control and scalability offered by **addEventListener**.
In the end, the selection between these methods hinges on the project’s scope, desired functionality, and the targeted user base. By grasping the strengths and limitations of each approach, developers can make informed decisions, crafting seamless and responsive web experiences tailored to their unique needs. | starneit |
|
1,911,811 | DSL in Ruby with Metaprogramming | Metaprogramming in Ruby allows developers to write programs that can modify themselves at runtime.... | 0 | 2024-07-04T17:14:46 | https://dev.to/francescoagati/dsl-in-ruby-with-metaprogramming-15p2 | ruby, dsl, metaprogramming | Metaprogramming in Ruby allows developers to write programs that can modify themselves at runtime. This powerful feature can be leveraged to create expressive and flexible domain-specific languages (DSLs).
#### Defining the Structure
First, let's define the core class of our DSL, `TravelSearch`, along with auxiliary classes `Air`, `Hotel`, and `Train`. These classes will encapsulate the details of each travel component.
```ruby
class TravelSearch
attr_accessor :air, :hotel, :train
def initialize(&block)
instance_eval(&block)
end
def air(&block)
@air = Air.new(&block)
end
def hotel(&block)
@hotel = Hotel.new(&block)
end
def train(&block)
@train = Train.new(&block)
end
def to_s
"Air: #{@air}\nHotel: #{@hotel}\nTrain: #{@train}"
end
end
class Air
attr_accessor :from, :to, :date
def initialize(&block)
instance_eval(&block)
end
def from(from)
@from = from
end
def to(to)
@to = to
end
def date(date)
@date = date
end
def to_s
"from #{@from} to #{@to} on #{@date}"
end
end
class Hotel
attr_accessor :city, :date, :nights
def initialize(&block)
instance_eval(&block)
end
def city(city)
@city = city
end
def date(date)
@date = date
end
def nights(nights)
@nights = nights
end
def to_s
"in #{@city} on #{@date} for #{@nights} nights"
end
end
class Train
attr_accessor :from, :to, :via, :date, :with_seat_reservation
def initialize(&block)
instance_eval(&block)
end
def from(from)
@from = from
end
def to(to)
@to = to
end
def via(via)
@via = via
end
def date(date)
@date = date
end
def with_seat_reservation(with_seat_reservation)
@with_seat_reservation = with_seat_reservation
end
def to_s
"from #{@from} to #{@to} via #{@via} on #{@date} with seat reservation: #{@with_seat_reservation}"
end
end
```
### Utilizing `instance_eval` for DSL Construction
The `instance_eval` method is a key component in our DSL. It allows us to evaluate the given block within the context of the current object, effectively changing the `self` to the object the method is called on. This is crucial for making the DSL syntax intuitive and clean.
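Here is a minimal illustration of that context switch, separate from the travel DSL (the `Box` class is made up for the example):
```ruby
class Box
  def initialize(&block)
    # Inside the block, `self` is this new Box instance,
    # so a bare `label "gift"` call resolves to the method below.
    instance_eval(&block)
  end

  def label(text)
    @label = text
  end
end

box = Box.new { label "gift" }
p box # => #<Box:0x... @label="gift">
```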
### Creating the DSL Entry Point
We'll define a method `search_travel` as the entry point for our DSL. This method initializes a `TravelSearch` object and evaluates the block within its context.
```ruby
def search_travel(&block)
travel_search = TravelSearch.new(&block)
puts travel_search
end
```
### Example Usage
Here's an example of how our DSL can be used to define a travel search:
```ruby
search_travel do
air do
from "SFO"
to "JFK"
date "2011-12-25"
end
hotel do
city "New York"
date "2011-12-25"
nights 3
end
train do
from "Milan"
to "Rome"
via "Florence"
date "2011-12-25"
with_seat_reservation true
end
end
```
### Output
Running the above code will produce the following output:
```
Air: from SFO to JFK on 2011-12-25
Hotel: in New York on 2011-12-25 for 3 nights
Train: from Milan to Rome via Florence on 2011-12-25 with seat reservation: true
```
Using `instance_eval` and metaprogramming, we created a flexible and readable DSL for travel searches in Ruby. This approach allows users to define complex data structures with minimal syntax, making the code more expressive and easier to understand. | francescoagati |
1,911,559 | What is BitPower Smart Contract | Introduction With the rapid development of blockchain technology, smart contracts have gradually... | 0 | 2024-07-04T12:59:58 | https://dev.to/woy_ca2a85cabb11e9fa2bd0d/what-is-bitpower-smart-contract-3e1i | btc |
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ezx2etkmbs638zsr8e99.png)
Introduction
With the rapid development of blockchain technology, smart contracts have gradually become an important part of the decentralized finance (DeFi) ecosystem. As an innovative blockchain platform, BitPower uses smart contracts to realize a variety of decentralized financial services. In this article, we will introduce in detail what BitPower smart contracts are and their applications and advantages in the blockchain and DeFi fields.
Overview of Smart Contracts
A smart contract is a self-executing protocol in which the terms of the contract are directly written into the code and run on the blockchain network. The characteristics of smart contracts are decentralization, automatic execution, transparency and immutability, which make it a basic technology for realizing decentralized applications (DApp). Through smart contracts, users can conduct trusted transactions and operations without intermediaries.
Features of BitPower Smart Contracts
1. Decentralization
BitPower smart contracts are completely decentralized, and no centralized institution or individual can control or change the rules and operations of the contract. All transactions and operations are publicly recorded on the blockchain, and anyone can verify their authenticity and validity.
2. Security
BitPower smart contracts use advanced encryption technology and security measures to ensure the security of users' assets and data. Smart contracts are rigorously audited and tested before deployment to prevent vulnerabilities and attacks. In addition, BitPower smart contracts are open source, and anyone can view and review their code, further enhancing the transparency and security of the system.
3. Automatic execution
The automatic execution feature of smart contracts enables the BitPower platform to provide an efficient and seamless user experience. Once the predetermined conditions are met, the contract is automatically executed without human intervention. This not only improves efficiency, but also reduces human errors and delays.
4. Transparency and immutability
All BitPower smart contract operations and transaction records are open and transparent on the blockchain and can be viewed by anyone at any time. At the same time, the immutability of the blockchain ensures the authenticity and reliability of the records, preventing data tampering and fraud.
Application of BitPower smart contracts
1. Decentralized lending
Smart contracts on the BitPower platform allow users to conduct lending transactions without intermediaries. Borrowers can obtain loans by providing collateral, while lenders earn interest by providing funds. Smart contracts automatically manage all terms and conditions in the lending process, including interest rate calculation, collateral management, and loan repayment.
2. Cross-chain transfer
BitPower's smart contracts support cross-chain asset transfers, allowing users to seamlessly transfer assets between different blockchain networks. This cross-chain transfer mechanism not only improves the liquidity of assets, but also expands the user's operating scope and flexibility.
3. Yield farming
On the BitPower platform, users can get high returns by participating in liquidity mining and yield farming. Smart contracts automatically distribute rewards and calculate returns based on the amount of liquidity provided by users and the holding time. This mechanism has attracted a large number of users to participate, increasing the overall liquidity and activity of the platform.
Advantages of BitPower smart contracts
1. Efficient and low-cost
Due to the automatic execution characteristics of smart contracts, transactions and operations on the BitPower platform are extremely efficient. At the same time, the decentralized architecture reduces intermediary fees and operating costs, allowing users to enjoy efficient financial services at a lower cost.
2. Globalization and borderlessness
BitPower smart contracts are based on blockchain technology and can operate globally and borderlessly. Anyone with an Internet connection can participate without worrying about geographical location and regulatory restrictions. This global nature provides users with more opportunities and choices.
3. Innovation and Diversification
BitPower smart contracts support a variety of innovative financial services and applications, including decentralized lending, cross-chain transfers, yield farming, etc. These diverse application scenarios not only meet the diverse needs of users, but also promote the development and innovation of the entire DeFi ecosystem.
Conclusion
As an important technology in the field of blockchain and DeFi, BitPower smart contracts provide efficient, low-cost and global financial services with its characteristics of decentralization, security, automatic execution, transparency and immutability. Through a variety of applications such as decentralized lending, cross-chain transfers and yield farming, BitPower smart contracts are constantly promoting financial innovation and the development of the ecosystem. In the future, with the further maturity of technology and the continuous expansion of applications, BitPower smart contracts will bring convenient and reliable financial services to more users and promote the widespread application and popularization of blockchain technology. | woy_ca2a85cabb11e9fa2bd0d |
1,911,810 | Mumbai Glamour Offer Her Mumbai housewife call girls 5* Hotels | What Makes Hiring Mumbai Escorts Better Than Dating? Did you go through a breakup very recently? Are... | 0 | 2024-07-04T17:14:01 | https://dev.to/amar_leen_134/mumbai-glamour-offer-her-mumbai-housewife-call-girls-5-hotels-395d | What Makes Hiring Mumbai Escorts Better Than Dating?
Did you go through a breakup very recently? Are you going through stress and depression because of your bad relationship? Well, you need someone who can comfort you and be with you in these difficult times. That’s why you should hire the **[Mumbai housewife call girls](https://mayra.club/mumbai-housewife-call-girls/)** to be your companion and have some good time with them. They are compassionate and understanding women who can make you forget your sorrow. Not only that, they can please you any way you want so that you do not miss your ex anymore. You will soon come out of the grasps of sadness and start reveling in their beauty and charm. Moreover, since there is no relationship between you two, there is no emotional attachment or dependence. You will never miss the girl if she is unavailable; you can simply hire someone else.
Mumbai Call Girls: Worth The Price
Hiring a call girl in Mumbai might be a costly affair. This is the primary reason why many people think twice before hiring one. This is absolutely prudent that you get a valuable return to the money you have to pay. That is why one should always hire these Mumbai Call Girls every time.
These girls are definitely worth spending money. Not only these girls are good looking, but also they are amazing at work. They understand your needs to the fullest and make sure they do everything possible to ensure client satisfaction.
Whether you want them to be your arm candy in a corporate party or may be just have a nice evening walk or watch a movie. These girls will be by your side all the time and you will not feel uncomfortable around them at all.
Call Girls Mumbai: Perfect Companion On Your Holiday
Thinking of going somewhere on your upcoming holiday? Well, this can be your best experience in a while if you hire one of the Call Girls Mumbai and go on this trip. These girls are the best companion on a trip. Whether this is a long-awaited holiday or you are planning a business trip, these girls can be really useful.
Wherever you visit they can always be with you. It does not matter whether you are visiting a place for the first time or you have been there before, these girls will never make you feel bore at all. With their presence it will be a lot more enjoyable than ever. With them one can even share their darkest of secrets, talk about their feelings and most importantly all these information that you share will be always a secret. These girls know how to keep it a secret and they will never ever spit out any of those information for sure. as you promise to keep their identity a secret they will definitely do the same. | amar_leen_134 |
|
1,910,501 | Bridging the Gap: Cross-Language Encryption and Decryption between Node.js and Go | In the fast-evolving world of software development, ensuring the security of data as it moves between... | 0 | 2024-07-04T17:11:51 | https://dev.to/shaastry_mss/bridging-the-gap-cross-language-encryption-and-decryption-between-nodejs-and-go-22k | node, go, encryption, decryption |
In the fast-evolving world of software development, ensuring the security of data as it moves between different systems and platforms is paramount. Recently, I found myself facing a unique challenge: implementing a secure, cross-language encryption and decryption mechanism using Node.js and Go. Asymmetric encryption, with its promise of robust security through public and private key pairs, seemed like the perfect solution. However, achieving compatibility between these two languages proved to be a formidable task.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ned4rjqzhcwxnydr9umc.png)
This guide aims to help you seamlessly integrate security features across different programming environments with a clear, step-by-step approach to mastering cross-language asymmetric encryption.
**Asymmetric encryption**, also known as **public-key** cryptography, is a type of encryption that uses a pair of keys for encryption and decryption: a public key and a private key.
Here’s how it works:
**Key Components:**
_Public Key:_ This key is shared openly and can be distributed widely. It is used to encrypt data.
_Private Key:_ This key is kept secret and is known only to the owner. It is used to decrypt data.
Node.js Encryption with OAEP and SHA-256
```javascript
const fs = require('fs');
const crypto = require('crypto');
// Load the public key
const publicKey = fs.readFileSync('path/to/public_key.pem', 'utf8');
// Encrypt with OAEP padding
const Encrypt = (textToEncrypt) => {
return crypto.publicEncrypt(
{
key: publicKey,
padding: crypto.constants.RSA_PKCS1_OAEP_PADDING,
oaepHash: "sha256",
},
Buffer.from(textToEncrypt)
  ).toString('base64'); // base64, not 'utf8': the ciphertext is raw binary and must survive transport
};
// Test encryption
const encryptedText = Encrypt("Hello, World!");
console.log("Encrypted Text:", encryptedText);
```
Node.js Decryption with OAEP and SHA-256
```javascript
const fs = require('fs');
const crypto = require('crypto');
// Load the private key
const privateKey = fs.readFileSync('path/to/private_key.pem', 'utf8');
// Decrypt with OAEP padding
const decryptWithPrivateKeyOaep = (encryptedText) => {
  const buffer = Buffer.from(encryptedText, 'base64'); // matches the base64 output of the encryption step
const decrypted = crypto.privateDecrypt(
{
key: privateKey,
padding: crypto.constants.RSA_PKCS1_OAEP_PADDING,
oaepHash: "sha256",
},
buffer
);
return decrypted.toString();
};
// Test decryption
const decryptedText = decryptWithPrivateKeyOaep(encryptedText);
console.log("Decrypted Text:", decryptedText);
```
Encryption (Golang)
```go
package main
import (
"crypto/rand"
"crypto/rsa"
"crypto/sha256"
"encoding/base64"
"fmt"
"io/ioutil"
"log"
"crypto/x509"
"encoding/pem"
)
// Function to load public key
func loadPublicKey(path string) (*rsa.PublicKey, error) {
pubBytes, err := ioutil.ReadFile(path)
if err != nil {
return nil, err
}
block, _ := pem.Decode(pubBytes)
if block == nil || block.Type != "PUBLIC KEY" {
return nil, fmt.Errorf("failed to decode PEM block containing public key")
}
	// "PUBLIC KEY" PEM blocks are PKIX/SPKI-encoded, so parse accordingly
	pub, err := x509.ParsePKIXPublicKey(block.Bytes)
	if err != nil {
		return nil, err
	}
	rsaPub, ok := pub.(*rsa.PublicKey)
	if !ok {
		return nil, fmt.Errorf("not an RSA public key")
	}
	return rsaPub, nil
}
// Function to encrypt with public key using OAEP padding
func encryptWithPublicKey(publicKey *rsa.PublicKey, textToEncrypt string) (string, error) {
encryptedBytes, err := rsa.EncryptOAEP(sha256.New(), rand.Reader, publicKey, []byte(textToEncrypt), nil)
if err != nil {
fmt.Println("error while encrypting", err)
return "", err
}
return base64.StdEncoding.EncodeToString(encryptedBytes), nil
}
func main() {
// Load public key
publicKey, err := loadPublicKey("path/to/public_key.pem")
if err != nil {
log.Fatalf("failed to load public key: %v", err)
}
// Test encryption
encryptedText, err := encryptWithPublicKey(publicKey, "Hello, World!")
if err != nil {
log.Fatalf("failed to encrypt: %v", err)
}
fmt.Println("Encrypted Text:", encryptedText)
}
```
Decryption (Golang)
```go
package main
import (
"crypto/rand"
"crypto/rsa"
"crypto/sha256"
"encoding/base64"
"fmt"
"io/ioutil"
"log"
"crypto/x509"
"encoding/pem"
)
// Function to load private key
func loadPrivateKey(path string) (*rsa.PrivateKey, error) {
privBytes, err := ioutil.ReadFile(path)
if err != nil {
return nil, err
}
block, _ := pem.Decode(privBytes)
if block == nil || block.Type != "RSA PRIVATE KEY" {
return nil, fmt.Errorf("failed to decode PEM block containing private key")
}
return x509.ParsePKCS1PrivateKey(block.Bytes)
}
// Function to decrypt with private key using OAEP padding
func decryptWithPrivateKeyOAEP(privateKey *rsa.PrivateKey, encryptedText string) (string, error) {
encryptedBytes, err := base64.StdEncoding.DecodeString(encryptedText)
if err != nil {
return "", fmt.Errorf("error decoding base64 string: %v", err)
}
decryptedBytes, err := rsa.DecryptOAEP(sha256.New(), rand.Reader, privateKey, encryptedBytes, nil)
if err != nil {
return "", fmt.Errorf("error while decrypting: %v", err)
}
return string(decryptedBytes), nil
}
func main() {
// Load private key
privateKey, err := loadPrivateKey("path/to/private_key.pem")
if err != nil {
log.Fatalf("failed to load private key: %v", err)
}
// Example encrypted text (from the Node.js encryption)
encryptedText := "YOUR_ENCRYPTED_TEXT_HERE"
// Test decryption
decryptedText, err := decryptWithPrivateKeyOAEP(privateKey, encryptedText)
if err != nil {
log.Fatalf("failed to decrypt: %v", err)
}
fmt.Println("Decrypted Text:", decryptedText)
}
```
## **Understanding RSA Padding Schemes**
**Node.js Padding Schemes**
**1. RSA_PKCS1_PADDING:**
**Description:** This is the traditional RSA padding scheme, also known as PKCS#1 v1.5. It is less secure compared to OAEP and is not recommended for new implementations.
**Default Behavior:** If no padding scheme is specified, Node.js's crypto.publicEncrypt defaults to using RSA_PKCS1_PADDING.
**2. RSA_PKCS1_OAEP_PADDING:**
**Description:** This is the newer and more secure padding scheme called Optimal Asymmetric Encryption Padding (OAEP). It includes additional randomness and hash functions to increase security.
**Explicit Specification:** When you specify padding: crypto.constants.RSA_PKCS1_OAEP_PADDING, you are explicitly telling the function to use this padding scheme.
## Go Equivalent Padding Schemes
In Go, the standard library crypto/rsa provides support for both PKCS#1 v1.5 and OAEP padding schemes.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/opd1q0wfaleszgdkzynf.png)
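For reference, here is that mapping in code form, as a self-contained sketch with a throwaway key pair (the message text is arbitrary):
```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"fmt"
)

func main() {
	// Throwaway key, just to demonstrate both padding schemes.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	msg := []byte("Hello, World!")

	// PKCS#1 v1.5: Go's equivalent of Node's RSA_PKCS1_PADDING.
	c1, _ := rsa.EncryptPKCS1v15(rand.Reader, &key.PublicKey, msg)

	// OAEP with SHA-256: Go's equivalent of Node's
	// RSA_PKCS1_OAEP_PADDING with oaepHash: "sha256".
	c2, _ := rsa.EncryptOAEP(sha256.New(), rand.Reader, &key.PublicKey, msg, nil)

	fmt.Println(len(c1), len(c2)) // both are 256 bytes for a 2048-bit key
}
```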
**Concluding Remarks:**
I hope this guide helps you navigate the complexities of cross-language encryption and decryption between Node.js and Go.
By understanding and correctly implementing the corresponding padding schemes, you can ensure secure data transmission across different programming environments.
Happy coding! | shaastry_mss |
1,911,797 | I made an app that helps you clean your inbox in under 5 minutes | I just built a new feature in our recently launched AgainstData web app. It's called Unsubscribe, and... | 0 | 2024-07-04T17:09:16 | https://dev.to/extrabright/i-made-an-app-that-helps-you-clean-your-inbox-in-under-5-minutes-lhp | webdev, privacy, startup, buildinpublic |
I just built a new feature in our recently launched [AgainstData](https://againstdata.com) web app. It's called Unsubscribe, and it works with Gmail (for now).
With just one click, you can unsubscribe, keep, or block newsletters and promotions you no longer wish to receive.
We took a privacy-first approach, which means that:
1. We don’t share or sell any user email or personal data.
2. We don’t have any dubious partners and we don’t plan to.
Let us know what you think!
### Combat spam and streamline your inbox
Our newest update is the ultimate solution to combat spam and streamline your inbox. We’re happy to introduce two powerful features: **one-click unsubscribe** and **bulk delete**, designed to tackle inbox clutter head-on.
Plus, you’ll enjoy an enhanced experience with a sleek design and seamless user interface.
### Uncover annoying email lists
Using AgainstData to declutter your inbox takes almost zero effort.
![Email lists](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/39y8epknxjdfnfbf1knk.png)
Simply [sign in using your Gmail account](https://app.againstdata.com). Our algorithm reveals the number of mailing lists associated with your email and generates a comprehensive list of companies sending you emails.
Now you can take action!
### Unsubscribe that works every time
Once you've received the comprehensive list of companies sending you emails, it's time to take action with just one click.
![AgainstData unsubscribe](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cpxz0qa5e4doxal8o2fe.png)
Select "Unsubscribe" to stop receiving emails from that particular email list. Tap "Keep" to continue receiving emails from important senders.
### Bulk delete to get to inbox zero
After unsubscribing, those old emails may still clutter your inbox. That's why we've simplified the process of achieving inbox zero, even for the most cluttered emails.
Choose "Delete emails on unsubscribe" to automatically remove old emails from unsubscribed mailing lists. Opt for "Delete emails on keep" to automatically clear old emails from the senders you choose to keep.
### CO2 reduction calculator
Deleting old emails and aiming for inbox zero also reduces your CO2 footprint associated with storing emails. By taking these actions, you're not just benefiting yourself, but also contributing to a healthier planet.
![AgainstData CO2 calculator](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/85gyr64wmxqs725pkfgo.png)
Our CO2 calculator demonstrates the positive impact you make by deleting unwanted emails and shows you the environmental difference you’re making with each click.
### Join us in creating a cleaner, more efficient inbox
At AgainstData, we're committed to enhancing your email experience while prioritizing your privacy and environmental impact. We're excited for you to try out our new features and look forward to hearing your feedback. Together, we can create a cleaner, more efficient inbox and make a positive difference for the planet.
Let us know your thoughts and help us continue to improve.
Happy decluttering! | extrabright |
1,911,804 | Android users: I need your help! | I’m trying to release a version of my new game (Touch me When) for Android, but since I’m an... | 0 | 2024-07-04T17:08:09 | https://dev.to/marciofrayze/android-users-i-need-your-help-1ja8 | android, app | I’m trying to release a version of my new game (Touch me When) for Android, but since I’m an individual developer, Google imposes a more strict verification process before allowing me to publish a new App.
They require a 14-day period of closed beta testing with at least 20 testers. Can you help me with that? All you have to do is download the closed beta version and open it from time to time (like once a day) to count as an active user.
To become a tester, all you need to do is join a Google group (they use this to control who can download the App, so you must join with the same email you use on your Android account) at: [https://groups.google.com/g/touchmewhen-testers](https://groups.google.com/g/touchmewhen-testers).
And then you should be able to download the App at: [https://play.google.com/store/apps/details?id=tech.segunda.touchwhen](https://play.google.com/store/apps/details?id=tech.segunda.touchwhen).
Thanks a lot for your help! 😄 | marciofrayze |
1,911,802 | Using APIs in a Web Application: Integration and Optimization | Introduction Hello, everyone! I am David CHOKOLA, a backend developer passionate about optimizing web... | 0 | 2024-07-04T17:03:37 | https://dev.to/cm_dav_c07dd21b43f0b869c5/using-apis-in-a-web-application-integration-and-optimization-575l | hng | **Introduction**
Hello, everyone! I am David CHOKOLA, a backend developer passionate about optimizing web applications and integrating innovative solutions. Today, I will share a recent experience where I integrated and optimized the use of APIs in a web application. But before diving into the technical details, let me tell you about the amazing opportunity that the HNG internship represents and why I am excited to embark on this journey.
In a recent project, our development team was working on a complex web application that required the integration of several external services. The efficient use of APIs (Application Programming Interfaces) proved essential to provide a rich and seamless user experience. However, integrating and optimizing APIs can present many challenges, including performance, security, and error handling.
**Step 1: Identifying Necessary APIs**
The first step was to identify the external APIs we needed. We listed the key features of our application and searched for reliable APIs to meet these needs. For example, we integrated a payment API for financial transactions, a geolocation API for location-based services, and a messaging API for notifications.
**Step 2: Authentication and Security**
For each integrated API, we had to set up secure authentication mechanisms. We used OAuth 2.0 to manage permissions and ensure that only authenticated requests could access external services. We also implemented API key management policies to protect our sensitive information.
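To make this concrete, here is a minimal sketch of the pattern we followed; the endpoints, credentials, and response shape below are placeholders, not our actual providers'.
```jsx
const axios = require('axios');

// Hypothetical OAuth 2.0 client-credentials flow; real token endpoints
// often expect form-encoded bodies, so check your provider's docs.
async function getAccessToken() {
  const response = await axios.post('https://auth.example.com/oauth/token', {
    grant_type: 'client_credentials',
    client_id: process.env.CLIENT_ID,
    client_secret: process.env.CLIENT_SECRET,
  });
  return response.data.access_token;
}

// Every outgoing request carries the token as a Bearer credential
async function callPaymentApi() {
  const token = await getAccessToken();
  return axios.get('https://payments.example.com/v1/transactions', {
    headers: { Authorization: `Bearer ${token}` },
  });
}
```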
**Step 3: Managing API Requests**
Once the APIs were integrated, we optimized request handling to improve our application's performance. We implemented a caching system to store responses from frequently used APIs, thereby reducing the number of external requests and improving response times. We used Redis as a caching solution due to its speed and reliability.
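A simplified version of that caching layer looks like this, using the node-redis v4 client; the 60-second TTL is illustrative and should be tuned per API.
```jsx
const { createClient } = require('redis');
const axios = require('axios');

const redis = createClient(); // defaults to redis://localhost:6379
const ready = redis.connect(); // connect() is async; resolve it once at startup

async function cachedGet(url, ttlSeconds = 60) {
  await ready;

  const cached = await redis.get(url);
  if (cached) return JSON.parse(cached); // cache hit: no external request

  const { data } = await axios.get(url);
  // Store with an expiry so stale responses age out automatically
  await redis.set(url, JSON.stringify(data), { EX: ttlSeconds });
  return data;
}
```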
**Step 4: Error Handling**
Error handling is crucial when integrating APIs. We implemented exception handling to fail gracefully and return clear error messages to users, along with automatic retry strategies that reattempt requests after temporary failures.
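The retry logic can be as small as a helper like this; the attempt count and delays are illustrative defaults, not our production values.
```jsx
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Retries a failing async call with exponential backoff: 200ms, 400ms, 800ms...
async function withRetry(fn, { attempts = 3, baseDelayMs = 200 } = {}) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) await sleep(baseDelayMs * 2 ** i);
    }
  }
  throw lastError; // surface the failure after the final attempt
}

// Usage: await withRetry(() => axios.get('https://api.example.com/status'));
```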
**Step 5: Monitoring and Maintenance**
After integrating the APIs, we set up monitoring tools to track the performance and availability of external services. We used tools like New Relic and Grafana to monitor API response times and detect anomalies. This proactive monitoring allows us to quickly respond to issues and ensure a seamless user experience.
Integrating and optimizing APIs was a rewarding experience that significantly enhanced the functionality and performance of our web application. This experience also showed me the importance of continuous learning and collaboration with industry experts. That's why I'm excited to join the HNG internship.
The HNG internship is an incredible opportunity for developers to hone their skills, work on real projects, and collaborate with talented professionals. I am eager to be part of this program and learn from the best.
If you are a developer looking to enhance your skills, I encourage you to explore the HNG Internship program. It's a fantastic platform for learning, growing, and connecting with like-minded individuals.
Additionally, if you are a company looking for talented developers, the HNG Hire platform is the ideal place to find interns and qualified professionals ready to make a difference.
**Conclusion**
Integrating and optimizing APIs are essential components for modern web application development. By following best practices in request management, security, and monitoring, we can deliver a smooth and high-performance user experience. I am thrilled to start this new chapter with the HNG internship and look forward to seeing where this journey takes me.
Thank you for reading, and I hope my experience inspires you to tackle your own challenges with determination and creativity. Let's continue pushing the boundaries of what is possible in technology!
Happy coding!
| cm_dav_c07dd21b43f0b869c5 |
1,911,801 | Navigating Seasonal Comfort: The Essential Role of an HVAC Contractor in White Plains | Maintaining a comfortable home environment is a year-round priority for residents in White Plains.... | 0 | 2024-07-04T17:01:27 | https://dev.to/jasonscharkopf/navigating-seasonal-comfort-the-essential-role-of-an-hvac-contractor-in-white-plains-3b8m | Maintaining a comfortable home environment is a year-round priority for residents in White Plains. Whether bracing against the cold winter winds or seeking refuge from the summer heat, homeowners rely on their heating, ventilation, and air conditioning (HVAC) systems to provide consistent comfort. This is where the expertise of a skilled **HVAC contractor in White Plains** comes into play, offering essential services that keep these complex systems running efficiently.
Expert Installation Services
The first step to ensuring a comfortable home environment starts with the proper installation of HVAC units. An **HVAC contractor in White Plains** specializes in fitting homes with high-quality heating and cooling systems designed to meet the unique needs of each space. Correct installation is paramount, as it significantly influences the performance and longevity of the system. A knowledgeable contractor will consider factors such as size, energy efficiency, and your home's layout to determine the best unit for your space.
Regular Maintenance and Tune-Ups
Just like any other major appliance or vehicle, HVAC systems require regular check-ups to operate at their best. An **HVAC contractor in White Plains** provides routine maintenance services, which can help prevent unexpected breakdowns that often occur during extreme weather conditions when systems are working overtime. These tune-ups not only ensure your system runs smoothly but can also extend its life span and help maintain energy efficiency.
Repair Services That Restore Comfort
Even with diligent maintenance, time and usage can lead to wear and tear on HVAC components. When this happens, immediate repair services are necessary to restore your home's comfort levels. An experienced **HVAC contractor in White Plains** possesses the diagnostic skills needed to quickly identify issues within your system—whether it’s a malfunctioning thermostat or a more complex problem—and provide timely repairs using quality parts.
Upgrades and Replacements
As technology advances, so do HVAC systems, with newer models offering greater efficiency and enhanced features that improve indoor air quality and overall comfort. There comes a time when upgrading or replacing an old system is more cost-effective than continuing to repair it. A trustworthy **HVAC contractor in White Plains** can guide homeowners through selecting an appropriate upgrade by considering factors such as current energy consumption patterns and potential savings over time.
Emergency Services When You Need Them Most
Emergencies don't wait for business hours; they happen unexpectedly and often at the least convenient times. That's why having access to an [HVAC contractor in White Plains](https://www.buildzoom.com/contractor/jls-mechanical-llc-ny) who offers emergency services is crucial for peace of mind. Quick responses can make all the difference between a minor inconvenience and a significant disruption during seasonal extremes.
Personalized Solutions for Every Home
No two homes are exactly alike, which means one-size-fits-all solutions rarely apply when it comes to HVAC concerns. A qualified **HVAC contractor in White Plains** understands this reality well and works closely with clients to offer personalized recommendations tailored to their homes' layouts, existing infrastructure, budgets, and unique comfort needs.
In conclusion, partnering with an experienced **HVAC contractor in White Plains** ensures residents have access to the comprehensive services necessary for maintaining optimal indoor environments through each season's challenges. From initial installation through ongoing maintenance programs and reliable repair work, their expertise remains critical in keeping our spaces cozy year-round, without disruption or discomfort due to poorly performing systems.
[JLS Mechanical, LLC](https://jlsmechanicalsolutions.com/)
28 Edgewold Rd, White Plains, New York 10607
914-243-1212 | jasonscharkopf |
|
1,911,800 | Installing Ruby and Rails on WSL2 using ASDF-VM | In this short tutorial i'm using ubuntu 22.04lts on WSL2 Pre-reqs You need the asdf-vm... | 0 | 2024-07-04T17:01:23 | https://dev.to/cleverlopes/installing-ruby-and-rails-on-wsl2-using-asdf-vm-3o72 | rails, ruby, linux | In this short tutorial I'm using Ubuntu 22.04 LTS on WSL2.
## Pre-reqs
You need [asdf-vm](https://asdf-vm.com/guide/getting-started.html) and the development packages installed.
## Getting Started
Install the ruby plugin
```
asdf plugin add ruby
```
Install the desired Ruby version
```
asdf install ruby x.x.x
```
Set the global version
```
asdf global ruby x.x.x
```
Install Bundler and Rails
```
gem install bundler
```
```
gem install rails
```
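If everything went well, the ruby and rails executables should now resolve through asdf's shims. A quick sanity check (the exact output depends on the versions you installed):
```
ruby -v
rails -v
```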
To set up your Rails development environment, you can see this [post](https://dev.to/cleverlopes/how-to-set-up-a-development-env-to-rails-app-with-docker-and-postgresql-on-wsl2-4j1g).
**Note:**
If you have problems installing a Ruby version with asdf, make sure the following packages are installed: libyaml-dev, libtool, zlib1g, zlib1g-dev, and libffi.
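On Ubuntu these can usually be pulled in with apt; note that the libffi headers ship in the libffi-dev package, which is presumably what is meant by libffi above:
```
sudo apt update
sudo apt install -y libyaml-dev libtool zlib1g zlib1g-dev libffi-dev
```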
| cleverlopes |