Please explain in more detail how the authentication is not working. Is there any error? Try the F12 developer tools to check. And please make sure you are not using the external authentication services.
As far as I know, when we create a .NET Core application via Visual Studio and select Individual User Accounts (using Identity), it will generally force the usage of HTTPS for best-practice reasons.
In this scenario, to disable HTTPS, we could refer to the following steps:
Remove UseHttpsRedirection (and UseHsts) from Startup.cs:
app.UseHsts();
app.UseHttpsRedirection();
Right-click the project, click Properties, and in the Debug tab uncheck the Enable SSL option. Then the application will be launched using HTTP requests.
If you are not using Visual Studio, you can also remove the SSL references from the launchSettings.json file:
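For reference, a trimmed launchSettings.json might look like the sketch below after the SSL entries are removed (port numbers and the profile name are illustrative):
{
  "iisSettings": {
    "iisExpress": {
      "applicationUrl": "http://localhost:51234",
      "sslPort": 0
    }
  },
  "profiles": {
    "MyApp": {
      "commandName": "Project",
      "launchBrowser": true,
      "applicationUrl": "http://localhost:5000",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}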
Besides, please make sure you are not using external authentication services in your application. I created a new .NET Core application, used the method above to disable HTTPS, and could then use Identity to register a new user and log in. |
I would suggest using the default Laravel authentication mechanism so you don't have to mess with session handling etc.
You can remove all the default authentication routes and views. (If you've already scaffolded your application with the auth routes, remove Auth::routes() from your routes file.)
Then add one user to the database, perhaps called "Intranet user". It doesn't matter what you set as password/email. You can create this user with artisan tinker (like this).
Create a view with a form that only asks for the password and makes a POST request to a custom controller (for example LoginController). This controller simply checks if the password is correct, and logs in the user with: Auth::login($user).
<?php
namespace App\Http\Controllers;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Auth;
use App\Http\Controllers\Controller;
use App\User;
class LoginController extends Controller
{
public function showLoginForm()
{
return view("auth.login"); // Path to your login view (login form)
}
public function login(Request $request)
{
$password = $request->input('password');
if ($password === "your-predefined-password") {
$user = User::findOrFail(1); // The ID of the only user in the system
Auth::login($user); // Log the user in
// Redirect somewhere
return redirect()->intended('dashboard');
}
// If the password didn't match, redirect back to the login page
return redirect('login')->with('error', 'Wrong password!');
}
}
routes.php:
Route::get('/login', 'LoginController@showLoginForm');
Route::post('/user', 'LoginController@login');
To limit the session lifetime, open up config/session.php and adjust the lifetime to 240 minutes (= 4 hours):
'lifetime' => env('SESSION_LIFETIME', 240),
Alternative solution:
Instead of hardcoding the password in LoginController, create a password and email for the user. Then make a login attempt like this:
$credentials = [
'email' => 'intranet@intranet',
'password' => $request->input('password')
];
if (Auth::attempt($credentials)) {
return redirect()->intended('dashboard');
}
|
After taking a look at io.micronaut.security.authentication.Authenticator I've seen it's possible to have multiple authenticationProviders in Micronaut.
The documentation says:
An Authenticator operates on several {@link AuthenticationProvider} instances returning the first authenticated {@link AuthenticationResponse}.
From what I've seen you just have to implement AuthenticationProvider and the Authenticator will include the implementations (even if it isn't annotated!) in an internal list of AuthenticationProviders.
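For reference, a minimal provider sketch (Micronaut Security 1.x style signatures; the DB lookup is a placeholder for your own logic):
import io.micronaut.security.authentication.*;
import io.reactivex.Flowable;
import org.reactivestreams.Publisher;

import javax.inject.Singleton;
import java.util.Collections;

@Singleton
public class DbAuthenticationProviderA implements AuthenticationProvider {

    @Override
    public Publisher<AuthenticationResponse> authenticate(AuthenticationRequest authenticationRequest) {
        String identity = (String) authenticationRequest.getIdentity();
        String secret = (String) authenticationRequest.getSecret();
        // Hypothetical DB check; exactly the kind of call you don't want executed needlessly.
        if (credentialsMatchInDb(identity, secret)) {
            return Flowable.just(new UserDetails(identity, Collections.emptyList()));
        }
        return Flowable.just(new AuthenticationFailed());
    }

    private boolean credentialsMatchInDb(String identity, String secret) {
        return false; // placeholder for your user store lookup
    }
}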
IMHO this isn't a good way to provide multiple ways to authenticate. In the example provided in the question, the authentication for A and B both require calls to the DB, which means that, depending on the execution order of the AuthenticationProviders, unneeded DB calls will be made.
I think it would be better to provide a way to indicate which AuthenticationProviders have to be used per controller or endpoint.
Maybe there is a way to do that and I just don't know, so feel free to comment if so. |
According to your description, I suggest you try policy-based authorization to achieve your requirement.
You can write a custom handler that compares the appsettings.json value with the route values.
If the appsettings.json value contains the requested route's controller, you can call Succeed on the requirement to skip auth.
For more details, refer to the code below.
appsettings.json (DisableAuthController value):
{
"Logging": {
"LogLevel": {
"Default": "Information",
"Microsoft": "Warning",
"Microsoft.Hosting.Lifetime": "Information"
},
"DisableAuthController": "home,default,account"
}
}
Create a new class named UserResourceHandler:
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.AspNetCore.Routing;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Localization.Internal;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
namespace FormAuthCore
{
public class UserResourceHandler : AuthorizationHandler<UserResourceRequirement>
{
private readonly IConfiguration Configuration;
private readonly IHttpContextAccessor _httpContextAccessor;
public UserResourceHandler(IConfiguration configuration, IHttpContextAccessor httpContextAccessor)
{
Configuration = configuration;
_httpContextAccessor = httpContextAccessor ?? throw new ArgumentNullException(nameof(httpContextAccessor));
}
protected override async Task HandleRequirementAsync(AuthorizationHandlerContext authHandlerContext, UserResourceRequirement requirement)
{
var re = Configuration.GetSection("DisableAuthController").Value;
var context = _httpContextAccessor.HttpContext.GetRouteData();
var area = (context.Values["area"] as string)?.ToLower();
var controller = (context.Values["controller"] as string)?.ToLower();
var action = (context.Values["action"] as string)?.ToLower();
if (re.Contains(controller))
{
authHandlerContext.Succeed(requirement);
}
}
}
}
Create a UserResourceRequirement class
using Microsoft.AspNetCore.Authorization;
namespace FormAuthCore
{
public class UserResourceRequirement : IAuthorizationRequirement { }
}
Add the code below to the Startup.cs ConfigureServices method:
services.AddTransient<IHttpContextAccessor, HttpContextAccessor>();
services.AddAuthorization(options =>
{
options.AddPolicy("UserResource", policy => policy.Requirements.Add(new UserResourceRequirement()));
});
services.AddScoped<IAuthorizationHandler, UserResourceHandler>();
Enable Policy-based authorization for the controller:
[Authorize(Policy = "UserResource")]
public class HomeController : Controller
Update:
I added a JWT bearer authentication scheme named Token1 in Startup.cs:
services.AddAuthentication("Token1")
.AddJwtBearer("Token1", options =>
{
options.TokenValidationParameters = new TokenValidationParameters()
{
ValidateIssuer = true,
ValidIssuer = "abc",
ValidateAudience = true,
ValidAudience = "abc",
ValidateIssuerSigningKey = true,
IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes("aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa")),
};
options.Events = new JwtBearerEvents()
{
OnMessageReceived = context =>
{
var Token = context.Request.Headers["UserCred1"].ToString();
context.Token = Token;
return Task.CompletedTask;
},
};
});
services.AddAuthorization(options =>
{
options.AddPolicy("UserResource", policy => policy.Requirements.Add(new UserResourceRequirement()));
});
services.AddScoped<IAuthorizationHandler, UserResourceHandler>();
Then I could use _httpContextAccessor.HttpContext.AuthenticateAsync("Token1").Result to check if the token is valid or not.
protected override async Task HandleRequirementAsync(AuthorizationHandlerContext authHandlerContext, UserResourceRequirement requirement)
{
var re = Configuration.GetSection("DisableAuthController").Value;
var context = _httpContextAccessor.HttpContext.GetRouteData();
var re2 = _httpContextAccessor.HttpContext.AuthenticateAsync("Token1").Result;
var area = (context.Values["area"] as string)?.ToLower();
var controller = (context.Values["controller"] as string)?.ToLower();
var action = (context.Values["action"] as string)?.ToLower();
if (re.Contains(controller) || re2.Succeeded)
{
authHandlerContext.Succeed(requirement);
}
}
|
Not sure I understand correctly, so I'll try to summarize. You want to:
retrieve a token from Okta with your SPA,
exchange that token for a custom token generated by your API,
use the latter to communicate with the API.
Using 2 WebSecurityConfigurerAdapter implementations is one way to go about it and I think it's a good one. You would need 1 WebSecurityConfigurerAdapter that delegates to Okta and verifies that the token can be trusted on the exchange endpoint. Once it's verified, you can generate and return a token to the user.
The other WebSecurityConfigurerAdapter would be a simple one and you can find plenty of resources about it on the internet, but basically for each secure endpoint you need to verify the token.
I'm confused about the security filter details. Should I override one?
You can extend OncePerRequestFilter; as the name implies, it will be invoked at most once per request. It would verify that the request contains the header and that the token is valid, map it to an Authentication, and place it in the SecurityContextHolder.
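A minimal sketch of such a filter (the header name and the token helpers are assumptions; adapt them to your token format):
import java.io.IOException;
import java.util.List;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.web.filter.OncePerRequestFilter;

public class CustomTokenFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        String token = request.getHeader("Authorization"); // header name assumed
        if (token != null && isValid(token)) {
            UsernamePasswordAuthenticationToken auth = new UsernamePasswordAuthenticationToken(
                    principalOf(token), null, authoritiesOf(token));
            SecurityContextHolder.getContext().setAuthentication(auth);
        }
        chain.doFilter(request, response);
    }

    private boolean isValid(String token) { return false; }          // your verification logic
    private Object principalOf(String token) { return null; }        // your principal mapping
    private List<GrantedAuthority> authoritiesOf(String token) { return List.of(); } // your roles
}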
Where do I insert it?
You can insert it before/after the UsernamePasswordAuthenticationFilter, but you should probably read about how the Spring Security architecture works to get a good idea. |
import GoogleSignIn
import GoogleAPIClientForREST
private let scopes = [kGTLRAuthScopeYouTube,
kGTLRAuthScopeYouTubeForceSsl,
kGTLRAuthScopeYouTubeUpload,
kGTLRAuthScopeYouTubeYoutubepartner]
private var service = GTLRYouTubeService()
private let youtubeObject = GTLRYouTube_Video()
func signInYoutube(){
GIDSignIn.sharedInstance()?.presentingViewController = self
GIDSignIn.sharedInstance()?.clientID = "Your_client_id"
GIDSignIn.sharedInstance().delegate = self
GIDSignIn.sharedInstance().scopes = scopes
if GIDSignIn.sharedInstance()?.hasPreviousSignIn() ?? false {
GIDSignIn.sharedInstance()?.restorePreviousSignIn()
} else {
GIDSignIn.sharedInstance()?.signIn()
}
}
func sign(_ signIn: GIDSignIn!, didSignInFor user: GIDGoogleUser!, withError error: Error!) {
if let error = error {
self.service.authorizer = nil
} else {
self.service.authorizer = user.authentication.fetcherAuthorizer()
uploadVideoOnYoutube()
}
}
func uploadVideoOnYoutube() {
guard let videoUrl = Bundle.main.url(forResource: "sample_iTunes", withExtension: "mov") else { return }
//Status
let status = GTLRYouTube_VideoStatus()
status.privacyStatus = kGTLRYouTube_ChannelStatus_PrivacyStatus_Public
//Snippet
let snippet = GTLRYouTube_VideoSnippet()
snippet.title = "YOUR_VIDEO_TITLE"
//Upload parameters
let params = GTLRUploadParameters.init(fileURL: videoUrl, mimeType: "video/mov")
//YouTube Video object
youtubeObject.status = status
youtubeObject.snippet = snippet
let query = GTLRYouTubeQuery_VideosInsert.query(withObject: youtubeObject, part: "snippet,status", uploadParameters: params)
service.executeQuery(query, completionHandler: { (ticket, anyobject, error) in
if error == nil {
if let videoObject = anyobject as? GTLRYouTube_Video {
print(videoObject.identifier ?? "upload")
}
} else {
print(error?.localizedDescription ?? "Unknown upload error")
}
})
}
|
You should provide the full stack trace.
When the stack trace contains:
at oracle.jdbc.driver.T4CConnection.logon
Then the connection was reset by the database during authentication (not by any networking device along the way).
The root cause of such a situation (logon storm) is actually a problem on the client's side: due to a lack of random numbers, the client was not able to authenticate itself fast enough. When this happens you can:
Tune kernel parameters to hold more random numbers in the buffer (client)
Use the trick where the JVM uses another /dev/*random device (client; see the example command after this list)
Change the parameter SQLNET.INBOUND_CONNECT_TIMEOUT to extend the time window during which the server waits for the client (DB server)
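For example, that JVM trick typically means starting the Java process with the flag below (note the PS at the end of this answer about the exact device path):
java -Djava.security.egd=file:/dev/./urandom -jar yourapp.jar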
But there can be other reasons why you receive this error.
PS: the JVM has explicit protection against the 2nd "solution" (Java bug 6202721), as it compromises security. So if you set "java.security.egd=file:/dev/urandom" it will intentionally be ignored, while the device name "java.security.egd=file:/dev/./urandom" is not blacklisted. |
As @nealio82 and @lavb said, you should have a look at Gedmo\Blameable, which helps you handle properties such as createdBy or updatedBy, where you can store the User who created the resource.
Blameable
StofDoctrineExtensionsBundle
Then, to handle access, have a look at Voters, which are awesome for handling security and different access levels.
Official Symfony documentation about Voters
e.g
Book entity
...
use Gedmo\Mapping\Annotation as Gedmo;
class Book {
...
/**
* @var User $owner
*
* @Gedmo\Blameable(on="create")
* @ORM\ManyToOne(targetEntity=User::class)
*/
public User $owner;
public function getOwner() {
return $this->owner;
}
public function setOwner(User $owner) {
$this->owner = $owner;
}
}
src/Security/Voter/BookVoter
namespace App\Security;
use App\Entity\Book;
use App\Entity\User;
use Symfony\Component\Security\Core\Authentication\Token\TokenInterface;
use Symfony\Component\Security\Core\Authorization\Voter\Voter;
class BookVoter extends Voter
{
const VIEW = 'view';
const EDIT = 'edit';
protected function supports(string $attribute, $subject)
{
// if the attribute isn't one we support, return false
if (!in_array($attribute, [self::VIEW, self::EDIT])) {
return false;
}
// only vote on `Book` objects
if (!$subject instanceof Book) {
return false;
}
return true;
}
protected function voteOnAttribute(string $attribute, $subject, TokenInterface $token) {
$user = $token->getUser();
if (!$user instanceof User) {
// the user must be logged in; if not, deny access
return false;
}
/** @var Book $book */
$book = $subject;
switch ($attribute) {
case self::VIEW:
return $this->canView($book, $user);
case self::EDIT:
return $this->canEdit($book, $user);
}
throw new \LogicException('This code should not be reached!');
}
private function canEdit(Book $book, User $user) {
// ONLY OWNER CAN EDIT BOOK
return $user === $book->getOwner();
}
private function canView(Book $book, User $user) {
// DIFFERENT LOGIC ?
return $user === $book->getOwner();
}
...
}
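Then, from a controller, you can rely on the voter. A minimal sketch (the attribute string must match the voter's constants; route and entity wiring are assumed):
// src/Controller/BookController.php (sketch, extending AbstractController)
public function edit(Book $book): Response
{
    // Delegates to BookVoter; throws an AccessDeniedException (403) if voting fails.
    $this->denyAccessUnlessGranted('edit', $book);

    // ... only the owner reaches this point
}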
|
This may be due to the Unix Socket authentication plugin being used by the root user.
As the documentation for the plugin elaborates:
The unix_socket authentication plugin allows the user to use operating system credentials when connecting to MariaDB via the local Unix socket file.
The unix_socket authentication plugin works by calling the getsockopt system call with the SO_PEERCRED socket option, which allows it to retrieve the uid of the process that is connected to the socket. It is then able to get the user name associated with that uid. Once it has the user name, it will authenticate the connecting user as the MariaDB account that has the same user name.
Assuming that you aren't logged in as root in your shell session, by running sudo mysql -u root -p you execute the command as root, and that's why you are not bypassing authentication; it's just using the socket authentication as intended. It does not require a password since the OS user matches the MySQL user.
You can check whether root uses unix_socket authentication as follows:
MariaDB [(none)]> SELECT User, Host, plugin FROM mysql.user;
+------+-----------+-------------+
| User | Host | plugin |
+------+-----------+-------------+
| root | localhost | unix_socket |
+------+-----------+-------------+
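If you would rather have root authenticate with a password instead, you can switch plugins; a sketch using standard MariaDB syntax (choose your own password):
ALTER USER 'root'@'localhost' IDENTIFIED VIA mysql_native_password USING PASSWORD('my-secret');
FLUSH PRIVILEGES;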
I also suggest you check this other question, which addresses the same situation on MySQL. |
Disclaimer: I have never heard of GitHub Packages before.
According to the example on the GitHub Packages website that shows some Docker CLI commands including docker login below,
$ docker login docker.pkg.github.com --username phanatic
Logged in successfully
$ docker tag app docker.pkg.github.com/phanatic/repo/app:1.0
$ docker push docker.pkg.github.com/phanatic/repo/app:1.0.0
I think MY_REGISTRY should be docker.pkg.github.com and MY_USERNAME should be your username (phanatic in the example above). Also, your <to><image> (the target Docker image name) should start with docker.pkg.github.com/<your-username>/..., as above.
The GitHub Packages docs (here and here) seem to suggest that you can use GITHUB_TOKEN as a password in GitHub Actions. I strongly recommend you encrypt the password value (GITHUB_TOKEN) for <password> in settings.xml. See the Maven doc for how to do so. You will need to create settings-security.xml.
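For illustration, the <server> entry in settings.xml might look like the sketch below (the id is an assumption and has to match whatever server id your build references; the password value is the token encrypted with the master password from settings-security.xml):
<settings>
  <servers>
    <server>
      <id>docker.pkg.github.com</id>
      <username>phanatic</username>
      <!-- encrypted with the master password from settings-security.xml -->
      <password>{COQLCE6DU6GtcS5P=}</password>
    </server>
  </servers>
</settings>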
Before using settings.xml and settings-security.xml, I would first locally test the username and GITHUB_TOKEN combination with <to><auth><username> and <to><auth><password> (unencrypted) for the purpose of checking if these values work. |
JSON Web Tokens are a very specific format for a bearer token. There are protocols like OpenID Connect that provide more structure around the login and trust process, but at their heart JWTs are just Base64-encoded JSON with a verification hash.
You can roll your own SSO with JWT but as with everything in security, rolling your own comes with significant risks of making a bone head mistake and compromising your security. So research research and research some more if you take this route.
I did a very similar thing but stayed purely in the .NET world. I used a .NET library to build the JWT (https://docs.microsoft.com/en-us/previous-versions/visualstudio/dn464181(v%3Dvs.114)) and ASP.NET Core Identity to handle verification of the JWT (https://www.nuget.org/packages/Microsoft.AspNetCore.Authentication.JwtBearer), so I didn't write the code to actually generate the JWT. There are also only SSL connections made between the servers, so some of the risk of the token getting sniffed is mitigated.
There are libraries for PHP to generate JWT or you could stand up your own JWT token provider in any language.
There may also be the possibility of finding an OpenID Connect provider that can hook into your existing database. IdentityServer4 is one for .NET, but there may be one to be found in the PHP world. This introduces some overhead but does solve the problem of not having a third-party OpenID Connect provider.
It's not too terrible, but security is one place where you want to be absolutely sure you get things right. |
I don't see where you're actually subscribing to the user$ observable. Looking at this, it should work just fine assuming you have the proper imports and subscribe to user$. Login and getting the user state aren't coupled, so I'm guessing you're just missing the subscribing bit.
import {Injectable} from '@angular/core';
import {AngularFireAuth} from '@angular/fire/auth';
import {AngularFirestore} from '@angular/fire/firestore';
import {BehaviorSubject, Observable, of} from 'rxjs';
import {map, switchMap} from 'rxjs/operators';
import {User} from 'firebase';
// UserCredential and COLLECTIONS are assumed to be imported/defined elsewhere in your app.
export interface FirebaseUser {
readonly uid: string;
readonly email: string;
readonly emailVerified: boolean;
}
export interface IUser {
id?: string;
email: string;
name: string;
gender: string;
}
@Injectable()
export class AuthService {
// To have the user's auth data from Firebase Authentication.
private bsCurrentUserAuth: BehaviorSubject<FirebaseUser> = new BehaviorSubject<FirebaseUser>(null);
readonly currentUserAuth$: Observable<FirebaseUser> = this.bsCurrentUserAuth.asObservable();
// To have the user's document data.
private bsCurrentUser: BehaviorSubject<IUser> = new BehaviorSubject<IUser>(null);
readonly currentUser$: Observable<IUser> = this.bsCurrentUser.asObservable();
private user$: Observable<IUser>;
constructor(
private afAuth: AngularFireAuth,
private afs: AngularFirestore,
) {
// Subscribe to the auth state, then get firestore user document || null
this.user$ = this.afAuth.authState.pipe(
switchMap(user => {
// Logged in
if (user) {
return this.afs.doc<IUser>(`${COLLECTIONS.USERS}/${user.uid}`).snapshotChanges()
.pipe(
map(changes => {
const data = changes.payload.data() as IUser;
const id = changes.payload.id;
const docData = {id, ...data};
this.saveUser(user, docData); // This is not being called.
return docData;
}));
} else {
// Logged out
this.clearAll();
return of(null);
}
})
);
// You need to actually subscribe to the user state; be sure to also unsubscribe when this is destroyed if not a singleton service.
this.user$.subscribe();
}
// This is not being called.
private saveUser(user: User, userDoc: IUser) {
console.log('saveUser');
this.bsCurrentUserAuth.next(user);
this.bsCurrentUser.next(userDoc);
}
private clearAll() {
this.bsCurrentUserAuth.next(null);
this.bsCurrentUser.next(null);
}
login(email: string, password: string): Promise<UserCredential> {
return this.afAuth.auth.signInWithEmailAndPassword(email, password);
}
}
|
I already posted an answer on your other question, but I see a number of misconceptions here that I considered worth addressing.
I have aspirations of installing this app at multiple companies. It will run on each company's intranet yet I do want the application to be accessible remotely (by setting up some port forwarders on the corporate firewall).
Note that security-wise this is practically identical to just running your application on the internet. Trying to rely on your application being hidden on some non-standard port is known as "security through obscurity", and it is a false sense of security.
If I want these warning to go away, will I have to buy a separate SSL key for each intranet the app runs on in order to prevent the user from ever seeing these warnings?
You'll need a certificate for your application, and you need visitors to trust your certificate. You can do this in two ways:
Obtain certificates from a CA, which are trusted out of the box.
Make your own certificates, and distribute them
Manually
Automated through some centralized means
These options are answered in more detail in your other question.
Can I make the expiration of these keys year 3000 (so they never practically expire)?
If you make your own certificates, you can. But you shouldn't. The cryptography behind certificates ages, attackers get more powerful, machines get broken into, keys get stolen. In theory, we have revocation for that; in practice it's... hairy. And that assumes you know there's an issue. Having certificates naturally expire after some time eases these problems a little. For that reason, a certificate that expires in the year 3000 is highly discouraged.
If you obtain certificates from a CA, you can't because they won't give you one, for the reasons above. Typical lifespan for certificates is one year. In fact, browsers are moving to block long-lived certificates.
I like https, but I despise paying money for official keys.
Then use a free certificate. They are offered by several parties. I like Let's Encrypt.
Here's why: With self generated keys, the encryption of the communication is just as secure as a purchased key.
No, it's not. At least, not if the browser complains.
The encryption of the channel is only secure if the certificate can be trusted. If you use a self-signed certificate, the browser has no way to trust the certificate (hence the warnings), which means a man-in-the-middle attack is possible just like it is for plain HTTP. The attacker can simply substitute their own certificate; your client has no way of knowing the certificate has been forged.
This is where CAs come in; they offer a way for browsers to have reasonable trust in a certificate.
You can avoid this problem by distributing your self-signed certificate yourself; if you do that properly, the connections will be perfectly secure. But this is more work, and scales poorly, which is why we use CAs for non-trivial situations.
Yet, the browser makes the user think he's going to get a virus if he accepts my generated certificate. The browser treats everything like online banking, when sometimes you have other reason for encryption. Ok, enough complaining.
Browsers are very right to be vocal about this; blindly accepting untrusted certificates gives a false sense of security; it's barely better than plain HTTP. It is rarely the right thing to do, and it almost always indicates a serious problem. Scaring users away is for the best.
So, ideally, I want one a key I can generate myself (to avoid fees), or maybe one key I purchase
You always generate the key yourself; I think you mean the certificate ;)
Anyway, use a CA. There are free ones available.
but I want that key to practically never expire
Ain't gonna happen.
and I want it to serve multiple installations (at different companies) for my app.
Bad idea; you want to isolate your customers from a compromise at another customer as much as possible, especially if you're going to distribute your own certificates.
I'd like the key to not care about domain names.
Same as above.
I want encrypted communication, but verification that I am who I say I am is not important to me at all.
The verification is integral to the encrypted communication; it is not optional.
How can I deploy an app like this in manner that will avoid browser warnings?
This is answered in your other question. |
Use @PreAuthorize, which allows you to define a SpEL expression that will be evaluated as a boolean to decide whether a method is allowed to be executed.
You have several options:
(1) Use SpEL to refer to a bean method that performs the check:
@PreAuthorize("@authzService.isAllowToDo(#deliveryAddressId)")
public ResponseEntity<BasicDeliveryAddress> getDeliveryAddressById(Long deliveryAddressId) {
}
@Service
public class AuthzService{
public boolean isAllowToDo(Long deliveryAddressId){
//Do the checking here....
}
}
(2) Use the built-in hasPermission expression :
@PreAuthorize("hasPermission(#deliveryAddressId, 'read')")
public ResponseEntity<BasicDeliveryAddress> getDeliveryAddressById(Long deliveryAddressId) {
}
It requires customising a PermissionEvaluator to work. Same idea as (1), but it is a built-in solution.
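A rough sketch of such a customisation (the ownership check is a placeholder, and you still need to register the evaluator on the method-security expression handler):
import java.io.Serializable;
import org.springframework.security.access.PermissionEvaluator;
import org.springframework.security.core.Authentication;
import org.springframework.stereotype.Component;

@Component
public class CustomPermissionEvaluator implements PermissionEvaluator {

    @Override
    public boolean hasPermission(Authentication authentication, Object targetDomainObject, Object permission) {
        // hasPermission(#deliveryAddressId, 'read') lands here:
        // targetDomainObject = deliveryAddressId, permission = "read".
        Long deliveryAddressId = (Long) targetDomainObject;
        return isOwnedByCurrentUser(authentication, deliveryAddressId);
    }

    @Override
    public boolean hasPermission(Authentication authentication, Serializable targetId,
                                 String targetType, Object permission) {
        return false; // not used in this example
    }

    private boolean isOwnedByCurrentUser(Authentication authentication, Long id) {
        return false; // placeholder: load the address and compare its owner with the principal
    }
}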
(3) If the evaluation logic is simple and the method signature allows it, you can express it directly in SpEL:
@PreAuthorize("#deliveryAddress.userId == authentication.userId")
public ResponseEntity<BasicDeliveryAddress> getDeliveryAddressById(BasicDeliveryAddress deliveryAddress){
}
authentication is one of the built-in expressions for accessing the Authentication object in the SecurityContextHolder, and I assume you have already customised it to include userId. |
Basically, I am loading URL which redirects to another URL. In the
second URL, there is a form where I need to input username and
password to authenticate
From your description it sounds like you are accessing an API using OAuth2 with the authorization code grant flow, which by design requires the user (= resource owner) to authorise (via the form) your app (= client) to access his/her data provided by the API (= resource server). In this case, using basic auth (Attempts 1 and 2) will not help you, as the API expects a token, not a username/password. You'd probably need a refresh token that does not expire, which would allow your client to request a fresh access token each time it wants to access the API.
It all depends on the authentication mechanism used by your API. I'd first figure out whether your API is indeed using OAuth2, and if so, learn about authentication flows (e.g. https://www.udemy.com/course/learn-oauth-2/). The client credentials flow is probably what you'd want if the API allows for it.
Update: Attempt 3 might be worth a try; I've never done it though. You might be able to send the credentials by submitting the corresponding form data via Python requests. That should, in theory, provide you with an authorization code which you can use to get a token... |
TL;DR:
Calling db.collection() immediately after connection only works in versions of the driver less than 3.0.
Details:
Firstly, the official examples you cited were from MongoDB driver version 1.4.9; the driver is now at version 3.5.8. I would suggest you check out the latest documentation and examples here.
To clarify the confusion, the database path specified in the connection URI is the authentication database i.e the database used to log in, this is true even for the 1.4.9 version of the driver - reference.
However, the reason for the difference you mentioned, i.e. being able to call db.collection() immediately after connecting in some cases, is a result of the change to the MongoClient class in version 3 of the driver - reference.
Before version 3, MongoClient.connect would return a DB instance to its callback function, and this instance would reference the database specified in the path of the connection URI, so you could call db.collection() straight away:
MongoClient.connect("<connection_URI>", function(err, db) {
// db is a DB instance, so I can access my collections straight away:
db.collection('sample_collection').find();
});
However, an update was made in version 3 such that MongoClient.connect now returns a MongoClient instance, not a DB instance - reference:
MongoClient.connect("<connection_URI>", function(err, client) {
// client is a MongoClient instance, you would have to call
// the Client.db() method to access your database
const db = client.db('sample_database');
// Now you can access your collections
db.collection('sample_collection').find();
});
|
Deprecated: see the updated solution in the original post.
Until official reactive AuditorAware support is provided, there is an alternative: implement auditing via the Spring Data Mongo specific ReactiveBeforeConvertCallback.
Do not use @EnableMongoAuditing
Implement your own ReactiveBeforeConvertCallback; here I use a PersistentEntity interface for the entities that need to be audited.
public class PersistentEntityCallback implements ReactiveBeforeConvertCallback<PersistentEntity> {
@Override
public Publisher<PersistentEntity> onBeforeConvert(PersistentEntity entity, String collection) {
var user = ReactiveSecurityContextHolder.getContext()
.map(SecurityContext::getAuthentication)
.filter(it -> it != null && it.isAuthenticated())
.map(Authentication::getPrincipal)
.cast(UserDetails.class)
.map(userDetails -> new Username(userDetails.getUsername()))
.switchIfEmpty(Mono.empty());
var currentTime = LocalDateTime.now();
if (entity.getId() == null) {
entity.setCreatedDate(currentTime);
}
entity.setLastModifiedDate(currentTime);
return user
.map(u -> {
if (entity.getId() == null) {
entity.setCreatedBy(u);
}
entity.setLastModifiedBy(u);
return entity;
}
)
.defaultIfEmpty(entity);
}
}
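For Spring Data to invoke the callback, it has to be registered as a bean; a minimal sketch (the configuration class name is arbitrary):
@Configuration
public class AuditingConfig {

    @Bean
    public PersistentEntityCallback persistentEntityCallback() {
        return new PersistentEntityCallback();
    }
}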
Check the complete codes here. |
You are setting your access token with the refresh token. You should be using:
$client->fetchAccessTokenWithRefreshToken($client->getRefreshToken());
oauth2callback.php
require_once __DIR__ . '/vendor/autoload.php';
require_once __DIR__ . '/Oauth2Authentication.php';
// Start a session to persist credentials.
session_start();
Oauth2Authentication.php
require_once __DIR__ . '/vendor/autoload.php';
/**
* Gets the Google client refreshing auth if needed.
* Documentation: https://developers.google.com/identity/protocols/OAuth2
* Initializes a client object.
* @return A google client object.
*/
function getGoogleClient() {
$client = getOauth2Client();
// Refresh the token if it's expired.
if ($client->isAccessTokenExpired()) {
$client->fetchAccessTokenWithRefreshToken($client->getRefreshToken());
// $credentialsPath is assumed to be defined elsewhere and to point at your stored credentials file.
file_put_contents($credentialsPath, json_encode($client->getAccessToken()));
}
return $client;
}
/**
* Builds the Google client object.
* Documentation: https://developers.google.com/identity/protocols/OAuth2
* Scopes will need to be changed depending upon the API's being accessed.
* Example: array(Google_Service_Analytics::ANALYTICS_READONLY, Google_Service_Analytics::ANALYTICS)
* List of Google Scopes: https://developers.google.com/identity/protocols/googlescopes
* @return A google client object.
*/
function buildClient(){
$client = new Google_Client();
$client->setAccessType("offline"); // offline access. Will result in a refresh token
$client->setIncludeGrantedScopes(true); // incremental auth
$client->setAuthConfig(__DIR__ . '/client_secrets.json');
$client->addScope([YOUR SCOPES HERE]);
$client->setRedirectUri(getRedirectUri());
return $client;
}
/**
* Builds the redirect uri.
* Documentation: https://developers.google.com/api-client-library/python/auth/installed-app#choosingredirecturi
* Hostname and current server path are needed to redirect to oauth2callback.php
* @return A redirect uri.
*/
function getRedirectUri(){
//Building Redirect URI
$url = $_SERVER['REQUEST_URI']; //returns the current URL
if(strrpos($url, '?') > 0)
$url = substr($url, 0, strrpos($url, '?') ); // Removing any parameters.
$folder = substr($url, 0, strrpos($url, '/') ); // Removeing current file.
return (isset($_SERVER['HTTPS']) ? "https" : "http") . '://' . $_SERVER['HTTP_HOST'] . $folder. '/oauth2callback.php';
}
/**
* Authenticating to Google using Oauth2
* Documentation: https://developers.google.com/identity/protocols/OAuth2
* Returns a Google client with refresh token and access tokens set.
* If not authenticated then we will redirect to request authentication.
* @return A google client object.
*/
function getOauth2Client() {
try {
$client = buildClient();
// Set the refresh token on the client.
if (isset($_SESSION['refresh_token']) && $_SESSION['refresh_token']) {
$client->refreshToken($_SESSION['refresh_token']);
}
// If the user has already authorized this app then get an access token
// else redirect to ask the user to authorize access to Google Analytics.
if (isset($_SESSION['access_token']) && $_SESSION['access_token']) {
// Set the access token on the client.
$client->setAccessToken($_SESSION['access_token']);
// Refresh the access token if it's expired.
if ($client->isAccessTokenExpired()) {
$client->fetchAccessTokenWithRefreshToken($client->getRefreshToken());
$client->setAccessToken($client->getAccessToken());
$_SESSION['access_token'] = $client->getAccessToken();
}
return $client;
} else {
// We do not have access; request access.
header('Location: ' . filter_var( $client->getRedirectUri(), FILTER_SANITIZE_URL));
}
} catch (Exception $e) {
print "An error occurred: " . $e->getMessage();
}
}
// Handle authorization flow from the server.
if (! isset($_GET['code'])) {
$client = buildClient();
$auth_url = $client->createAuthUrl();
header('Location: ' . filter_var($auth_url, FILTER_SANITIZE_URL));
} else {
$client = buildClient();
$client->authenticate($_GET['code']); // Exchange the authentication code for a refresh token and access token.
// Add access token and refresh token to session.
$_SESSION['access_token'] = $client->getAccessToken();
$_SESSION['refresh_token'] = $client->getRefreshToken();
//Redirect back to main script
$redirect_uri = str_replace("oauth2callback.php",$_SESSION['mainScript'],$client->getRedirectUri());
header('Location: ' . filter_var($redirect_uri, FILTER_SANITIZE_URL));
}
|
Your first filter is set to cater for /signin as per your code:
filter.setFilterProcessesUrl("/signin");
Now, you would need a second filter, to cater for everything else, example:
.and()
.addFilter(getAuthenticationFilter())
.addFilter(getAuthorizationFilter());
...
@Bean
public AuthorizationFilter getAuthorizationFilter(){
AuthorizationFilter a = new AuthorizationFilter(authenticationManager());
a.setSecret(secret);
return a;
}
You could for example:
public class AuthorizationFilter extends BasicAuthenticationFilter {
....
@Override
protected void doFilterInternal(HttpServletRequest req, HttpServletResponse res,
FilterChain chain) throws IOException, ServletException {
//try/catch
String jwt = getJWT(req);
if(jwt != null){
Authentication a = getAuthentication(jwt);
if(a != null){
SecurityContextHolder.getContext().setAuthentication(a);
}
}
chain.doFilter(req, res);
}
private String getJWT(HttpServletRequest req){
String bearerToken = req.getHeader("Authorization");
//Do your checks here
return ...;
}
private Authentication getAuthentication(String jwt){
//Parse the jwt etc
return new UsernamePasswordAuthenticationToken(...);
}
}
As for Zuul, you also need to add: ignoredServices: '*' |
The request parameters for the CloudWatch Logs REST API should be sent in JSON format inside the {}, similar to the POST example given. Of these, only logGroupName is required; the other parameters are optional:
endTime
filterPattern
interleaved
limit
logGroupName
logStreamNamePrefix
logStreamNames
nextToken
startTime
In the context of the entire HTTP request:
POST / HTTP/1.1
Host: logs.<region>.<domain>
X-Amz-Date: <DATE>
Authorization: AWS4-HMAC-SHA256 Credential=<Credential>, SignedHeaders=content-type;date;host;user-agent;x-amz-date;x-amz-target;x-amzn-requestid, Signature=<Signature>
User-Agent: <UserAgentString>
Accept: application/json
Content-Type: application/x-amz-json-1.1
Content-Length: <PayloadSizeBytes>
Connection: Keep-Alive
X-Amz-Target: Logs_20140328.FilterLogEvents
{
"endTime": number,
"filterPattern": "mystring",
"interleaved": boolean,
"limit": number,
"logGroupName": "string",
"logStreamNamePrefix": "string",
"logStreamNames": [ "string" ],
"nextToken": "string",
"startTime": number
}
The Common Parameters are sent as HTTP headers, as seen in the example above. They are needed for signing your requests to AWS with the proper authentication.
(Authentication occurs automatically in the background when using the CLI.)
This is an official walkthrough of how to construct a canonical, signed HTTP request for AWS APIs.
For example:
Action=ListUsers&
Version=2010-05-08&
X-Amz-Algorithm=AWS4-HMAC-SHA256&
X-Amz-Credential=AKIDEXAMPLE%2F20150830%2Fus-east-1%2Fiam%2Faws4_request&
X-Amz-Date=20150830T123600Z&
X-Amz-SignedHeaders=content-type%3Bhost%3Bx-amz-date
|
The input text probably contains special punctuation characters that are not being treated as part of the literal text, since your code does plain string concatenation.
Try the code below, which fixes two issues:
Most importantly, the SQL injection vulnerability.
Secondly, your issue (if it is related to the input string containing special characters).
private void SubmitButton_Click(object sender, EventArgs e)
{
SqlParameter joke = new SqlParameter();
joke.ParameterName = "@joke";
joke.SqlDbType = SqlDbType.VarChar;
joke.Value = EnterJoke.Text;
SqlParameter answer = new SqlParameter();
answer.ParameterName = "@answer";
answer.SqlDbType = SqlDbType.VarChar;
answer.Value = EnterAnswer.Text;
cmd = new SqlCommand("INSERT INTO Jokes VALUES(@joke, @answer)", con);
cmd.Parameters.Add(joke);
cmd.Parameters.Add(answer);
con.Open();
cmd.ExecuteNonQuery();
MessageBox.Show(" Data Has Been Saved In Database ");
con.Close();
}
|
Assuming cookie-based authentication, you can extend the client to validate the user session, provided that the client keeps track of user sessions.
For that you can create a session manager where you add a session for the user (sub) after login; this also includes automatic login sessions (SSO). Remove one or all sessions on logout, which should also be triggered by back-channel logout (LogoutCallback). A possible shape for such a manager is sketched below.
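A sketch of that session manager's surface, matching the calls in the middleware further down (all names are assumptions, and the backing store, whether in-memory, distributed cache or database, is up to you):
public interface ISessionManager
{
    void AddSession(string sub, string sid);             // after (automatic) login
    void RemoveSession(string sub, string sid);          // on (back-channel) logout
    void RemoveAllSessions(string sub);
    bool CurrentSessionIsActive(string sub, string sid);
    bool HasMultipleSessions(string sub);
    bool CanSelectSession(string sub);
    void ActivateCurrentSession(string sub, string sid);
}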
Assuming you use middleware, you can consult the session manager there and decide what to do. Make sure that the current session isn't already activated right after login; it has to step through the middleware at least once. Some pseudo code to illustrate the idea:
public Task Invoke(HttpContext context, SessionManager sessionManager)
{
if (context.Principal.Identity.IsAuthenticated)
{
var sub = context.Principal.FindFirst("sub")?.Value;
var sid = context.Principal.FindFirst("sid")?.Value;
// User is allowed when the current session is active.
if (!sessionManager.CurrentSessionIsActive(sub, sid))
{
// Rewrite path if user needs and is allowed to choose: redirect to session selector or
// Activate the current session and deactivate other sessions, if any.
if (sessionManager.HasMultipleSessions(sub) && sessionManager.CanSelectSession(sub))
context.Request.Path = new PathString("/SelectSession");
else
sessionManager.ActivateCurrentSession(sub, sid);
}
}
return _next(context);
}
On post of the SelectSession form you can mark in the session manager which sessions are active. If the old session should be preserved, then ignore the old session (remains active) and mark the current session as active.
Make sure to add the middleware after authenticating the user.
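In Startup.Configure terms, that ordering would look roughly like this (the middleware class name is an assumption):
app.UseAuthentication();                          // authenticate first
app.UseMiddleware<SessionValidationMiddleware>(); // then the session check above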
Please note that for access tokens you'll need a different strategy. |
Is the CSR(Certificate Signing Request) containing the public key and the organization details encrypted with the private key?
It is easy to check that it isn't.
1) Create a private key and associated CSR:
openssl req -new -sha256 -newkey rsa:2048 -nodes -keyout example.key -out example.csr
2) I now have 2 files, the private key and the CSR.
Let us show the content of the CSR after deleting the private key, just to make sure the key is not needed, and compare it with the output produced while the key still existed.
With private key still there:
$ openssl req -noout -text -in example.csr
Certificate Request:
Data:
Version: 1 (0x0)
Subject: C = AU, ST = Some-State, O = Internet Widgits Pty Ltd
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
RSA Public-Key: (2048 bit)
Modulus:
00:e2:23:3c:4e:d8:39:ce:9a:16:2f:e2:ef:e7:9b:
5d:7f:20:a7:9a:4b:dd:54:ad:6b:b3:ff:33:78:65:
f2:b1:e1:e3:b5:eb:23:9d:da:b3:8d:3c:2f:1f:60:
9a:17:36:df:0f:4e:3a:bd:fb:9f:73:d5:00:c2:65:
04:a2:77:e6:5b:27:f2:30:8f:57:31:c8:bf:d1:0a:
cc:db:f5:95:8e:98:ff:34:c5:ed:68:57:f8:43:47:
41:ff:cb:6d:27:ae:de:33:95:cd:d6:0a:f8:0b:25:
27:99:4e:6b:7d:d8:c4:dd:83:97:57:7a:42:69:4c:
41:e2:d6:7f:86:d0:6f:1b:c2:30:b2:e7:a9:ee:5b:
9d:a1:ce:80:ec:45:a6:ad:a4:6e:b1:6a:b1:68:ef:
c4:7d:5b:6c:e5:24:fe:54:f9:bb:09:48:5c:49:ca:
fe:41:28:bc:48:e8:02:bf:ac:b0:5b:c6:3f:bb:0e:
17:d4:31:02:31:27:b1:a3:7a:ff:82:49:f0:11:10:
64:53:44:ca:61:82:fd:3a:82:5c:07:48:23:1f:db:
e5:0f:64:79:09:19:25:b4:a5:07:42:d3:b4:54:75:
61:13:43:63:34:a2:72:55:07:d6:d1:8c:74:31:cb:
5c:54:1e:6a:e7:04:86:35:4c:d9:a4:31:3f:fd:36:
9c:59
Exponent: 65537 (0x10001)
Attributes:
a0:00
Signature Algorithm: sha256WithRSAEncryption
6d:fb:a6:e5:2b:89:5c:ef:5c:ca:cc:d3:9a:3d:b1:c1:41:9d:
b5:55:ca:2c:17:ca:ea:74:1d:79:b9:16:ec:81:08:95:94:98:
e1:2b:50:c7:46:eb:d4:97:09:25:cc:da:b4:bd:34:3c:5a:14:
c8:88:ed:21:99:63:e9:c0:0e:fa:bb:5d:a7:27:11:22:61:a1:
1f:d3:65:c8:cc:14:ff:d7:ce:19:29:14:67:ed:e5:b8:31:b5:
25:55:8e:59:42:f1:2a:6d:f9:fe:4a:be:08:b9:23:c5:b6:3b:
c8:7e:3f:0c:bd:bb:37:f6:fd:5a:0e:50:50:43:8e:59:f7:b6:
77:06:50:b2:45:2a:17:f4:53:5a:7c:3c:50:6d:de:74:e3:0e:
df:94:48:bc:a9:fa:b8:a1:9a:3e:dc:10:c8:50:cb:9b:a7:49:
cc:ac:88:66:54:e6:d3:06:81:95:f4:ac:e1:61:d7:88:18:74:
e8:8e:d2:8d:e9:71:7f:99:41:b9:b3:a1:ad:af:d6:0b:2f:46:
8d:fa:c4:29:b4:40:38:fb:80:31:33:5c:62:67:62:dd:62:14:
36:fe:8f:8d:36:dc:0c:52:7b:0b:46:1c:58:94:2f:84:a9:54:
b0:a8:78:a0:9d:30:e9:0d:2f:a5:09:7d:3e:4e:75:16:56:f7:
94:a7:09:8f
Now removing private key:
rm example.key
and decoding the CSR again:
$ openssl req -noout -text -in example.csr
Certificate Request:
Data:
Version: 1 (0x0)
Subject: C = AU, ST = Some-State, O = Internet Widgits Pty Ltd
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
RSA Public-Key: (2048 bit)
Modulus:
00:e2:23:3c:4e:d8:39:ce:9a:16:2f:e2:ef:e7:9b:
5d:7f:20:a7:9a:4b:dd:54:ad:6b:b3:ff:33:78:65:
f2:b1:e1:e3:b5:eb:23:9d:da:b3:8d:3c:2f:1f:60:
9a:17:36:df:0f:4e:3a:bd:fb:9f:73:d5:00:c2:65:
04:a2:77:e6:5b:27:f2:30:8f:57:31:c8:bf:d1:0a:
cc:db:f5:95:8e:98:ff:34:c5:ed:68:57:f8:43:47:
41:ff:cb:6d:27:ae:de:33:95:cd:d6:0a:f8:0b:25:
27:99:4e:6b:7d:d8:c4:dd:83:97:57:7a:42:69:4c:
41:e2:d6:7f:86:d0:6f:1b:c2:30:b2:e7:a9:ee:5b:
9d:a1:ce:80:ec:45:a6:ad:a4:6e:b1:6a:b1:68:ef:
c4:7d:5b:6c:e5:24:fe:54:f9:bb:09:48:5c:49:ca:
fe:41:28:bc:48:e8:02:bf:ac:b0:5b:c6:3f:bb:0e:
17:d4:31:02:31:27:b1:a3:7a:ff:82:49:f0:11:10:
64:53:44:ca:61:82:fd:3a:82:5c:07:48:23:1f:db:
e5:0f:64:79:09:19:25:b4:a5:07:42:d3:b4:54:75:
61:13:43:63:34:a2:72:55:07:d6:d1:8c:74:31:cb:
5c:54:1e:6a:e7:04:86:35:4c:d9:a4:31:3f:fd:36:
9c:59
Exponent: 65537 (0x10001)
Attributes:
a0:00
Signature Algorithm: sha256WithRSAEncryption
6d:fb:a6:e5:2b:89:5c:ef:5c:ca:cc:d3:9a:3d:b1:c1:41:9d:
b5:55:ca:2c:17:ca:ea:74:1d:79:b9:16:ec:81:08:95:94:98:
e1:2b:50:c7:46:eb:d4:97:09:25:cc:da:b4:bd:34:3c:5a:14:
c8:88:ed:21:99:63:e9:c0:0e:fa:bb:5d:a7:27:11:22:61:a1:
1f:d3:65:c8:cc:14:ff:d7:ce:19:29:14:67:ed:e5:b8:31:b5:
25:55:8e:59:42:f1:2a:6d:f9:fe:4a:be:08:b9:23:c5:b6:3b:
c8:7e:3f:0c:bd:bb:37:f6:fd:5a:0e:50:50:43:8e:59:f7:b6:
77:06:50:b2:45:2a:17:f4:53:5a:7c:3c:50:6d:de:74:e3:0e:
df:94:48:bc:a9:fa:b8:a1:9a:3e:dc:10:c8:50:cb:9b:a7:49:
cc:ac:88:66:54:e6:d3:06:81:95:f4:ac:e1:61:d7:88:18:74:
e8:8e:d2:8d:e9:71:7f:99:41:b9:b3:a1:ad:af:d6:0b:2f:46:
8d:fa:c4:29:b4:40:38:fb:80:31:33:5c:62:67:62:dd:62:14:
36:fe:8f:8d:36:dc:0c:52:7b:0b:46:1c:58:94:2f:84:a9:54:
b0:a8:78:a0:9d:30:e9:0d:2f:a5:09:7d:3e:4e:75:16:56:f7:
94:a7:09:8f
Conclusion: same results, proving the key is not needed.
Of course:
1) it was trivial to see that immediately, because if the key were needed when decoding the CSR, you would have had to specify it on the openssl command line (it does not randomly poke at files)
2) it is of course silly to have deleted the private key, because if some certificate is indeed created out of this CSR, it is useless, as the matching private key does not exist anymore. |
My Workaround:
1. Create a Custom-Filter and add it to the (Spring) Security-Chain in early position.
2. Create a flag in the application.yml (securityEnabled)
3. Query the flag in the custom filter. If 'true', simply go on with the next filter by calling chain.doFilter(). If 'false', create a dummy Keycloak account, set the roles you need and put it into the security context.
4. By the way, the roles are also outsourced to the application.yml.
5. Skip the rest of the filters in the security chain (so no Keycloak stuff is executed and no corresponding authorization happens).
In Detail:
1. Class of Custom-Filter
public class CustomFilter extends OncePerRequestFilter {
@Value("${securityEnabled}")
private boolean securityEnabled;
@Value("${grantedRoles}")
private String[] grantedRoles;
@Override
public void doFilterInternal(HttpServletRequest req, HttpServletResponse res,
FilterChain chain) throws IOException, ServletException {
if (!securityEnabled){
// Read roles from application.yml
Set<String> roles = Arrays.stream(grantedRoles)
.collect(Collectors.toCollection(HashSet::new));
// Dummy Keycloak-Account
RefreshableKeycloakSecurityContext session = new RefreshableKeycloakSecurityContext(null, null, null, null, null, null, null);
final KeycloakPrincipal<RefreshableKeycloakSecurityContext> principal = new KeycloakPrincipal<>("Dummy_Principal", session);
final KeycloakAccount account = new SimpleKeycloakAccount(principal, roles, principal.getKeycloakSecurityContext());
// Dummy Security Context
SecurityContext context = SecurityContextHolder.createEmptyContext();
context.setAuthentication(new KeycloakAuthenticationToken(account, false));
SecurityContextHolder.setContext(context);
// Skip the rest of the filters
req.getRequestDispatcher(req.getServletPath()).forward(req, res);
return; // don't fall through to the rest of the chain after forwarding
}
chain.doFilter(req, res);
}
}
2. Insert Custom-Filter in the http-Configuration of Spring-Security
protected void configure(HttpSecurity http) throws Exception {
super.configure(http);
http
.cors()
.and()
.csrf()
.disable()
.sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS)
.sessionAuthenticationStrategy(sessionAuthenticationStrategy())
.and()
.addFilterAfter(new CustomFilter(), CsrfFilter.class)
.authorizeRequests()
.anyRequest().permitAll();
}
Have a look at the default Filter-Chain after configuring Keycloak:
Filter-Chain
So it's obvious to insert the custom filter at position 5 to avoid the whole Keycloak magic.
I have used this workaround to defeat the method security and its @Secured annotation. |
It is a bad idea to store a state variable like isUserAuthenticated inside the protected route, which is the Dashboard in your case. Let me explain what the above code does: it first renders the dashboard component, and only after the first mount of the dashboard does the code decide whether the user is authenticated and conditionally redirect. Also, the variable unauthenticated is component-level state, which means there is no way for other components to know whether the user is authenticated.
Now where it could go wrong: you might need the user's auth state in the header component, where you would render a logout button if the user is authenticated, or a login button otherwise. Some other component might also need the user's auth state.
So the best approach would be to use better state management, like the context API, or Redux if you are comfortable with it.
Whether to use Redux or not is a whole other discussion, so here is a simple solution using the context API to solve the problem.
in AuthContext.js
const AuthContext = React.createContext(null)
const AuthContextProvider = (props) => {
const [isAuthenticated, setIsAuthenticated] = useState(false)
const login = () => {
// your authentication logic
setIsAuthenticated(true)
}
const logout = () => {
// your logout logic
setIsAuthenticated(false)
}
return (
<AuthContext.Provider value={{isAuthenticated, login, logout}}>
{props.children}
</AuthContext.Provider>
)
}
export default AuthContextProvider;
In App.js wrap your app with the AuthContextProvider
function App(props) {
return (
<AuthContextProvider>
// all other app logic
// like <Switch>
// <Route exact path="/" component={Home}/>
// <Switch/>
</AuthContextProvider>
)
}
now to make the dashboard route protected you can take this approach make a new private route component
in PrivateRoute.js
function PrivateRoute(props) {
// keep in mind path is required as a prop
const { path, children, ...rest } = props;
// using the AuthContext to get the state variable isAuthenticated
const { isAuthenticated } = useContext(AuthContext);
return (
<Route
exact
path={path}
render={({ location }) =>
isAuthenticated ? (
children
) : (
<Redirect to={{ pathname: '/login', state: { from: location } }} />
)
}
/>
);
}
when rendering the protected pages like the dashboard in your case render the route like this inside the Switch, same goes for any other private routes
<Switch>
// other routes
// for example <Route exact path="/" component={Home}/>
// The Dashboard route
<PrivateRoute path="/dashboard">
<Dashboard/>
</PrivateRoute>
</Switch>
Keep in mind that the login and logout functions come from the context, so any component using them needs to consume the AuthContext as well. You could also implement the same logic with Redux if your app is already using it or if you are more comfortable with Redux.
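For example, a minimal consumer might look like this (component name is illustrative; useContext comes from 'react'):
function LoginButton() {
  // login/logout come from the value prop of AuthContextProvider
  const { isAuthenticated, login, logout } = useContext(AuthContext);
  return isAuthenticated
    ? <button onClick={logout}>Log out</button>
    : <button onClick={login}>Log in</button>;
}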
There is a fair amount of code involved here, so let me know if you manage to implement it in your app.
Here are some links to the documentation if you need to reference it.
https://reactjs.org/docs/context.html
https://reacttraining.com/react-router/web/example/auth-workflow |
I've noticed the same error when my GCP projectId was not defined during BigQuery Node.js client initialization.
According to the documentation, the Node.js development environment has to be properly prepared once you've decided to use the @google-cloud/bigquery library, by setting up:
authentication parameters;
the Google Cloud BigQuery API (it has to be enabled);
the target BigQuery GCP project ID, either explicitly for the user's shell session via gcloud config set on the relevant Node.js executor machine (see the example below), or individually for each client by initializing it as const bigquery = new BigQuery({projectId: 'my-project'}); before any connection attempt to the BigQuery REST API.
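For example, setting the default project for your shell session (the project ID is illustrative):
gcloud config set project my-project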
I've made some corrections to your initial code snippet to fix this issue:
const line = {"id":"123", "dttme":"201807012130", "brwsr":"Chrome", "pg_id":"hpv1"};
const datajson=line;
const {BigQuery} = require('@google-cloud/bigquery');
const TableObjectHeader = {
"tableReference": {
"datasetId": "datasetId",
"tableId": "tableId",
}
}
const bigqueryClient = new BigQuery({projectId: 'my-project'});
const dataset = bigqueryClient.dataset(TableObjectHeader['tableReference']['datasetId']);
const table = dataset.table(TableObjectHeader['tableReference']['tableId']);
table.insert(datajson, function(err, response) {
console.log("error:"+JSON.stringify(err));
console.log("response:"+JSON.stringify(response));
});
Also, be aware not to share any sensitive or confidential user data in your code examples or explanation details; keep such data private. |
This has been working for me for years in 32-bit Office
It could not possibly work with the Declare that you have shown.
MultiByteToWideChar expects an LPWSTR as the output buffer. VB performs automatic conversion from Unicode to ANSI when passing strings into Declared functions, so there is no way that the function would receive a pointer to a wide string buffer when lpWideCharStr is declared As String. At best, it would receive a buffer that is large enough so no buffer overflow would occur, and then VB would perform conversion back to Unicode when returning from the function, so you will end up with a double-unicode string.
lpMultiByteStr is not a string either, it's an array of bytes in some encoding.
The code inside EncodedStringByteArrayToString seems to know all that, because it correctly passes a byte array for lpMultiByteStr and a StrPtr for lpWideCharStr. This could not have happened with the current declaration of MultiByteToWideChar.
The declaration that is assumed by the code in EncodedStringByteArrayToString is:
Declare PtrSafe Function MultiByteToWideChar Lib "kernel32" ( _
ByVal CodePage As Long, _
ByVal dwFlags As Long, _
ByVal lpMultiByteStr As LongPtr, _
ByVal cchMultiByte As Long, _
ByVal lpWideCharStr As LongPtr, _
ByVal cchWideChar As Long _
) As Long
Apparently you had that before, so just put it back. |
I don't use Okta, so I don't know exactly how it works. But I have 2 assumptions:
Every request contains an accessToken in the Authorization header
You make a POST request to ${baseUrl}/v1/introspect and it will answer with true or false to indicate whether the accessToken is valid
With these 2 assumptions in mind, if I had to manually implement custom authentication logic, I would do the following steps:
Register and implement a CustomAuthenticationProvider
Add a filter to extract access token from request
Registering custom authentication provider:
// In OktaOAuth2WebSecurityConfig.java
@Override
protected void configure(AuthenticationManagerBuilder auth) throws Exception {
auth.authenticationProvider(customAuthenticationProvider());
}
@Bean
CustomAuthenticationProvider customAuthenticationProvider(){
return new CustomAuthenticationProvider();
}
CustomAuthenticationProvider:
public class CustomAuthenticationProvider implements AuthenticationProvider {
private static final Logger logger = LoggerFactory.getLogger(CustomAuthenticationProvider.class);
@Override
public Authentication authenticate(Authentication authentication) throws AuthenticationException {
logger.debug("Authenticating authenticationToken");
OktaTokenAuthenticationToken auth = (OktaTokenAuthenticationToken) authentication;
String accessToken = auth.getToken();
// You should make a POST request to ${oktaBaseUrl}/v1/introspect
// to determine if the access token is good or bad
// I just put a dummy if here
if ("ThanhLoyal".equals(accessToken)){
List<GrantedAuthority> authorities = Collections.singletonList(new SimpleGrantedAuthority("USER"));
logger.debug("Good access token");
return new UsernamePasswordAuthenticationToken(auth.getPrincipal(), "[ProtectedPassword]", authorities);
}
logger.debug("Bad access token");
return null;
}
@Override
public boolean supports(Class<?> clazz) {
return clazz == OktaTokenAuthenticationToken.class;
}
}
To register the filter to extract accessToken from request:
// Still in OktaOAuth2WebSecurityConfig.java
@Override
protected void configure(HttpSecurity http) throws Exception {
http
.addFilterAfter(accessTokenExtractorFilter(), UsernamePasswordAuthenticationFilter.class)
.authorizeRequests().anyRequest().authenticated();
// And other configurations
}
@Bean
AccessTokenExtractorFilter accessTokenExtractorFilter(){
return new AccessTokenExtractorFilter();
}
And the filter it self:
public class AccessTokenExtractorFilter extends OncePerRequestFilter {
private static final Logger logger = LoggerFactory.getLogger(AccessTokenExtractorFilter.class);
@Override
protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain) throws ServletException, IOException {
logger.debug("Filtering request");
Authentication authentication = getAuthentication(request);
if (authentication == null){
logger.debug("Continuing filtering process without an authentication");
filterChain.doFilter(request, response);
} else {
logger.debug("Now set authentication on the request");
SecurityContextHolder.getContext().setAuthentication(authentication);
filterChain.doFilter(request, response);
}
}
private Authentication getAuthentication(HttpServletRequest request) {
String accessToken = request.getHeader("Authorization");
if (accessToken != null){
logger.debug("An access token found in request header");
List<GrantedAuthority> authorities = Collections.singletonList(new SimpleGrantedAuthority("USER"));
return new OktaTokenAuthenticationToken(accessToken, authorities);
}
logger.debug("No access token found in request header");
return null;
}
}
I have uploaded a simple project here for your easy reference: https://github.com/MrLoyal/spring-security-custom-authentication
How it works:
The AccessTokenExtractorFilter is placed right after the UsernamePasswordAuthenticationFilter, which is a default filter by Spring Security
A request arrives, the above filter extracts the accessToken from it and places it in the SecurityContext
Later, the AuthenticationManager calls the AuthenticationProvider(s) to authenticate the request. In this case, the CustomAuthenticationProvider is invoked
BTW, your question should contain the spring-security tag.
Update 1: About AuthenticationEntryPoint
An AuthenticationEntryPoint declares what to do when an unauthenticated request arrives (in our case, what to do when the request does not contain a valid "Authorization" header).
In my REST API, I simply respond with a 401 HTTP status code to the client.
// CustomAuthenticationEntryPoint
@Override
public void commence(HttpServletRequest request, HttpServletResponse response, AuthenticationException authException) throws IOException, ServletException {
response.reset();
response.setStatus(401);
// A utility method to add CORS headers to the response
SecUtil.writeCorsHeaders(request, response);
}
Spring's LoginUrlAuthenticationEntryPoint redirects the user to the login page if one is configured.
So if you want to redirect unauthenticated requests to Okta's login page, you may use an AuthenticationEntryPoint, registered as sketched below.
I think that there might be some confusion here as I see a reference to the official Firebase Unity SDK mixed with raw REST calls in RestClient. I'll answer this assuming that you're ok with using the Unity SDK. This is a far simpler integration than attempting to manually call the Firebase REST API (and gets you nice benefits - such as local caching).
1) Firebase Authentication
Once Firebase Authentication is initialized, FirebaseAuth.DefaultInstance.CurrentUser will always contain your currently signed in user or null if the user is not signed in. This value is actually stored in C++ and accessed through C#, meaning that it doesn't actually know about or abide by Unity's typical object lifecycles. That means that once you've signed in a user, this value will always hold the current user without the need to persist it across scene boundaries. In fact, this value is even preserved across runs of your game (meaning that your players don't have to log in every time).
A warning about this though: CurrentUser is updated asynchronously -- that is, there is no real guarantee that CurrentUser is up to date -- so it's generally safer to register a StateChanged listener as Puf suggested from the documentation:
Firebase.Auth.FirebaseAuth auth;
Firebase.Auth.FirebaseUser user;
// Handle initialization of the necessary firebase modules:
void InitializeFirebase() {
Debug.Log("Setting up Firebase Auth");
auth = Firebase.Auth.FirebaseAuth.DefaultInstance;
auth.StateChanged += AuthStateChanged;
AuthStateChanged(this, null);
}
// Track state changes of the auth object.
void AuthStateChanged(object sender, System.EventArgs eventArgs) {
if (auth.CurrentUser != user) {
bool signedIn = user != auth.CurrentUser && auth.CurrentUser != null;
if (!signedIn && user != null) {
Debug.Log("Signed out " + user.UserId);
}
user = auth.CurrentUser;
if (signedIn) {
Debug.Log("Signed in " + user.UserId);
}
}
}
void OnDestroy() {
auth.StateChanged -= AuthStateChanged;
auth = null;
}
I would highly recommend watching my tutorial on Firebase Authentication to see how I think about integrating this with a game. The link you shared is appropriate, but I'm a little curious about the various REST calls I see in your code.
If you're using the Firebase Unity SDK, email/password authentication should be as easy as:
auth.CreateUserWithEmailAndPasswordAsync(email, password).ContinueWith(task => {
if (task.IsCanceled) {
Debug.LogError("CreateUserWithEmailAndPasswordAsync was canceled.");
return;
}
if (task.IsFaulted) {
Debug.LogError("CreateUserWithEmailAndPasswordAsync encountered an error: " + task.Exception);
return;
}
// Firebase user has been created.
Firebase.Auth.FirebaseUser newUser = task.Result;
Debug.LogFormat("Firebase user created successfully: {0} ({1})",
newUser.DisplayName, newUser.UserId);
});
to create a user and
auth.SignInWithEmailAndPasswordAsync(email, password).ContinueWith(task => {
if (task.IsCanceled) {
Debug.LogError("SignInWithEmailAndPasswordAsync was canceled.");
return;
}
if (task.IsFaulted) {
Debug.LogError("SignInWithEmailAndPasswordAsync encountered an error: " + task.Exception);
return;
}
Firebase.Auth.FirebaseUser newUser = task.Result;
Debug.LogFormat("User signed in successfully: {0} ({1})",
newUser.DisplayName, newUser.UserId);
});
to sign them in. That is, there should be no need to use RestClient.
2) Realtime Database
Once you're authenticated, any calls to the Firebase Realtime Database SDK will automatically use the CurrentUser value (as I mentioned before - it persists on the C++ side of the SDK).
If you're hoping to use rules to secure user data such as:
{
"rules": {
"users": {
"$user_id": {
// grants write access to the owner of this user account
// whose uid must exactly match the key ($user_id)
".write": "$user_id === auth.uid"
}
}
}
}
Then writing data:
FirebaseDatabase.DefaultInstance.GetReference($"/users/{FirebaseAuth.DefaultInstance.CurrentUser.UserId}/mySecret").SetValueAsync("flowers");
should just work.
I hope that all helps!
--Patrick |
ASCII for z is 122. You add 2 to that. ASCII 124 is the | symbol.
You need to check if your addition is going out of range (i.e. above 122).
Note: this won't work if N is greater than 26. Check the solution just below that implements modulo to handle that.
public static String uno (String s, int N) {
String f, n = "";
int c;
int length = s.length();
for (int i = 0; i < length; i++) {
c = s.charAt(i);
c = c + N;
if (c > 122) { // only wrap when we've gone past 'z'; 'z' itself (122) is valid
c -= 26;
}
f = Character.toString((char) c);
n = n + f;
}
return n;
}
Side note: never concatenate a string in a loop using +. It is very inefficient. Use StringBuilder instead.
Handle case sensitive letters concisely:
public static String uno (String s, int N) {
StringBuilder n = new StringBuilder();
int bound = s.length();
IntStream.range(0, bound).forEach(i -> {
char c = s.charAt(i);
n.append(Character.isUpperCase(c) ?
(char) ((c + N - 'A') % 26 + 'A') :
(char) ((c + N - 'a') % 26 + 'a'));
});
return n.toString();
}
Handling negative numbers:
public static String uno (String s, int N) {
StringBuilder n = new StringBuilder();
int bound = s.length();
IntStream.range(0, bound).forEach(i -> {
char c = s.charAt(i);
if (N > 0) {
n.append(Character.isUpperCase(c) ?
(char) ((c + N - 'A') % 26 + 'A') :
(char) ((c + N - 'a') % 26 + 'a'));
} else {
// wrap correctly for negative shifts, preserving case
n.append(Character.isUpperCase(c) ?
(char) ((c - 'A' + N % 26 + 26) % 26 + 'A') :
(char) ((c - 'a' + N % 26 + 26) % 26 + 'a'));
}
});
return n.toString();
}
Check this comment for a good point on your naming conventions. |
The approach depends a bit depending on your requirements. If you plan to accept only U.S and Canadian cards then the simplest approach would be to confirm the PaymentIntent server-side as described in this guide here:
https://stripe.com/docs/payments/without-card-authentication
The gist is that you collect the credit card information client-side (preferably by tokenizing the details using one of our client-libraries), then call the PaymentIntents API much like you would the Charges API:
var options = new PaymentIntentCreateOptions
{
Amount = 1099,
Currency = "usd",
PaymentMethodId = request.PaymentMethodId,
// A PaymentIntent can be confirmed some time after creation,
// but here we want to confirm (collect payment) immediately.
Confirm = true,
// If the payment requires any follow-up actions from the
// customer, like two-factor authentication, Stripe will error
// and you will need to prompt them for a new payment method.
ErrorOnRequiresAction = true,
};
var service = new PaymentIntentService();
var paymentIntent = service.Create(options);
The key parameters here are:
Confirm: needs to be set to true so that the payment is processed right away.
ErrorOnRequiresAction: needs to be set to true to prevent the payment from entering a state where it expects some form of authentication (e.g. 3D Secure)
If SCA and global regulatory requirements are a concern, then you will need to find a way to confirm the payment client-side so users can authenticate a payment if they need to. Right now, the available integration paths are unfortunately quite limited for hybrid mobile technologies like Cordova, React Native, and Xamarin. Generally speaking, there are two paths you can take:
run Stripe.js in a WebView
This would allow you to use all the methods described here: https://stripe.com/docs/js, and follow our default integration path for accepting payments: https://stripe.com/docs/payments/accept-a-payment. For the Xamarin side of things a good place to start would be the official WebView example: https://docs.microsoft.com/en-us/samples/xamarin/xamarin-forms-samples/workingwithwebview/.
build a bridge to Stripe's native iOS and Android SDKs
This is a bit more complex than running Stripe.js in a WebView, but would likely be more performant and give a slightly more polished user experience. With this approach you would build a bridge into Stripe's Android and iOS SDKs using the approaches outlined here: https://devblogs.microsoft.com/xamarin/binding-ios-swift-libraries/ (iOS), https://docs.microsoft.com/en-us/xamarin/android/platform/binding-java-library/ (Android) |
Additional space is allocated for the stack protection variable - you can see it in the stack layout below (my code is compiled as x64 - this does not change the essence). If your buffer overflows, the security variable will be damaged and ___stack_chk_fail will be called.
Hex-rays decompiler hides this variable from the output:
ssize_t myread()
{
char v1[10]; // [rsp+Eh] [rbp-12h] BYREF
return read(0, v1, 0x64uLL);
}
Small hint: if you want to analyze the stack variables - double click any variable in the output of disassembler or decompiler - the stack window will open:
-0000000000000020 var_20 dq ?
-0000000000000018 db ? ; undefined
-0000000000000017 db ? ; undefined
-0000000000000016 db ? ; undefined
-0000000000000015 db ? ; undefined
-0000000000000014 db ? ; undefined
-0000000000000013 db ? ; undefined
-0000000000000012 s db 10 dup(?)
-0000000000000008 stack_protection dq ?
+0000000000000000 s db 8 dup(?)
+0000000000000008 r db 8 dup(?)
+0000000000000010
+0000000000000010 ; end of stack variables
|
To do this, you need to create a custom authentication backend that validates api keys.
In this example, the request is checked for a valid token automatically. You don't need to modify any of your views at all. This is because it includes custom middleware that authenticates the user.
For brevity, I'm assuming that the valid user tokens are stored in a model that is foreign keyed to the django auth.User model.
# my_project/authentication_backends.py
from django.contrib import auth
from django.contrib.auth.backends import ModelBackend
from django.contrib.auth.models import User
from django.contrib.auth.middleware import AuthenticationMiddleware
TOKEN_QUERY_PARAM = "token"
class TokenMiddleware(AuthenticationMiddleware):
def process_request(self, request):
try:
token = request.GET[TOKEN_QUERY_PARAM]
except KeyError:
# A token isn't included in the query params
return
if request.user.is_authenticated:
# Here you can check that the authenticated user has the same `token` value
# as the one in the request. Otherwise, logout the already authenticated
# user.
if request.user.token.key == token:
return
else:
auth.logout(request)
user = auth.authenticate(request, token=token)
if user:
# The token is valid. Save the user to the request and session.
request.user = user
auth.login(request, user)
class TokenBackend(ModelBackend):
def authenticate(self, request, token=None):
if not token:
return None
try:
return User.objects.get(token__key=token)
except User.DoesNotExist:
# A user with that token does not exist
return None
def get_user(self, user_id):
try:
return User.objects.get(pk=user_id)
except User.DoesNotExist:
return None
Now, you can add the paths to AUTHENTICATION_BACKENDS and MIDDLEWARE in your settings.py in addition to any existing backends or middleware you may already have. If you're using the defaults, it would look like this:
MIDDLEWARE = [
# ...
"django.contrib.auth.middleware.AuthenticationMiddleware",
# This is the dotted path to your backend class. For this example,
# I'm pretending that the class is in the file:
# my_project/authentication_backends.py
"my_project.authentication_backends.TokenMiddleware",
# ...
]
AUTHENTICATION_BACKENDS = [
"django.contrib.auth.backends.ModelBackend",
"my_project.authentication_backends.TokenBackend",
]
|
The firebase and swiftUI combination is kinda tricky at first, but you will figure out that the same pattern is used in every single project, no worries.
Just follow my steps and customise on your project, here is our strategy.
- This might be a long answer, but I want to leave it as a reference for all Firebase-SwiftUI user management questions on Stack Overflow. -
Creating a SessionStore class which provides the BindableObject, listens to your user's authentication state, and handles the auth and CRUD methods.
Creating a Model to our project ( you already did it)
Adding Auth methods in SessionStore Class.
Listening for changes and putting things together.
Let's start with the SessionStore class:
import SwiftUI
import Firebase
import Combine
class SessionStore : BindableObject {
var didChange = PassthroughSubject<SessionStore, Never>()
var session: User? { didSet { self.didChange.send(self) }}
var handle: AuthStateDidChangeListenerHandle?
func listen () {
// monitor authentication changes using firebase
handle = Auth.auth().addStateDidChangeListener { (auth, user) in
if let user = user {
// if we have a user, create a new user model
print("Got user: \(user)")
self.session = User(
uid: user.uid,
displayName: user.displayName,
email: user.email
)
} else {
// if we don't have a user, set our session to nil
self.session = nil
}
}
}
// additional methods (sign up, sign in) will go here
}
Notice that we’ve declared that our session property is an optional User type, which we haven’t yet defined. Let’s quickly make one:
class User {
var uid: String
var email: String?
var displayName: String?
init(uid: String, displayName: String?, email: String?) {
self.uid = uid
self.email = email
self.displayName = displayName
}
}
Now, add the signUp, signIn and signOut methods:
class SessionStore : BindableObject {
// prev code...
func signUp(
email: String,
password: String,
handler: @escaping AuthDataResultCallback
) {
Auth.auth().createUser(withEmail: email, password: password, completion: handler)
}
func signIn(
email: String,
password: String,
handler: @escaping AuthDataResultCallback
) {
Auth.auth().signIn(withEmail: email, password: password, completion: handler)
}
func signOut () -> Bool {
do {
try Auth.auth().signOut()
self.session = nil
return true
} catch {
return false
}
}
}
Finally, we need a way to stop listening to our authentication change handler.
class SessionStore : BindableObject {
// prev code...
func unbind () {
if let handle = handle {
Auth.auth().removeStateDidChangeListener(handle)
}
}
}
Lastly, let's make our ContentView:
import SwiftUI
struct ContentView : View {
@EnvironmentObject var session: SessionStore
var body: some View {
Group {
if (session.session != nil) {
Text("Hello user!")
} else {
Text("Our authentication screen goes here...")
}
}
}
}
|
Here is the solution that worked for me (Tested on ASP .NET Core 2.1 and 3.1)
Don't set a default authentication scheme since you have 2 types (Cookies and JWT). i.e. your call to AddAuthentication should be without parameters:
services.AddAuthentication()
.AddAzureAD(options => Configuration.Bind("AzureAd", options))
.AddJwtBearer(o=> {
o.Authority = "https://login.microsoftonline.com/common";
o.TokenValidationParameters.ValidateAudience = false;
o.TokenValidationParameters.ValidateIssuer = false;
});
Note that I explicitly didn't bind your AD configuration because /common needs to be applied to the authority (or the tenant id)
Also I set validation for audience and issuer to false so that any AAD token will work for testing. You should obviously set the correct audience/issuer
I used AddAzureAd and not AddSignIn (is that a custom external library you are using?)
Create a policy that accepts both authentication schemes:
services.AddAuthorization(options =>
{
options.AddPolicy("UserAndApp", builer =>
{
builer.AuthenticationSchemes.Add(JwtBearerDefaults.AuthenticationScheme);
builer.AuthenticationSchemes.Add(AzureADDefaults.AuthenticationScheme);
builer.RequireAuthenticatedUser();
});
});
Replace this with your existing authorization setup
Use the new policy name in your controller:
[Authorize("UserAndApp")]
public class HomeController : Controller
Some explanation on the mechanics:
You don't want to set up a default authentication scheme, since that would be the single scheme run by the authorization middleware, while you have 2 different types
The policy will try run both authentication handlers, if one of them succeeds then authentication succeeded
Note: if you send a request with an invalid Bearer token, both authentication handlers will fail; in this case AzureADDefaults will "win" since it actually implements a challenge method and will redirect you (status code 302), so make sure to handle this in your app
Assuming your application adheres to best practices and does not have vulnerabilities, this is absolutely safe, as long as you do not (accidentally) include any credentials or secrets as you have mentioned.
If your application does have vulnerabilities, putting it on GitHub might actually decrease the danger. If the vulnerability is in a dependency you are using, GitHub might alert you to the vulnerability, making you aware of it and allowing you to fix it. Furthermore, other users might find the flaw, report an issue or PR and help you fix it. Another added benefit is that your code is securely stored off-site, should your own computer become compromised.
On the other hand, a motivated attacker might want to exploit the vulnerability. In order to do that, they'll still need to sift through your code to find it, and then attack you or someone using your software. Unless your software is used by high-value targets or lots of targets, this isn't economical for the attacker.
Is it safe if the github repo is set to private?
Pretty much so. The contents of private repos are regulated in section E of the ToS:
Short version: You may have access to private repositories. We treat the content of private repositories as confidential, and we only access it for support reasons, with your consent, or if required to for security reasons.
I encourage you to read the whole section of the ToS, it is not that long but a worthwhile read if you have concerns about the confidentiality of the private repo.
Note that Microsoft themselves nowadays host the Windows source code on GitHub, in a private repo. And many other companies do as well. GitHub has managed to gain a reputation for being trustworthy in that regard.
Imho, I would not hesitate to publish open source projects publicly on GitHub. But if the project is a closed-source, for profit application, the question arises why you'd like to make the source code available in the first place. A private repo would be much better suited for that. |
As of Fargate platform 1.4, released on 04/2020, ephemeral storage is now 20 GB, instead of 10 GB.
Additionally, you can now mount persistent EFS storage volumes in Fargate tasks.
For example:
{
"containerDefinitions": [
{
"name": "container-using-efs",
"image": "amazonlinux:2",
"entryPoint": [
"sh",
"-c"
],
"command": [
"ls -la /mount/efs"
],
"mountPoints": [
{
"sourceVolume": "myEfsVolume",
"containerPath": "/mount/efs",
"readOnly": true
}
]
}
],
"volumes": [
{
"name": "myEfsVolume",
"efsVolumeConfiguration": {
"fileSystemId": "fs-1234",
"rootDirectory": "/path/to/my/data",
"transitEncryption": "ENABLED",
"transitEncryptionPort": integer,
"authorizationConfig": {
"accessPointId": "fsap-1234",
"iam": "ENABLED"
}
}
}
]
}
Taken from:
efs-volumes in Fargate |
Theory
The TrueDepth sensor lets the iPhone X / 11 / 12 / 13 generate a high quality ZDepth channel in addition to the RGB channels that are captured through a regular selfie camera. The ZDepth channel allows us to visually tell the difference between a real human face and a photo. In the ZDepth channel, a human face is represented as a gradient, but a photo has an almost solid color, because all pixels on a photo's plane are equidistant from the camera.
AVFoundation
At the moment the AVFoundation API has no Bool-type instance property allowing you to find out whether it's a real face or a photo, but AVFoundation's capture subsystem provides you with the AVDepthData class – a container for per-pixel distance data (depth map) captured by a camera device. A depth map describes, at each pixel, the distance to an object in meters.
@available(iOS 11.0, *)
open class AVDepthData: NSObject {
open var depthDataType: OSType { get }
open var depthDataMap: CVPixelBuffer { get }
open var isDepthDataFiltered: Bool { get }
open var depthDataAccuracy: AVDepthDataAccuracy { get }
}
A pixel buffer is capable of containing the depth data's per-pixel depth or disparity map.
var depthDataMap: CVPixelBuffer { get }
ARKit
ARKit's heart beats thanks to AVFoundation and CoreMotion sessions (to a certain extent it also uses Vision). Of course you can use this framework for human face detection, but remember that ARKit is a computationally intensive module due to its "heavy metal" tracking subsystem. For successful real face (not photo) detection, use ARFaceAnchor, allowing you to register the head's motion and orientation at 60 fps, and facial blendshapes, allowing you to register the user's facial expressions in real time.
Vision
Implement Apple Vision and CoreML techniques to recognize and classify a human face contained in a CVPixelBuffer. But remember, you need a ZDepth-to-RGB conversion in order to work with Apple Vision – AI / ML mobile frameworks don't work with depth map data directly at the moment. When you want to use RGBD data for authentication, and there will be just one or two users' faces to recognize, it considerably simplifies the task for the model learning process. All you have to do is to create an mlmodel for Vision containing many variations of ZDepth facial images.
You can use Apple Create ML app for generating a lightweight and effective mlmodel files.
Useful links
Sample code for detecting and classifying images using Vision can be found here and here. Also you can read this post to find out how to convert AVDepthData to a regular RGB pattern.
Disclaimer: I am a maintainer of the free open source package below, but I think it's appropriate here as it's a common question there isn't a great answer for, as many of the popular solutions have the specific security flaws raised in the question (such as not using CSRF where appropriate and exposing Session Tokens or web tokens to client side JavaScript).
The package NextAuth.js attempts to address the issues raised above, with free open source software.
It uses httpOnly cookies with secure.
It has CSRF protection (double submit cookie method, with signed cookies).
Cookies are prefixed as appropriate (e.g. __Host- or __Secure-).
It supports email/passwordless signin and OAuth providers (with many included).
It supports both JSON Web Tokens (signed + encrypted) and Session Databases.
You can use it without a database (e.g. any ANSI SQL, MongoDB).
Has a live demo (view source).
It is 100% FOSS; it is not commercial software or a SaaS solution (it is not selling anything).
Example API Route
e.g. pages/api/auth/[...nextauth].js
import NextAuth from 'next-auth'
import Providers from 'next-auth/providers'
const options = {
providers: [
// OAuth authentication providers
Providers.Apple({
clientId: process.env.APPLE_ID,
clientSecret: process.env.APPLE_SECRET
}),
Providers.Google({
clientId: process.env.GOOGLE_ID,
clientSecret: process.env.GOOGLE_SECRET
}),
// Sign in with email (passwordless)
Providers.Email({
server: process.env.MAIL_SERVER,
from: '<[email protected]>'
}),
],
// MySQL, Postgres or MongoDB database (or leave empty)
database: process.env.DATABASE_URL
}
export default (req, res) => NextAuth(req, res, options)
Example React Component
e.g. pages/index.js
import React from 'react'
import {
useSession,
signin,
signout
} from 'next-auth/client'
export default () => {
const [ session, loading ] = useSession()
return <p>
{!session && <>
Not signed in <br/>
<button onClick={signin}>Sign in</button>
</>}
{session && <>
Signed in as {session.user.email} <br/>
<button onClick={signout}>Sign out</button>
</>}
</p>
}
Even if you don't choose to use it, you may find the code useful as a reference (e.g. how JSON Web Tokens are handled and how they are rotated in sessions).
First of all, don't confuse encrypting with hashing. In Eastrall's answer they imply that you could use encryption for a password field. Do not do this.
Also, you should change the initialisation vector every time you encrypt a new value, which means you should avoid implementations like Eastrall's library that set a single IV for the whole database.
Encryption and decryption add CPU work on every write and read, so encrypting everything in your database is going to affect your performance at least marginally.
If done properly, your encrypted payload is not going to just be the cipher text, but should also contain the ID of the encryption key, details about the algorithm used, and a signature. This means your data is going to take up a lot more space compared to the plain text equivalent. Take a look at https://github.com/blowdart/AspNetCoreIdentityEncryption if you want to see how you could implement that yourself. (The readme in that project is worth reading anyway)
With that in mind, the best solution for your project might depend on how important it is for you to minimise those costs.
If you're going to use the .NET Core Aes.Create(); like in the library in Eastrall's answer, the cipher text is going to be a byte[] type. You could use the column type in your database provider for byte[], or you could encode as base64 and store as a string. Typically storing as a string is worthwhile: base64 will take up about 33% more space than byte[], but is easier to work with.
I suggest making use of the ASP.NET Core Data Protection stack instead of using the Aes classes directly, as it helps you do key rotation and handles the encoding in base64 for you. You can install it into your DI container with services.AddDataProtection() and then have your services depend upon IDataProtectionProvider, which can be used like this:
// Make sure you read the docs for ASP.NET Core Data Protection!
// protect
var payload = dataProtectionProvider
.CreateProtector("<your purpose string here>")
.Protect(plainText);
// unprotect
var plainText = dataProtectionProvider
.CreateProtector("<your purpose string here>")
.Unprotect(payload);
Of course, read the documentation and don't just copy the code above.
In ASP.NET Core Identity, the IdentityUserContext uses a value converter to encrypt personal data marked with the [ProtectedPersonalData] attribute.
Eastrall's library is also using a ValueConverter.
This approach is handy because it doesn't require you to write code in your entities to handle conversion, something that might not be an option if you are following a Domain Driven Design approach (e.g. like the .NET Architecture Seedwork).
But there is a drawback: if you have a lot of protected fields on your entity, the code below would cause every single encrypted field on the user object to get decrypted, even though not a single one is being read.
var user = await context.Users.FirstOrDefaultAsync(u => u.Id == id);
user.EmailVerified = true;
await context.SaveChangesAsync();
You could avoid using a value converter by instead using a getter and setter on your property like the code below. However that means you will need to place encryption specific code in your entity, and you will have to wire up access to whatever your encryption provider is. This could be a static class, or you'll have to pass it in somehow.
private string secret;
public string Secret {
get => SomeAccessibleEncryptionObject.Decrypt(secret);
set => secret = SomeAccessibleEncryptionObject.Encrypt(value);
}
You would then be decrypting every time you access the property, which could cause you unexpected trouble elsewhere. For example the code below could be very costly if emailsToCompare was very large.
foreach (var email in emailsToCompare) {
if(email == user.Email) {
// do something...
}
}
You can see that you'd need to memoize your encrypt and decrypt calls, either in the entity itself or in the provider.
Avoiding the value converter while still hiding the encryption from outside the entity or the database configuration is complex. And so if performance is so much of an issue that you can't go with the value converters, then your encryption is possibly not something that you can hide away from the rest of your application, and you would want to be running the Protect() and Unprotect() calls in code completely outside of your Entity Framework code.
Here is an example implementation inspired by the value converter setup in ASP.NET Core Identity but using an IDataProtectionProvider instead of IPersonalDataProtector:
public class ApplicationUser
{
// other fields...
[Protected]
public string Email { get; set; }
}
public class ProtectedAttribute : Attribute
{
}
public class ApplicationDbContext : DbContext
{
public ApplicationDbContext(DbContextOptions options)
: base(options)
{
}
public DbSet<ApplicationUser> Users { get; set; }
protected override void OnModelCreating(ModelBuilder builder)
{
// other setup here..
builder.Entity<ApplicationUser>(b =>
{
this.AddProtecedDataConverters(b);
});
}
private void AddProtecedDataConverters<TEntity>(EntityTypeBuilder<TEntity> b)
where TEntity : class
{
var protectedProps = typeof(TEntity).GetProperties()
.Where(prop => Attribute.IsDefined(prop, typeof(ProtectedAttribute)));
foreach (var p in protectedProps)
{
if (p.PropertyType != typeof(string))
{
// You could throw a NotSupportedException here if you only care about strings
var converterType = typeof(ProtectedDataConverter<>)
.MakeGenericType(p.PropertyType);
var converter = (ValueConverter)Activator
.CreateInstance(converterType, this.GetService<IDataProtectionProvider>());
b.Property(p.PropertyType, p.Name).HasConversion(converter);
}
else
{
ProtectedDataConverter converter = new ProtectedDataConverter(
this.GetService<IDataProtectionProvider>());
b.Property(typeof(string), p.Name).HasConversion(converter);
}
}
}
private class ProtectedDataConverter : ValueConverter<string, string>
{
public ProtectedDataConverter(IDataProtectionProvider protectionProvider)
: base(
s => protectionProvider
.CreateProtector("personal_data")
.Protect(s),
s => protectionProvider
.CreateProtector("personal_data")
.Unprotect(s),
default)
{
}
}
// You could get rid of this one if you only care about encrypting strings
private class ProtectedDataConverter<T> : ValueConverter<T, string>
{
public ProtectedDataConverter(IDataProtectionProvider protectionProvider)
: base(
s => protectionProvider
.CreateProtector("personal_data")
.Protect(JsonSerializer.Serialize(s, default)),
s => JsonSerializer.Deserialize<T>(
protectionProvider.CreateProtector("personal_data")
.Unprotect(s),
default),
default)
{
}
}
}
Finally, the responsibility of encryption is complex and I would recommend ensuring you have a firm grasp of whatever setup you go with to prevent things like data loss from losing your encryption keys. Also, the DotNet Security CheatSheet from the OWASP Cheatsheet Series is a useful resource to read. |
I struggled very much authenticating from Apps Script to invoke a Cloud Run application and just figured it out, and I believe it's similar for calling any Google Cloud application including Cloud Functions. Essentially the goal is to invoke an HTTP method protected by Google Cloud IAM using the authentication information you already have running Apps Script as the user.
The missing step I believe is that the technique you're using will only work if the Apps Script script and Google Cloud Function (or Run container in my case) are in the same GCP project. (See how to associate the script with the GCP project.)
Setting it up this way is much simpler than otherwise: when you associate the script with a GCP project, this automatically creates an OAuth Client ID configuration in the project, and Apps Script's getIdentityToken function returns an identity token that is only valid for that client ID (it's coded into the aud field of the token). If you wanted an identity token that works for another project, you'd need to get one another way.
If you are able to put the script and GCP function or app in the same GCP project, you'll also have to do these things, many of which you already did:
Successfully test authentication of your cloud function via curl https://MY_REGION-MY_PROJECT.cloudfunctions.net/MY_FUNCTION -H "Authorization: Bearer $(gcloud auth print-identity-token)" (as instructed here). If this fails then you have a different problem than is asked in this Stack Overflow question, so I'm omitting troubleshooting steps for this.
Ensure you are actually who the script is running as. You cannot get an identity token from custom function in a spreadsheet as they run anonymously. In other cases, the Apps Script code may be running as someone else, such as certain triggers.
Redeploy the Cloud Function as mentioned here (or similarly redeploy the Cloud Run container as mentioned here) so the app will pick up any new Client ID configuration. This is required after any new Client ID is created, including the one created automatically by adding or re-adding the script to the GCP project. (If you move the script to another GCP project and then move it back again, it seems to create another Client ID rather than reuse the old one and the old one will stop working.)
Add the "openid" scope (and all other needed scopes, such as https://www.googleapis.com/auth/script.external_request) explicitly in the manifest. getIdentityToken() will return null without the openid scope which can cause this error. Note to readers: read this bullet point carefully - the scope name is literally just "openid" - it's not a URL like the other scopes.
"oauthScopes": ["openid", "https://...", ...]
Use getIdentityToken() and do NOT use getOAuthToken(). According to what I've read, getOAuthToken() returns an access token rather than an identity token. Access tokens do not prove your identity; they just prove authorization to access some resources. With that in place, the call itself is sketched below.
If you are not able to add the script to the same project as the GCP application, I don't know what to do as I've never successfully tried it. Generally you're tasked with obtaining an OAuth identity token tied to one of your GCP client ids. I don't think one app (or GCP project) is supposed to be able to obtain an identity token for a different OAuth app (different GCP project). Anyway, it may still be possible. Google discusses OAuth authentication at a high level in their OpenID Connect docs. Perhaps an HTML service to do a regular Google sign-in flow with a web client, would work for user-present operations if you get the user to click the redirect link as Apps Script doesn't allow browser redirects. If you just need to protect your service from the public, perhaps you could try other authentication options that involve service accounts. (I haven't tried this either.) If the service just needs to know who the user is, perhaps you could parse the identity token and send the identifier of the user as part of the request. If the service needs to access their Google resources, then maybe you could have the user sign in to that app separately and use OAuth generally for long term access to their resources, using it as needed when called by Apps Script. |
Recommended Approach
The best way to handle authentication for Azure Functions is to leverage the built-in Authentication and Authorization feature. This uses an existing auth provider to authenticate your users allowing you to avoid creating/storing/maintaining user ids & passwords.
Here's a walkthrough adding Azure AD B2C Authentication to Azure Functions: https://github.com/jimbobbennett/MobileAppsOfTomorrow-Lab/blob/master/Workshop/2-SetupAzureFunctions.md#4-setup-function-app-authentication
Alternative Approach
Since it sounds like you aren't using authentication and you want to have a secure API that only your app can access, we can use AuthorizationLevel=Function and inject the API key into our app using our Continuous Integration server at build-time.
I do this for my GitTrends app. Here's how:
Create a AzureConstants.cs that will store the Functions API Key: https://github.com/brminnick/GitTrends/blob/master/GitTrends.Shared/Constants/AzureConstants.cs
Use git update-index --assume-unchanged AzureConstants.cs to ensure the API Keys aren't committed into source control: https://www.jimbobbennett.io/hiding-api-keys-from-git/
On your Continuous Integration server, add the API Keys as Environment Variables. Here's how to do it in App Center: https://docs.microsoft.com/appcenter/build/custom/variables/
Add a pre-build script to your Continuous Integration server to inject the API Keys. Here's the pre-build script I use in App Center Build for GitTrends: https://github.com/brminnick/GitTrends/blob/master/GitTrends.iOS/appcenter-pre-build.sh
|
Not sure if I totally get your point. It seems you just want to update TFS test case results.
You could use the REST API to handle this. It will update test results in a test run.
PATCH https://dev.azure.com/{organization}/{project}/_apis/test/Runs/{runId}/results?api-version=5.1
Since you are using Python, it's possible to use a Python script to access the Team Foundation Server (TFS) REST API.
First you need to use Python to connect to your TFS server. TFS uses the NTLM authentication protocol, so you should use HTTP NTLM authentication via the requests_ntlm library.
Code Snippet:
import requests
from requests_ntlm import HttpNtlmAuth
username = '<DOMAIN>\\<UserName>'
password = '<Password>'
tfsApi = 'https://{myserver}/tfs/collectionName/_apis/projects?api-version=2.0'
tfsResponse = requests.get(tfsApi,auth=HttpNtlmAuth(username,password))
if(tfsResponse.ok):
tfsResponse = tfsResponse.json()
print(tfsResponse)
else:
tfsResponse.raise_for_status()
For more details, take a look at this blog.
You could try using this, adapted from the example in this tutorial:
from django.shortcuts import render, redirect
from django.contrib.auth import authenticate, login as auth_login  # aliased so the view below doesn't shadow it
from django.contrib.auth.forms import AuthenticationForm
def login(request):
if request.user.is_authenticated:
return redirect('/')
if request.method == 'POST':
username = request.POST['username']
password = request.POST['password']
user = authenticate(request, username=username, password=password)
if user is not None:
auth_login(request, user)
return redirect('/')
else:
form = AuthenticationForm(request, data=request.POST)
return render(request, 'blog/login.html', {'form': form, 'title': 'Login'})
else:
form = AuthenticationForm()
return render(request, 'blog/login.html', {'form': form, 'title': 'Login'})
To use that you would have to change the login line in your urls.py to this:
path('login/', blog_views.login, name='login'),
|
I just made a test on a MySQL 8 server where I successfully ran the following:
CREATE DATABASE webApp;
CREATE USER 'php_script'@'localhost' IDENTIFIED BY 'php_script';
GRANT INSERT,SELECT ON webApp.* TO 'php_script'@'localhost';
As I expected, the wildcard grant worked without any issues.
So, I'm afraid you will need to search the problem somewhere else.
For example, the target database in one of your examples is named webApp and in the next example it is webAppDB. So, better check the consistency of the names.
Otherwise, your code syntax looks ok for MySQL 8.
[Edit] - additional suggestions
Make sure skip-name-resolve is set to OFF.
Looks like it is OFF by default on MySQL 8, but it might happen that under some circumstances it is set to ON.
In my case, I run MySQL 8 in a docker container and that variable was set to ON.
If the variable is set to ON, MySQL will not resolve localhost to its actual IP, and in your case that might be the problem.
More details on skip-name-resolve here: https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_skip_name_resolve
Make sure you finish the GRANTs commands with a FLUSH PRIVILEGES;
Try to switch default-authentication-plugin to mysql_native_password
By default on MySQL 8, default-authentication-plugin is set to caching_sha2_password. But it looks like mysqli doesn't have support for this authentication method (according to this article from 2018: https://mysqlserverteam.com/upgrading-to-mysql-8-0-default-authentication-plugin-considerations/)
If you want to delete a user existing in Firebase authentication you have two possibilities:
1/ Using the JavaScript SDK (since your app is made with angular)
You call the delete() method, as follows:
const user = firebase.auth().currentUser;
user.delete()
.then(() => {
//....
})
.catch(err => {
if (err.code === "auth/requires-recent-login") {
//Re-authenticate the user
} else {
//....
}
})
Note however, that this method "requires the user to have recently signed in. If this requirement isn't met, ask the user to authenticate again and then call firebase.User.reauthenticateWithCredential". An error with the auth/requires-recent-login code is "thrown if the user's last sign-in time does not meet the security threshold".
So, only the logged-in user can call this method from a front-end, in order to delete his/her own account.
2/ Using the Admin SDK
You can use the Admin SDK's deleteUser() method, for example within a Cloud Function.
In this case, there is no need to have the user logged-in since this is executed in the back-end and it is therefore possible to delete any user.
For example, you could have a Callable Cloud Function triggered by an admin user.
Another possibility, is to trigger a Cloud Function upon the Firestore user's document deletion.
Update based on your Question update:
I understand that you want to delete the user record in the Auth service upon deletion. For that you can write a Cloud Function as follows:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.deleteUser = functions.firestore
.document('Students/{studentID}')
.onDelete((snap, context) => {
const deletedValue = snap.data();
const userEmail = deletedValue.Email;
return admin.auth().getUserByEmail(userEmail)
.then(userRecord => {
const userID = userRecord.uid;
return admin.auth().deleteUser(userID)
})
.catch(error => {
console.log(error.message);
return null;
})
});
|
Go to app/Config/Encryption.php and set your secret key and driver.
or
You can replace the config file’s settings by passing a configuration object of your own to the Services call. The $config variable must be an instance of either the Config\Encryption class or an object that extends CodeIgniter\Config\BaseConfig.
$config = new Config\Encryption();
$config->key = 'aBigsecret_ofAtleast32Characters';
$config->driver = 'OpenSSL';
$encrypter = \Config\Services::encrypter($config);
By the way, the CodeIgniter documentation says:
DO NOT use this or any other encryption library for password storage! Passwords must be hashed instead, and you should do that through PHP’s Password Hashing extension.
password_hash() creates a new password hash using a strong one-way hashing algorithm. password_hash() is compatible with crypt(). Therefore, password hashes created by crypt() can be used with password_hash().
Various hash algorithms are supported by password_hash; you can use any one of them. Here is an example.
$hashedpass = password_hash($password, PASSWORD_ARGON2I);
To verify your password on login, use the password_verify function: it takes the user's password and the hash and returns a boolean value.
password_verify($userpassword, $hash);
For more details about hashing passwords see this link:
https://www.php.net/manual/en/function.password-hash.php |
For certificate handling at the session level, I used URLProtectionSpace on the shared URLCredentialStorage and then set that on the Alamofire.Session configuration.
Here is an example of setting it up (port 443 might be enough):
fileprivate func registerURLCredential() {
let storage = URLCredentialStorage.shared
do {
let credential: URLCredential = try loadURLCredential("certificate", password: "blablabla")
let url = URL.API
let host = url.host ?? ""
let ports: [Int] = [80, 443, url.port ?? 0]
for port in ports {
let space = URLProtectionSpace(
host: host,
port: port,
protocol: url.scheme,
realm: nil,
authenticationMethod: NSURLAuthenticationMethodClientCertificate
)
storage.set(credential, for: space)
}
} catch {
print(error)
}
}
fileprivate func createSession(_ configurationHandler: ((_ configuration: URLSessionConfiguration) -> Void)? = nil) -> Alamofire.Session {
let configuration = URLSessionConfiguration.af.default
registerURLCredential()
configuration.urlCredentialStorage = .shared
configurationHandler?(configuration)
let session = Session(
configuration: configuration,
requestQueue: .global(qos: .background),
serializationQueue: .global(qos: .background)
)
return session
}
A simple use of that would look like:
let session = createSession({ configuration in
configuration.httpMaximumConnectionsPerHost = 1
})
|
I'd be surprised if Identity did this by default just by setting up the EmailSender. You do not seem to provide any logic for the confirmation, and there is nowhere to call the EmailSender.
You need to inject the IEmailSender as a service in your controller where you are creating the user, and add the logic to generate a confirmation token and actually send the email.
I'd expect something in the lines of:
var token = await userManager.GenerateEmailConfirmationTokenAsync(user);
var confirmationLink = Url.Action(nameof(ConfirmEmail), "Account",
new { token , email = user.Email },
Request.Scheme);
await _emailSender.SendEmailAsync(user.Email, "Confirmation email link", confirmationLink);
Of course you could look further into making your email prettier, but that's the core of it.
Also, just to make sure that you have the whole picture, Identity does not also provide an email implementation by default, you also have to set it up: https://docs.microsoft.com/en-us/aspnet/core/security/authentication/accconfirm?view=aspnetcore-3.1&tabs=visual-studio#install-sendgrid |
I guess the issue is that when you have more than one error, you only see the single message The given data was invalid.
Here is why:
With this syntax:
"errors": {
    "password": ["The password confirmation does not match."]
}
You are not able to use error.message properly, as you do in the notify call:
this.$vs.notify({
time: 6000,
title: 'Authentication Error',
text: error.message,
iconPack: 'feather',
icon: 'icon-alert-circle',
color: 'danger'
})
What you can do is return the same array of errors always, even if you have one error or ten errors:
"errors": [
{
type: "password",
message: "Wrong password"
},
{
type: "user",
message: "Wrong user"
},
// As recommendation, notify only "Authentication error", instead the specific field
{
type: "Authentication error",
message: "Wrong credentials"
},
]
And you can notify this way:
.catch(error => { // You return errors inside error object
const errors = error.errors; // and you can get it here
// const { errors } = error; // alternative syntax
for (const err of errors) {
this.$vs.notify({
time: 6000,
title: 'Authentication Error', // You can use err.type
text: err.message,
iconPack: 'feather',
icon: 'icon-alert-circle',
color: 'danger'
})
}
}
Alternative for:
.catch(error => { // You return errors inside error object
const errors = error.errors; // and you can get it here
// const { errors } = error; // alternative syntax
for (let i = 0; i < errors.length; i++) {
this.$vs.notify({
time: 6000,
title: 'Authentication Error', // You can use errors[i].type
text: errors[i].message,
iconPack: 'feather',
icon: 'icon-alert-circle',
color: 'danger'
})
}
}
If you want to keep the structure in your error response, then check what kind of response you are getting and send the notification:
.catch(err => { // You return errors inside error object
const { error, errors } = err;
if (error) { // only one error
this.$vs.notify({
time: 6000,
title: 'Authentication Error',
text: error.message,
iconPack: 'feather',
icon: 'icon-alert-circle',
color: 'danger'
})
} else {
for (const errorType in errors) { // more than one error
this.$vs.notify({
time: 6000,
title: 'Authentication Error',
text: errors[errorType][0],
iconPack: 'feather',
icon: 'icon-alert-circle',
color: 'danger'
})
}
}
}
This approach isn't a good idea, but if you want to use it, it's fine.
Best to use it as a DataFrame, then you can filter on columns:
import requests
import pandas as pd

keyword = 'oracle'
url = 'https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword={}'.format(keyword)
html_data = requests.get(url).text
df = pd.read_html(html_data)
df = df[2]
df['Year'] = df['Name'].str.split('-').str[1].astype(int)
df = df[df['Year']>2016]
print(df)
Name Description Year
0 CVE-2020-9402 Django 1.11 before 1.11.29, 2.2 before 2.2.11,... 2020
1 CVE-2020-9315 ** PRODUCT NOT SUPPORTED WHEN ASSIGNED ** Orac... 2020
2 CVE-2020-9314 ** PRODUCT NOT SUPPORTED WHEN ASSIGNED ** Orac... 2020
3 CVE-2020-8428 fs/namei.c in the Linux kernel before 5.5 has ... 2020
4 CVE-2020-7221 mysql_install_db in MariaDB 10.4.7 through 10.... 2020
... ... ... ...
2632 CVE-2017-10001 Vulnerability in the Oracle Hospitality Simpho... 2017
2633 CVE-2017-1000030 Oracle, GlassFish Server Open Source Edition 3... 2017
2634 CVE-2017-1000029 Oracle, GlassFish Server Open Source Edition 3... 2017
2635 CVE-2017-1000028 Oracle, GlassFish Server Open Source Edition 4... 2017
2636 CVE-2017-10000 Vulnerability in the Oracle Hospitality Report... 2017
|
You can loop over the array of objects using forEach and loop over the features property of each object.
const projects = [
{"Title":"InstaJam",
"image":"img/ig.jpg",
"Gif":"gif-title",
"Github":"github",
"description":["first","second", "third"],
"features":["PHP / Laravel", "Html and CSS", "Blade", "Composer",
"User authentication", "MySQL"
],
"Link":"link",
"class": "app"
},
{"Title":"Kayak HTML Email",
"image":"img/kayak.png",
"Gif":"gif-title",
"Github":"github",
"description":["first","second", "third"],
"features":[
"User authentication", "MySQL"
],
"Link":"link"
},
];
const peet = document.querySelectorAll('.projectInserts');
projects.forEach((project,i)=>{
peet[i].textContent += "Title: " + project.Title + "\n";
peet[i].textContent += "Features:\n";
project.features.forEach(feature=>peet[i].textContent+=feature+"\n");
});
.projectInserts {
border: 1px solid red;
}
<pre class="projectInserts"></pre>
<pre class="projectInserts"></pre>
|
I don't think you need this if you installed PHPMailer via Composer, so I have removed this part from your code.
require 'vendor/phpmailer/phpmailer/src/Exception.php';
require 'vendor/phpmailer/phpmailer/src/PHPMailer.php';
require 'vendor/phpmailer/phpmailer/src/SMTP.php';
Try the below code. I have reformatted your code.
<?php
use PHPMailer\PHPMailer\PHPMailer;
use PHPMailer\PHPMailer\Exception;
use PHPMailer\PHPMailer\SMTP;
// Include Composer autoload.php file
require 'vendor/autoload.php';
// Create object of PHPMailer class
$mail = new PHPMailer(true);
$output = '';
if (isset($_POST['submit'])) {
$name = $_POST['contactName'];
$email = $_POST['contactEmail'];
$subject = $_POST['contactSubject'];
$message = $_POST['contactMessage'];
try {
$mail->isSMTP();
$mail->Host = 'smtp.gmail.com';
$mail->SMTPAuth = true;
// Gmail ID which you want to use as SMTP server
$mail->Username = '[email protected]';
// Gmail Password
$mail->Password = 'secret';
$mail->SMTPSecure = PHPMailer::ENCRYPTION_STARTTLS;
$mail->Port = 587;
// Email ID from which you want to send the email
$mail->setFrom('[email protected]');
// Recipient Email ID where you want to receive emails
$mail->addAddress('[email protected]');
// $mail->addAttachment('');
$mail->isHTML(true);
$mail->Subject = 'Form Submission';
$mail->Body = "<h3>Name : $name <br>Email : $email <br>Message : $message</h3>";
$mail->send();
$output = '<div class="alert alert-success"><h5>Thankyou! for contacting us, We\'ll get back to you soon!</h5></div>';
}
catch (Exception $e) {
$output = '<div class="alert alert-danger"><h5>' . $e->getMessage() . '</h5></div>';
}
}
?>
<!DOCTYPE html>
<html lang="en">
<head>
<meta content="text/html;charset=utf-8" http-equiv="Content-Type">
<title>insert page</title>
<script type="text/javascript">
function back_to_main() {
setTimeout(function () {
//Redirect with JavaScript
window.location = './index.html'
}, 5000);
}
</script>
</head>
<body onload='back_to_main();'>
thank you...
</body>
</html>
Please note I have not tested the above code.
For more information please read https://github.com/PHPMailer/PHPMailer |
The problem is where you are checking for your request parameters.
Your View
<body>
<div class="bg">
@if(count($errors) > 0)
@foreach($errors as $error)
<p>{{$error}}</p>
@endforeach
@endif
<div class="a">
<form method="post" action="/" enctype='multipart/form-data'>
{{ csrf_field() }}
<input type="text" placeholder="Username" name="username">
<input type="Password" placeholder="Password" name="psw">
<button type="submit" name="submit">Login</button>
</form>
</div>
</div>
</body>
Your routes
Route::post('/', function (Request $request) {
$errors = [];
if($request->has('username') && $request->has('psw')){
if($request->input('username') === 'admin' && $request->input('psw') === 'admin'){
return redirect('/admin_page');
}
else {
$errors[] = "Invalid login attempt";
}
}
return view('admin', ['errors' => $errors]);
});
But this is not the way to do it; this just solves your current problem. I would advise looking into Laravel's built-in authentication.
TL;DR: The application authentication and authorization level is managed by service account. But putting a service account key file in your Javascript app (and thus viewable by any user in their browser) is useless because your secret becomes public!
With Cloud Run, you have 2 mode: private and public
If public, no security, all the requests go to your Cloud Run service
If private, the Google Front End checks the identity of the requester and whether they have the run.invoker permission. If so, the request passes through; otherwise it's blocked.
To be authenticated, today, you need a service account. If you aren't on Google Cloud Platform (here, in the users' browsers, for example), you need a service account key file. But if you put it in your website, it's not secure, because anyone can take it and use it outside your website.
So, today, you can't do this: Either your Cloud Run is public, without any check, or private with authentication (and IAM authorization)
But, soon, at least in 2020 I hope, you should be able to put a load balancer in front of Cloud Run and activate IAP on it. Thus, the users will be authenticated thanks to their Google account authentication cookie (SSO). However, in private mode, the user will be asked to authenticate before reaching the website. It's not authentication-free; it's just authentication not managed by you.
From the listing, there's no call. It loads the PC-relative offset into the register, then does nothing with it. But look at the ADR/NOP sequences in a couple of places. It looks as if the NOPs are meant to be patched with a call command (BL).
Several possibilities here. It could be that the disassembler is misreading the command. Unlikely, but check the machine code encoding, just in case. There could be a relocation directive that would patch those spots during module loading. Finally, there could be a piece of explicit self patching logic in the program itself, introduced specifically to thwart reverse engineers like yourself. The self patching will definitely be elsewhere in the module, to throw you off the scent.
EDIT: the code has a couple of funny tells. I don't know what __NSConcreteStackBlock is, but it strongly suggests Apple/Cocoa. At the same time, the PACIA command is specific to the ARMv8.3 instruction set, and has to do with the pointer authentication logic for jump integrity protection. To the best of my knowledge, this is the stuff of the upcoming ptrauth/ARM64E initiative by Apple, which is supported by the latest Xcode and the latest iDevices, but not accepted by the App Store yet (as of the time of this writing, 5/18/2020). Pointer authentication is potentially fragile; Apple's official line is "try it in dev for now". My point is, it could be some kind of function call point postprocessing that's part of the ptrauth-aware code generation pipeline. I don't know enough about ptrauth to tell. :(
EDIT: yet another possibility. Did the assembly come, by any chance, from disassembling an object file, as opposed to a linked executable? Then the NOPs could be placeholders for cross-module calls. That would also explain the inconsistent label naming.
If you want to call the Azure AD Graph API to assign permissions with the OAuth 2.0 client credentials flow, we need to grant enough permissions (Azure AD Graph -> Application permissions -> Application.ReadWrite.All).
Besides, regarding how to assign permissions to an AD application with PowerShell, we can also use the PowerShell module AzureAD.
For example
Connect-AzureAD
$AppAccess = [Microsoft.Open.AzureAD.Model.RequiredResourceAccess]@{
ResourceAppId = "00000003-0000-0000-c000-000000000000";
ResourceAccess =
[Microsoft.Open.AzureAD.Model.ResourceAccess]@{
Id = "";
Type = ""},
[Microsoft.Open.AzureAD.Model.ResourceAccess]@{
Id = "";
Type = ""}
}
Set-AzureADApplication -ObjectId <the app object id> -RequiredResourceAccess $AppAccess
Update
According to my test, when we use the Az module, we can use the following method to get an access token and call the AAD Graph REST API. But please note that when you use this method, the account you use to run Connect-AzAccount should be an Azure AD Global Admin.
Connect-AzAccount
$context = Get-AzContext
$dexResourceUrl='https://graph.windows.net/'
$token = [Microsoft.Azure.Commands.Common.Authentication.AzureSession]::Instance.AuthenticationFactory.Authenticate($context.Account,
$context.Environment,
$context.Tenant.Id.ToString(),
$null,
[Microsoft.Azure.Commands.Common.Authentication.ShowDialog]::Never,
$null, $dexResourceUrl).AccessToken
# assign permissions
$headers =@{}
$headers.Add("Content-Type", "application/json")
$headers.Add("Accept", "application/json")
$headers.Add("Authorization", "Bearer $($token)")
$body = "{
`n `"requiredResourceAccess`": [{
`n `"resourceAppId`": `"00000003-0000-0000-c000-000000000000`",
`n `"resourceAccess`": [
`n {
`n `"id`": `"405a51b5-8d8d-430b-9842-8be4b0e9f324`",
`n `"type`": `"Role`"
`n },
`n {
`n `"id`": `"09850681-111b-4a89-9bed-3f2cae46d706`",
`n `"type`": `"Role`"
`n }
`n ]
`n }
`n ]
`n}
`n"
$url ='https://graph.windows.net/hanxia.onmicrosoft.com/applications/d4975420-841f-47d5-a3d2-0870901f13cd?api-version=1.6'
Invoke-RestMethod $url -Method 'PATCH' -Headers $headers -Body $body
# check that the permissions you need were added
$headers =@{}
$headers.Add("Accept", "application/json")
$headers.Add("Authorization", "Bearer $($token)")
$url ='https://graph.windows.net/hanxia.onmicrosoft.com/applications/<aad application object id>?api-version=1.6'
$response=Invoke-RestMethod $url -Method 'GET' -Headers $headers
$response.requiredResourceAccess | ConvertTo-Json
|
tl;dr;
Your problem is in CoreModule.cs
Configuration.Modules.ZeroLdap().Enable(typeof(LdapSettings));
According to the docs the Enable method takes an auth source type as a parameter, but you've passed a settings type. Change it to use LdapAuthenticationSource instead.
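For reference, a minimal sketch of the fix (MyLdapAuthenticationSource is an illustrative name; Tenant and User stand for your project's own entities, following the pattern in the ABP LDAP docs):
// In CoreModule.cs - pass the auth source type, not the settings type:
Configuration.Modules.ZeroLdap().Enable(typeof(MyLdapAuthenticationSource));

// A minimal auth source that reuses your ILdapSettings registration:
public class MyLdapAuthenticationSource : LdapAuthenticationSource<Tenant, User>
{
    public MyLdapAuthenticationSource(ILdapSettings settings, IAbpZeroLdapModuleConfig ldapModuleConfig)
        : base(settings, ldapModuleConfig)
    {
    }
}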
How you could have figured this out
The error message says there was a failed cast from LdapSettings to IExternalAuthenticationSource. That's strange because there's no reason your code should be trying to cast between those types!
If you look down the stack, you can see the error is happening inside your TokenAuthController's Authenticate / GetLoginResultAsync method. If you check the code in that method, you probably won't find any direct mention of either LdapSettings or IExternalAuthenticationSource. You will however find a call to AbpLoginManager.LoginAsync. Follow that back up the stack and you can see AbpLoginManager uses IoC to resolve an auth source, and the exception is thrown in the ResolveAsDisposable method of IoC!
It gets a bit trickier here. The bug is presenting itself deep inside ABP and the IoC framework. It's possible there's an obscure bug in one of those frameworks causing the problem, but it's much more likely to be a configuration error. That means the next step is to look through your configuration code for anywhere you may have told the IoC framework to use LdapSettings for an IExternalAuthenticationSource.
All the config happens in the CoreModule.cs file, so let's look there. You have a call to
IocManager.Register<ILdapSettings, LdapSettings>();
which seems to properly register LdapSettings for ILdapSettings. The only other call to IocManager is the standard call to IocManager.RegisterAssemblyByConvention in the Initialize method. No obvious misconfiguration there. There is however a call that uses typeof(LdapSettings) as a parameter.
Configuration.Modules.ZeroLdap().Enable(typeof(LdapSettings));
It's not obvious from the method call what that parameter is for, and LdapSettings is definitely a reasonable possibility for the correct parameter. However, there are two good reasons to look into this method further.
1. Because the parameter is a Type, there won't be compile time checking if we've passed an appropriate type.
2. LdapSettings is part of the actual exception, so any method that uses it is suspect.
That brings us to the documentation where we see the problem. We need to pass the auth source, not the settings.
Why the code "seems to run fine"
The configuration used a Type parameter instead of generics. That means there's no compile time checking if you've passed a valid type (as mentioned above). The program compiles and runs fine until you try to use the misconfigured code. In this case, the misconfiguration won't be used until you try to log in, which triggers the IoC resolver, which accesses the config, and throws the error.
You should either:
Install an HTTPS certificate for your endpoint and run a full end-to-end HTTPS (Recommended)
To set up Kestrel with a certificate on Docker, read this doc
Override the OIDC config used by your app :
Create a metadata.json file
{
"issuer": "http://YYY.azurewebsites.net",
"jwks_uri": "https://YYY.azurewebsites.net/.well-known/openid-configuration/jwks",
"authorization_endpoint": "https://YYY.azurewebsites.net/connect/authorize",
"token_endpoint": "https://YYY.azurewebsites.net/connect/token",
"userinfo_endpoint": "https://YYY.azurewebsites.net/connect/userinfo",
"end_session_endpoint": "https://YYY.azurewebsites.net/connect/endsession",
"check_session_iframe": "https://YYY.azurewebsites.net/connect/checksession"
}
"issuer": "http://YYY.azurewebsites.net" is an HTTP url not HTTPS
Configure the application to get metadata from your custom file
public class Program
{
public static async Task Main(string[] args)
{
var builder = WebAssemblyHostBuilder.CreateDefault(args);
builder.RootComponents.Add<App>("app");
builder.Services.AddOidcAuthentication<RemoteAuthenticationState, RemoteUserAccount>(options =>
{
var providerOptions = options.ProviderOptions;
providerOptions.Authority = "https://YYY.azurewebsites.net";
providerOptions.MetadataUrl = "https://YYY.azurewebsites.net/metadata.json";
providerOptions.PostLogoutRedirectUri = "https://YYY.azurewebsites.net/authentication/logout-callback";
providerOptions.RedirectUri = "https://YYY.azurewebsites.net/authentication/login-callback";
});
await builder.Build().RunAsync();
}
}
|
There is a problem with the algorithm of the do-while loop.
The counter i is incremented just before the condition check.
If '\0' is found at the next array element (note that i has been incremented), the loop exits immediately and won't be able to set length to i in the next iteration (because there is no next iteration).
Since length is not initialized, the program has undefined behavior.
Change:
do
{
if (ch[i] == '\0')
{
length = i;
}
else
{
i++;
}
}
while (ch[i] != '\0');
to
while (ch[i] != '\0') i++;
length = i;
or even simpler:
while (ch[length] != '\0') length++;
and omit the counter i entirely, but then you need to initialize length to 0.
Side Notes:
Change scanf("%s", &ch); to scanf("%s", ch);. - ch decays to a pointer to its first element.
Use a field width at scanf() -> scanf("%50s", ch); to ensure that no buffer overflow occurs when the user inputs a longer string. (The width must be at most the array size minus 1 - e.g. "%50s" requires ch to hold at least 51 chars, leaving room for the terminating null character.)
Always check the return value of scanf() if an error occurred at consuming input.
Never ignore the compiler warnings. For scanf("%s", &ch); the compiler should have raised a warning.
|
Please make sure you are making the acquire token request only once you are successfully logged in:
// No callback. App resumes after closing or moving to new page.
// Check token and username
updateDataFromCache(_msal.loginScopes);
if (!_oauthData.isAuthenticated && _oauthData.userName && !_msal._renewActive) {
// id_token is expired or not present
var self = $injector.get('msalAuthenticationService');
self.acquireTokenSilent(_msal.loginScopes).then(function (token) {
if (token) {
_oauthData.isAuthenticated = true;
}
}, function (error) {
var errorParts = error.split('|');
$rootScope.$broadcast('msal:loginFailure', errorParts[0], errorParts[1]);
});
}
It would be better to move away from AngularJS and upgrade to Angular with MSAL, as there will be more support and updates available.
The problem with tokens is that you can't delete them; you'll have to revoke them by removing them from the store so they can't be used anymore. In fact, with back channel logout you have the same problem: you can't directly delete the cookie, but the client can reject it on the next request.
With a user session, back channel logout can read the session information from the cookie available at the IdentityServer website. The admin however, has no access to this information so you'll need to store the user sessions server side.
This can be either at the client or at the IdentityServer. I would implement a session manager at the client, because it's the client that validates the cookie and can delete the cookie (on the next request).
This allows IdentityServer to perform a normal back channel logout and leave it to the client to remove one or all entries from the session manager on LogoutCallback. This way you can implement different strategies for different clients.
The client can consult the session manager on cookie validation and deny access if the session is not available. Something like:
public class CookieEventHandler : CookieAuthenticationEvents
{
private SessionManager _sessionManager { get; }
public CookieEventHandler(SessionManager sessionManager)
{
_sessionManager = sessionManager;
}
public override async Task ValidatePrincipal(CookieValidatePrincipalContext context)
{
if (context.Principal.Identity.IsAuthenticated)
{
var sub = context.Principal.FindFirst("sub")?.Value;
if (!_sessionManager.HasSession(sub))
{
context.RejectPrincipal();
await context.HttpContext.SignOutAsync(CookieAuthenticationDefaults.AuthenticationScheme);
}
}
}
}
In startup:
services
.AddAuthentication(options =>
{
options.DefaultScheme = "Cookies";
options.DefaultChallengeScheme = "oidc";
})
.AddCookie("Cookies", options =>
{
options.EventsType = typeof(CookieEventHandler);
})
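One gap in the snippet above: for options.EventsType to resolve, CookieEventHandler must itself be registered in DI, along with the session manager it depends on. A minimal in-memory sketch (the SessionManager class and its API are illustrative; use persistent storage in production):
using System.Collections.Concurrent;

// Illustrative in-memory store of active sessions, keyed by the "sub" claim.
public class SessionManager
{
    private readonly ConcurrentDictionary<string, byte> _sessions =
        new ConcurrentDictionary<string, byte>();

    // Add on successful login (e.g. in the OIDC ticket-received event).
    public void AddSession(string sub) => _sessions[sub] = 0;

    // Remove from the back channel LogoutCallback to end the user's session.
    public void RemoveSession(string sub) => _sessions.TryRemove(sub, out _);

    public bool HasSession(string sub) => _sessions.ContainsKey(sub);
}

// And in ConfigureServices - required for options.EventsType to resolve:
services.AddTransient<CookieEventHandler>();
services.AddSingleton<SessionManager>();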
|
How is size of chArray not changing from 30 to the size of the string we entered using cin.get()?
Arrays in C++ have a fixed size. A local array like this is created on the stack with a fixed size given by the programmer, known to the compiler at compile time. This size does not change. Ever.
If you write more characters into the array than its size allows, for example writing 100 characters into an array of size 30, it is called a buffer overflow or buffer overrun. It basically means you crossed the boundary, i.e., the fixed size set, which is 30 in this case.
The other characters entered (after the limit of 30) can go anywhere in the memory because it is undefined where they will go. If you try to print this array, your program will terminate with an error:
*** stack smashing detected ***: terminated
The error in this particular case means you tried to put more data onto the stack than its capacity allows.
However, we have std::string in C++, which you can use if you want a container that changes its size as required. Example:
std::string mystr;
std::cout << "Mystr size before: " << mystr.size() << '\n';
std::getline (std::cin, mystr);
std::cout << "Mystr size after: " << mystr.size() << '\n';
|
The solution was... to read the docs
var token = "eyJ";
hubConnection = new HubConnectionBuilder()
.WithUrl($"{Configuration["Url"]}/chathub?access_token={token}")
.Build();
The token is provided when establishing the connection, via the URL.
We need to modify Startup.cs to support OnMessageReceived.
docs url:
https://docs.microsoft.com/en-us/aspnet/core/signalr/authn-and-authz?view=aspnetcore-3.1#authenticate-users-connecting-to-a-signalr-hub
services.AddAuthentication(options =>
{
// Identity made Cookie authentication the default.
// However, we want JWT Bearer Auth to be the default.
options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
})
.AddJwtBearer(options =>
{
// Configure the Authority to the expected value for your authentication provider
// This ensures the token is appropriately validated
options.Authority = /* TODO: Insert Authority URL here */;
// We have to hook the OnMessageReceived event in order to
// allow the JWT authentication handler to read the access
// token from the query string when a WebSocket or
// Server-Sent Events request comes in.
// Sending the access token in the query string is required due to
// a limitation in Browser APIs. We restrict it to only calls to the
// SignalR hub in this code.
// See https://docs.microsoft.com/aspnet/core/signalr/security#access-token-logging
// for more information about security considerations when using
// the query string to transmit the access token.
options.Events = new JwtBearerEvents
{
OnMessageReceived = context =>
{
var accessToken = context.Request.Query["access_token"];
// If the request is for our hub...
var path = context.HttpContext.Request.Path;
if (!string.IsNullOrEmpty(accessToken) &&
(path.StartsWithSegments("/chathub"))) // must match the hub route used by the client
{
// Read the token out of the query string
context.Token = accessToken;
}
return Task.CompletedTask;
}
};
});
|
If you want to connect an Azure SQL database with Azure MSI in a Python application, we can use the pyodbc package to implement it.
For example
Enable system-assigned identity for your Azure app service
Add the MSI as a contained database user in your database
a. Connect to your SQL database as the Azure AD admin (I use SSMS to do it)
b. Run the following script in your database:
CREATE USER <your app service name> FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER <your app service name>;
ALTER ROLE db_datawriter ADD MEMBER <your app service name>;
ALTER ROLE db_ddladmin ADD MEMBER <your app service name>;
Code
import os
import pyodbc
import requests
import struct
#get access token
identity_endpoint = os.environ["IDENTITY_ENDPOINT"]
identity_header = os.environ["IDENTITY_HEADER"]
resource_uri="https://database.windows.net/"
token_auth_uri = f"{identity_endpoint}?resource={resource_uri}&api-version=2019-08-01"
head_msi = {'X-IDENTITY-HEADER':identity_header}
resp = requests.get(token_auth_uri, headers=head_msi)
access_token = resp.json()['access_token']
# Expand the token into the format expected by the ODBC driver:
# each byte of the UTF-8 token followed by a zero byte (UTF-16-LE).
accessToken = bytes(access_token, 'utf-8')
exptoken = b""
for i in accessToken:
    exptoken += bytes([i])
    exptoken += bytes(1)
tokenstruct = struct.pack("=i", len(exptoken)) + exptoken
# 1256 is SQL_COPT_SS_ACCESS_TOKEN, the pre-connect attribute for passing the token
conn = pyodbc.connect("Driver={ODBC Driver 17 for SQL Server};Server=tcp:andyserver.database.windows.net,1433;Database=database2", attrs_before = { 1256:bytearray(tokenstruct) })
cursor = conn.cursor()
cursor.execute("select @@version")
row = cursor.fetchall()
For more details, please refer to:
https://github.com/AzureAD/azure-activedirectory-library-for-python/wiki/Connect-to-Azure-SQL-Database
https://docs.microsoft.com/en-us/azure/app-service/overview-managed-identity
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-aad-authentication-configure |
I assume you're using the DataStax C# driver. If that is the case there's this documentation section on SSL/TLS which also has links to a couple of examples: https://docs.datastax.com/en/developer/csharp-driver/3.15/features/tls/
If that snippet is accurate, you're not actually setting the SSLOptions on the Builder.WithSSL() method.
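For illustration, a minimal sketch of actually passing the options in (the contact point is a placeholder and ValidateServerCertificate stands for your own callback):
var sslOptions = new SSLOptions()
    .SetRemoteCertValidationCallback(ValidateServerCertificate);

var cluster = Cluster.Builder()
    .AddContactPoint("my-cassandra-host")  // placeholder host
    .WithSSL(sslOptions)                   // the options must be passed here
    .Build();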
If that doesn't work and the code examples don't help you, please show us the ValidateServerCertificate method so we can see what might be going wrong on the certificate validation.
Edit (from my comment below):
On the TLS/SSL documentation page there is a section that is relevant here: Enabling server authentication with a custom root certificate.
As mentioned in the documentation, you either have to install that certificate in the machine where the application is running or you have to provide a custom certificate validator similar to this one.
The SSLOptions.SetCertificateCollection() method is used for client authentication so it is not useful for your situation where you want server authentication. |
I am more and more inclined to say "Web Components" are a language construct.
It is called the Custom Elements API, so no different from the Fetch API, or the MutationObserver API
Then your question is: How can I build an application with the [name here] API?
Superduper "Tools"
Tools like Lit, Hybrids, HyperHTML, Lego, and Stencil all have a polyfill background; they made "Web Components" possible in the olden days when browsers didn't fully support the Custom Elements API.
They have evolved to all claiming "This is the best Tool to develop Web Components"
In that sense they can be compared to jQuery.
Once a must for Web Developers,
and then selectors etc. became part of the W3C standard.
With the advent of IE9 in 2011, there was no real need for jQuery anymore.
Today's playing field
Now, Edge is running on Chromium, and Microsoft pushes Edge by default. All modern Browsers are up to par with the Custom Elements API
To take the jQuery comparison one step further back in history. There were dozens of jQuery alternatives 10 years ago. If you happened to invest in the "wrong" tool, you eventually had to convert to jQuery (or just Native JavaScript if IE9 was the oldest browser you had to support and you understood W3C standards (nearly) always win)
The same is going to happen with Lit, Hybrids, HyperHTML, Lego, Stencil and all others.
The odd one out
Angular or Svelte or Vue all play 100% nicely with the Custom Elements API
React scores 71% at https://custom-elements-everywhere.com/
The React heads will say the W3C standard does not support React.
If you have been around long enough (> 20 years) you understand React can be compared to ECMAScript-4 (the ECMAScript standard that never made it)
Great technology, but if the Browser Vendors don't implement it in the Browser, it has no future. That means React is a potential "jQuery" also. Or maybe Flash (ActionScript had ES4 constructs) is a better comparison.
Makes for an interesting future:
Will Facebook solve that 71% score?
Will all Browser vendors (Mozilla, Google/Microsoft, Apple) implement React(Native)?
The Future is now
If you do not have to support IE11 there is a modern, level playing field for the Custom Elements API.
If you are learning, learn the API first, then see if Tools can make your development life easier (and accept the risk it all needs to be refactored when your tool of choice goes where MooTools, YUI and many others went) ...
Then again... banks still run Cobol... maybe React is the new Cobol?
Your questions
What are the best practices for the structural architecture of an enterprise-level application made with web components?
Is separation of core logic such as encryption, datastreaming, and so on something you do when using web components, and if so how?
You build applications with Web Components just as you build applications with Classes or Proxies. Components encapsulate logic, the only difference being that the Custom Elements API also makes for great (really great) semantic HTML.
Alas, I see companies and developers focussing on the "Tools" instead of on the API
To me, a fool with a tool, is still a fool.
I was in the Microsoft SharePoint world, when TypeScript was launched.
Made good money refactoring MVPs "great" TypeScript (alas in ES3 syntax because they forgot to keep up with JavaScript) to ES6
I left that world when Microsoft went all-in on React.
Component developers now learn tools, like they learned jQuery...
Enough rambling
The Custom Elements API is a JavaScript language construct.
It does some things really well and others not so well.
Will the API make an impact? Yes, just like Classes and Array methods did. And those required a mind-set change also.
My advice:
Play with them, like you learned .map and .reduce
don't try to write full blown applications, start small
create TicTacToe in a JSFiddle or CodePen.
Ask for feedback here on Stack Overflow or on Code Review.
make mistakes
make more mistakes
make more mistakes
learn
The Custom Elements API is a W3C standard, supported by all Browsers,
this technology will work for as long as JavaScript runs in the Browser. |
An alternate approach would be to use Amazon S3 Replication, which can replicate bucket contents:
Within the same region, or between regions
Within the same AWS Account, or between different Accounts
Replication is frequently used when organizations need another copy of their data in a different region, or simply for backup purposes. For example, critical company information can be replicated to another AWS Account that is not accessible to normal users. This way, if some data was deleted, there is another copy of it elsewhere.
Replication requires versioning to be activated on both the source and destination buckets. If you require encryption, use standard Amazon S3 encryption options. The data will also be encrypted during transit.
You configure a source bucket and a destination bucket, then specify which objects to replicate by providing a prefix or a tag. Objects will only be replicated once Replication is activated. Existing objects will not be copied. Deletion is intentionally not replicated to avoid malicious actions. See: What Does Amazon S3 Replicate?
There is no "additional" cost for S3 replication, but you will still be charge for any Data Transfer charges when moving objects between regions, and for API Requests (that are tiny charges), plus storage of course. |
Sometimes it's good to start with a fresh set of files. In my GitHub-Repository
https://github.com/java-crypto/Stackoverflow/tree/master/PGP_Encrypt_Decrypt_Armor_Error
you can download the secret_rsa_kleo passphrase.zip with the following contents:
Main3.java: simple test program as shown below
PGPExampleUtil.java: taken from Bouncy Castle PGP examples
secret_rsa_kleo.asc: freshly generated private key (done with Kleopatra 3.1.11)
pub_rsa_kleo.asc: public key
bcprov-jdk15to18-165.jar + bcpg-jdk15on-165.jar: Bouncy Castle jars
I generated my keypair with the passphrase "mypassphrase" (without "" :-). The output is as follows:
Test with Java version: 11.0.6+8-b520.43 BouncyCastle Version: BC version 1.65
plaintext: 54686973206973206d7920706c61696e74657874
decrytext: 54686973206973206d7920706c61696e74657874
plaintext equals decryptedtext: true
ciphetext: 85010c03f9a05b3a12b538270107ff4c960552ca571ff4a24518189e038bd574e64504398b10fc85375e5f6b62ea3f69f686ebd20a1ef7cd0bd59823c025470d85930b89a5ee2c97683d39685c32a607f8c4ecb7a8270c4aff359f0b20a4e76599894f6d987c3d2d710e56a6354001fd4bfa54770609e917915dc51994feb49155a6b2259f3f1c449baca58e43440e6aee527f56cbbd024b463ec76dceab40ffbd940297115b93a535f00ca6c7880b449077d04e35ef1e2c35f579a4df8267be809c7ce5b82f627f1e4b45e9ae0cbd79f88d3c1621b45b6a7c527e86529480949fe9f69b31b79612a91248f2f5fad6750c46d2b4d025da9b70b18d3377938e73e4f941c969f722d2b2b21a44233cf5d24701a6363eb6e28a9b4c2431db135ff4be3423a5138f70aba971173d72df910b6a336c7f15158abcd7d40c2b491d4af7732de9b0783fc8887f9ca068d8274632a42fa876d0986208
ciphertext String: ���[:�8'�L�R�W��E���t�E9���7^kb�?i����
�����,�h=9h\2���취'J�5� ��e��Om�|=-qV�5@�K�Tw ��]�����U��%�?D����CDj�RV˽KF>�mΫ@����[��5��LjD�w�N5�,5�y�߂g���|�/bKE��y��<!�[j|R~�R�������1���H����uFҴ�%ڛp��3w��s��A�i�"Ҳ�D#<��G�6>�⊛L$1���4#��p��q=rߑj3l����+IJ�s-�x?Ȉ��h�'F2�/�vИ
Just one thing to check on your original system: the private key is taken from this line:
String privateKeyPath = System.getProperty("user.dir")+"/keys/Private_Key.asc";
but in the encrypt/decrypt methods the variable is stsPrivateKeyPath.
Main3.java:
import org.bouncycastle.bcpg.ArmoredOutputStream;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.openpgp.*;
import org.bouncycastle.openpgp.jcajce.JcaPGPObjectFactory;
import org.bouncycastle.openpgp.operator.jcajce.JcaKeyFingerprintCalculator;
import org.bouncycastle.openpgp.operator.jcajce.JcePGPDataEncryptorBuilder;
import org.bouncycastle.openpgp.operator.jcajce.JcePublicKeyDataDecryptorFactoryBuilder;
import org.bouncycastle.openpgp.operator.jcajce.JcePublicKeyKeyEncryptionMethodGenerator;
import java.io.*;
import java.security.NoSuchProviderException;
import java.security.SecureRandom;
import java.security.Security;
import java.util.Arrays;
import java.util.Date;
import java.util.Iterator;
public class Main3 {
//private static String publicKeyPath = System.getProperty("user.dir")+"/keys/Public_Key.asc";
private static String stsPublicKeyPath = "pub_rsa_kleo.asc";
//private static String privateKeyPath = System.getProperty("user.dir")+"/keys/Private_Key.asc";
private static String stsPrivateKeyPath = "secret_rsa_kleo.asc";
private static String passwordString = "mypassphrase";
public static void main(String[] args) throws NoSuchProviderException, IOException, PGPException {
System.out.println("https://stackoverflow.com/questions/62305625/i-have-been-working-on-pgp-encrypt-and-decrypt-for-byte-with-bouncy-castle-api");
Security.addProvider(new BouncyCastleProvider());
System.out.println("\nTest with Java version: " + Runtime.version() + " BouncyCastle Version: " + Security.getProvider("BC") + "\n");
byte[] plaintext = "This is my plaintext".getBytes("UTF-8");
byte[] ciphertext = encrypt(plaintext);
byte[] decryptedtext = decrypt(ciphertext);
System.out.println("plaintext: " + bytesToHex(plaintext));
System.out.println("decrytext: " + bytesToHex(decryptedtext));
System.out.println("plaintext equals decryptedtext: " + Arrays.equals(plaintext, decryptedtext));
System.out.println("\nciphetext: " + bytesToHex(ciphertext));
System.out.println("ciphertext String: " + new String(ciphertext, "UTF-8"));
// bcprov-jdk15to18-165.jar
// bcpg-jdk15on-165.jar
// OpenJDK 11.0.5
}
public static byte[] decrypt(byte[] encrypted)
throws IOException, PGPException, NoSuchProviderException {
Security.addProvider(new BouncyCastleProvider());
InputStream keyIn = new BufferedInputStream(new FileInputStream(stsPrivateKeyPath));
char[] password = passwordString.toCharArray();
//char[] password = "".toCharArray();
InputStream in = new ByteArrayInputStream(encrypted);
in = PGPUtil.getDecoderStream(in);
JcaPGPObjectFactory pgpF = new JcaPGPObjectFactory(in);
PGPEncryptedDataList enc;
Object o = pgpF.nextObject();
if (o instanceof PGPEncryptedDataList) {
enc = (PGPEncryptedDataList) o;
} else {
enc = (PGPEncryptedDataList) pgpF.nextObject();
}
Iterator it = enc.getEncryptedDataObjects();
PGPPrivateKey sKey = null;
PGPPublicKeyEncryptedData pbe = null;
PGPSecretKeyRingCollection pgpSec = new PGPSecretKeyRingCollection(
PGPUtil.getDecoderStream(keyIn), new JcaKeyFingerprintCalculator());
while (sKey == null && it.hasNext()) {
pbe = (PGPPublicKeyEncryptedData) it.next();
sKey = PGPExampleUtil.findSecretKey(pgpSec, pbe.getKeyID(), password);
}
if (sKey == null) {
throw new IllegalArgumentException(
"secret key for message not found.");
}
InputStream clear = pbe.getDataStream(new JcePublicKeyDataDecryptorFactoryBuilder().setProvider("BC").build(sKey));
JcaPGPObjectFactory plainFact = new JcaPGPObjectFactory(clear);
PGPCompressedData cData = (PGPCompressedData)plainFact.nextObject();
JcaPGPObjectFactory pgpFact = new JcaPGPObjectFactory(cData.getDataStream());
PGPLiteralData ld = (PGPLiteralData) pgpFact.nextObject();
InputStream unc = ld.getInputStream();
ByteArrayOutputStream out = new ByteArrayOutputStream();
int ch;
while ((ch = unc.read()) >= 0) {
out.write(ch);
}
byte[] returnBytes = out.toByteArray();
out.close();
return returnBytes;
}
public static byte[] encrypt(byte[] clearData)
throws IOException, PGPException {
Security.addProvider(new BouncyCastleProvider());
String fileName=null;
boolean withIntegrityCheck = true;
boolean armor = false; // org
if (fileName == null) {
fileName = PGPLiteralData.CONSOLE;
}
PGPPublicKey encKey = PGPExampleUtil.readPublicKey(stsPublicKeyPath);
ByteArrayOutputStream encOut = new ByteArrayOutputStream();
OutputStream out = encOut;
if (armor) {
out = new ArmoredOutputStream(out);
}
ByteArrayOutputStream bOut = new ByteArrayOutputStream();
PGPCompressedDataGenerator comData = new PGPCompressedDataGenerator(
PGPCompressedData.ZIP);
OutputStream cos = comData.open(bOut); // open it with the final
// destination
PGPLiteralDataGenerator lData = new PGPLiteralDataGenerator();
OutputStream pOut = lData.open(cos, // the compressed output stream
PGPLiteralData.BINARY, fileName, // "filename" to store
clearData.length, // length of clear data
new Date() // current time
);
pOut.write(clearData);
lData.close();
comData.close();
PGPEncryptedDataGenerator cPk = new PGPEncryptedDataGenerator(new JcePGPDataEncryptorBuilder(PGPEncryptedData.CAST5).setWithIntegrityPacket(withIntegrityCheck).setSecureRandom(new SecureRandom()).setProvider("BC"));
cPk.addMethod(new JcePublicKeyKeyEncryptionMethodGenerator(encKey).setProvider("BC"));
byte[] bytes = bOut.toByteArray();
OutputStream cOut = cPk.open(out, bytes.length);
cOut.write(bytes); // obtain the actual bytes from the compressed stream
cOut.close();
out.close();
return encOut.toByteArray();
}
private static String bytesToHex(byte[] bytes) {
StringBuffer result = new StringBuffer();
for (byte b : bytes) result.append(Integer.toString((b & 0xff) + 0x100, 16).substring(1));
return result.toString();
}
}
PGPExampleUtil.java
package PGP_Encrypt_Decrypt_Armor_Error;
import org.bouncycastle.openpgp.*;
import org.bouncycastle.openpgp.operator.jcajce.JcaKeyFingerprintCalculator;
import org.bouncycastle.openpgp.operator.jcajce.JcePBESecretKeyDecryptorBuilder;
import java.io.*;
import java.security.NoSuchProviderException;
import java.util.Iterator;
// https://github.com/bcgit/bc-java/blob/master/pg/src/main/java/org/bouncycastle/openpgp/examples/PGPExampleUtil.java
class PGPExampleUtil
{
static byte[] compressFile(String fileName, int algorithm) throws IOException
{
ByteArrayOutputStream bOut = new ByteArrayOutputStream();
PGPCompressedDataGenerator comData = new PGPCompressedDataGenerator(algorithm);
PGPUtil.writeFileToLiteralData(comData.open(bOut), PGPLiteralData.BINARY,
new File(fileName));
comData.close();
return bOut.toByteArray();
}
/**
* Search a secret key ring collection for a secret key corresponding to keyID if it
* exists.
*
* @param pgpSec a secret key ring collection.
* @param keyID keyID we want.
* @param pass passphrase to decrypt secret key with.
* @return the private key.
* @throws PGPException
* @throws NoSuchProviderException
*/
static PGPPrivateKey findSecretKey(PGPSecretKeyRingCollection pgpSec, long keyID, char[] pass)
throws PGPException, NoSuchProviderException
{
PGPSecretKey pgpSecKey = pgpSec.getSecretKey(keyID);
if (pgpSecKey == null)
{
return null;
}
return pgpSecKey.extractPrivateKey(new JcePBESecretKeyDecryptorBuilder().setProvider("BC").build(pass));
}
static PGPPublicKey readPublicKey(String fileName) throws IOException, PGPException
{
InputStream keyIn = new BufferedInputStream(new FileInputStream(fileName));
PGPPublicKey pubKey = readPublicKey(keyIn);
keyIn.close();
return pubKey;
}
/**
* A simple routine that opens a key ring file and loads the first available key
* suitable for encryption.
*
* @param input data stream containing the public key data
* @return the first public key found.
* @throws IOException
* @throws PGPException
*/
static PGPPublicKey readPublicKey(InputStream input) throws IOException, PGPException
{
PGPPublicKeyRingCollection pgpPub = new PGPPublicKeyRingCollection(
PGPUtil.getDecoderStream(input), new JcaKeyFingerprintCalculator());
//
// we just loop through the collection till we find a key suitable for encryption, in the real
// world you would probably want to be a bit smarter about this.
//
Iterator keyRingIter = pgpPub.getKeyRings();
while (keyRingIter.hasNext())
{
PGPPublicKeyRing keyRing = (PGPPublicKeyRing)keyRingIter.next();
Iterator keyIter = keyRing.getPublicKeys();
while (keyIter.hasNext())
{
PGPPublicKey key = (PGPPublicKey)keyIter.next();
if (key.isEncryptionKey())
{
return key;
}
}
}
throw new IllegalArgumentException("Can't find encryption key in key ring.");
}
static PGPSecretKey readSecretKey(String fileName) throws IOException, PGPException
{
InputStream keyIn = new BufferedInputStream(new FileInputStream(fileName));
PGPSecretKey secKey = readSecretKey(keyIn);
keyIn.close();
return secKey;
}
/**
* A simple routine that opens a key ring file and loads the first available key
* suitable for signature generation.
*
* @param input stream to read the secret key ring collection from.
* @return a secret key.
* @throws IOException on a problem with using the input stream.
* @throws PGPException if there is an issue parsing the input stream.
*/
static PGPSecretKey readSecretKey(InputStream input) throws IOException, PGPException
{
PGPSecretKeyRingCollection pgpSec = new PGPSecretKeyRingCollection(
PGPUtil.getDecoderStream(input), new JcaKeyFingerprintCalculator());
//
// we just loop through the collection till we find a key suitable for encryption, in the real
// world you would probably want to be a bit smarter about this.
//
Iterator keyRingIter = pgpSec.getKeyRings();
while (keyRingIter.hasNext())
{
PGPSecretKeyRing keyRing = (PGPSecretKeyRing)keyRingIter.next();
Iterator keyIter = keyRing.getSecretKeys();
while (keyIter.hasNext())
{
PGPSecretKey key = (PGPSecretKey)keyIter.next();
if (key.isSigningKey())
{
return key;
}
}
}
throw new IllegalArgumentException("Can't find signing key in key ring.");
}
}
secret_rsa_kleo.asc:
-----BEGIN PGP PRIVATE KEY BLOCK-----
lQPGBF7hImoBCAC1YHtHUokERd9OqXiB0/ncVwQaqEBLdd3cxRZ0Kyd7K+OxHH5f
VCdYRWSyVANn4Z+3JjDGZFC5eAUFGWSMwVnWB6VlIW6+7UegGA2cIUAyH6fzBogs
W4hhoIVhUHXTsbUpqj2bhWj85db3GNuSnIhyu6ed0AavsnBbDooHFJYOYgXOvNqN
pq1gjbLIHvBZXZg/OvrpjSM0GoLX6bIDxofzh3ktFlBkUP1fGJ/Cx1xu1ANZ6mQq
6VF8DMcYBO08+HiQvaRkOYYI20AR5X3BOYA+UR63CiXjfvt9r2OX+GTExe3XJp8U
wkvm14JRCyad42ADOkik8fdiy4rGy0yVwe/BABEBAAH+BwMCVamrZmoMPrDHJdAz
Oh1yxKKiMEgBhIdrwdXlvPyk9+nLhnVCa2rBpobtOXbKCxM4lQB8Rf4g4QlmUL1a
m8iyG/GFLVYwtq5q+/9HSYXbU1deLAEOoUZDTZO1FLt8l1Il29KBtTRQmFEM39Ul
pDaTUJ52xvnGjVGHRp9gan2m/LPrcKSUjXcoJPuILlrkpSzZS0101AUF8ELYFTgp
u/BUbXzvjjNpS+VKUZyF00E3O7eeQzTZlYxlQXUl70RWs8TdiJRBJqSXUmexaPwy
ezAv+rZSYIJwO/K/YfGeb68MhNcxcokjaNsDUIFeU+MuNwgaoXmabpbz8pgBaRdM
H1xZSlLh9FcOwufSVJFL4rBs0sN189/9ys+8/oKK9LcVhji4a6TL2FpkrtEZFKSG
QHH7UA9Sw2D+0v2077GmfcRH0vNvvrncJg6lvrD/58nUufAS6bgAywQoyOoIuPS/
vxFtakjpMMs3bR9LcSJtzuR6cBG7qcLSa+SGbeOa9zz5VKUvInJc2LczjcIE4VjN
2RzURUcnnjfCSq2jevxMe5uxbcGPLOrD1vIydcMbzOK6pUh1XGQdoMim95MbpEQW
GylvgxbAt0+SdpzY1jKQjizTC9d9HbA1pcdUCNnelsBO3EluIdgQumO56AB5+ezw
r9xwMRHZV+FsPeqYIAkFothY8db0GFeRo375BbzLEJZ6GJnSz7SLYuO/2qvtJsF1
89ydPTSnaa231ZrPq7rHQ81IqOC5TMc6qqVVdfPY1hbcHQKhBsAi9SUkuKkm5rHL
N6ARpK03ynvp36DDk4z3/0P/DwuhlFyhfdFtRRKd8QJGukNfsnWkqC3Vk5W2wCVx
jkke7EgBaT5thDqEeAmv1SynVnViiK2IFXK+ZacuZL7euwlP7EHAs4hnR6mSjG3y
1dP30ZvxxMbYtBh0ZXN0IHRlc3QgPHRlc3RAdGVzdC5kZT6JAVQEEwEIAD4WIQTr
W7Q8jmUUjy+bIBn5oFs6ErU4JwUCXuEiagIbAwUJA8HztgULCQgHAgYVCgkICwIE
FgIDAQIeAQIXgAAKCRD5oFs6ErU4J3NlB/9nCxiyAEJHv1U/x7pn8dAsCvXRzIF0
/gUudhuNwvHc306dYsJ99bfwf634fXwxMbmE/p62rTuoUTjBPGcx3JuqS6ch+GSQ
m9+VDtCgPxKPEzeelTOvaRD80G6XPNAIhYAENLS5ufHQBSJDGoSj33DqeWKSMNcK
38YiZMoFe3t+8IfcImof8NoaP/AWsGBFBmq46Add32AeYDS87JpNZ3fv5zDmQ2qj
iV5dxoVSPYtaUuXKpE/0/Vqrsm6lu98ISC6bEnQbvTFooYXnQseiraA6Uj67c814
ynOzySIwTwzeKcx/9H6rzf5Nu9+j9TD+c8qv5zjDk3r1emGbTfUlwOh6nQPEBF7h
ImoBCADDd6z/T4TptIaNeEl8I1yhf713PhA0/r4JcIO22VKbCTGLeUzJ0+nTR5Vf
3pPKDFBLwagSB1q1KZRvyxcmjSQI5oBOKhaDR8tVfR/ScON/Tx9U//DG8P5/5N8b
2p1nJdN0HEXLz2oPKGnM2Dvv/nG8pdEf15ZNpQJQaJn5zNvp31BckylVyaBu00hz
URmxt45islcz4t02/yam1tiE1ASpHfieoZeRCvTRMasl1aBR1H9iln2sF8xMPQ5P
N3yqA0B0JJJ83UPr5uJejEPZuNDle1N8XesnePafIdfo6WL+G1oLoloiargKLAAT
heQ7p6SFFnpfNrmfz0Qri2aWxht/ABEBAAH+BwMCqiQb92unBZ/HDLGe1iR1mZqS
ppHmcvNqnYZHjdGbFaf7fECC6+jKa9t1FYGCiW7aFTminJ84hBz1up/vdFDZ2TKI
o9sz2gQF1Zy3EjWutIoeRI2AK8xQuq0e1YdNOJrWJ2lVndLBqOhFd1HkjIbUGKqy
eLrbio/EN0oFnkP45h0C+EPDw8pMUwJV2REVUZMt4UKPgmLezGzbe2DJnpVzR1pw
+tdP1UtguwOXaj/xdMFaFuTY09iYx+RfIHujIW7jAw0bV2d9GcrdMxuAcog2JWR7
xULseb5QJMaRv+jPOEYSoWwSffISI2Hkur+eBzputhkGj3XwBUB3cEuv7ukVvr6A
gesplfyWOzJWgikznMnTyIsMYGYxvjeSA+gCSe7ahLo6jazRCdDYuGN+0fgAyIgy
PXQCJ1QZBeEbfiguVZzJdXzBkzeaJOHdgDh58LgBy4anW1XzHSfHunT7APQdZdDK
pahfDpGY8BBOhE5Ihf1clpPvqYNXXKKsOqYlr0a+kqpwSYMZb6ttWggMTz10duVy
9RfWEizwBdMrsDMCpOu1j6CBs0MMdIOw1CyKGhip71KTUze3onLTug0QptteAFCY
cIcQaxRLXGIoe5v0jPBv8WySCapTFT3qpzjJHJyKKtR6IpsyQ4yP0Ragbg3GrqZy
JqBOanPEVW/zFC3OpgSRcJTVx9iMS2NHTGGBHqoVUn+YuxmTYLAy0KxOXwRXCd3P
NnxZ7ZURNaLtk6FKYaO9ksD2IyCKgpF72bHTWx+7KbMf0fl//esMrW4Jdp+dHQ4G
d3EekXWxN2sFSVrimSSqCCFPYKOYBH3E7+fa4C6h4RoUld0WpcEx0rZIZirA8SLN
3+sppTEpeu1NtUPYnTrpdPCjXgfjx+cZkhlrhdBFyAKJeZ13zr4ZYVwR87HCRwxE
BYkBPAQYAQgAJhYhBOtbtDyOZRSPL5sgGfmgWzoStTgnBQJe4SJqAhsMBQkDwfO2
AAoJEPmgWzoStTgnFZ0H/3y9RhcPK65ssKjclr4gwMyquaDPqwKuXJLEZZNuapj0
G7j4AKJX6bN/RYJ3Nw51iC3vv5j5Pd4l6/d5PwZN5t54SV+T+6WPCbfbvBGn+jwj
mcE584hfwndioXjE+dVVoX4dhwkfZLOz6t825UTpKp2XEoFOeVpjTN8NhdiN5xe6
UHEUyHFBCkP9g81j1nKbgXXB6kss5r59WMIZrasVNOzh8WnAVLrTkuQC+/KZZqwi
8dseaBEeCR5vSzCZWjJULSeziiOGChn+e8CNKpgEn2QuNsWoOoaPe6wQVRoY/oxV
t4wc4dVh2W0HjzMOveo3Dvw66QT1NfXx4cmaOm5HWhc=
=eGJl
-----END PGP PRIVATE KEY BLOCK-----
pub_rsa_kleo.asc:
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQENBF7hImoBCAC1YHtHUokERd9OqXiB0/ncVwQaqEBLdd3cxRZ0Kyd7K+OxHH5f
VCdYRWSyVANn4Z+3JjDGZFC5eAUFGWSMwVnWB6VlIW6+7UegGA2cIUAyH6fzBogs
W4hhoIVhUHXTsbUpqj2bhWj85db3GNuSnIhyu6ed0AavsnBbDooHFJYOYgXOvNqN
pq1gjbLIHvBZXZg/OvrpjSM0GoLX6bIDxofzh3ktFlBkUP1fGJ/Cx1xu1ANZ6mQq
6VF8DMcYBO08+HiQvaRkOYYI20AR5X3BOYA+UR63CiXjfvt9r2OX+GTExe3XJp8U
wkvm14JRCyad42ADOkik8fdiy4rGy0yVwe/BABEBAAG0GHRlc3QgdGVzdCA8dGVz
dEB0ZXN0LmRlPokBVAQTAQgAPhYhBOtbtDyOZRSPL5sgGfmgWzoStTgnBQJe4SJq
AhsDBQkDwfO2BQsJCAcCBhUKCQgLAgQWAgMBAh4BAheAAAoJEPmgWzoStTgnc2UH
/2cLGLIAQke/VT/Humfx0CwK9dHMgXT+BS52G43C8dzfTp1iwn31t/B/rfh9fDEx
uYT+nratO6hROME8ZzHcm6pLpyH4ZJCb35UO0KA/Eo8TN56VM69pEPzQbpc80AiF
gAQ0tLm58dAFIkMahKPfcOp5YpIw1wrfxiJkygV7e37wh9wiah/w2ho/8BawYEUG
arjoB13fYB5gNLzsmk1nd+/nMOZDaqOJXl3GhVI9i1pS5cqkT/T9WquybqW73whI
LpsSdBu9MWihhedCx6KtoDpSPrtzzXjKc7PJIjBPDN4pzH/0fqvN/k2736P1MP5z
yq/nOMOTevV6YZtN9SXA6Hq5AQ0EXuEiagEIAMN3rP9PhOm0ho14SXwjXKF/vXc+
EDT+vglwg7bZUpsJMYt5TMnT6dNHlV/ek8oMUEvBqBIHWrUplG/LFyaNJAjmgE4q
FoNHy1V9H9Jw439PH1T/8Mbw/n/k3xvanWcl03QcRcvPag8oaczYO+/+cbyl0R/X
lk2lAlBomfnM2+nfUFyTKVXJoG7TSHNRGbG3jmKyVzPi3Tb/JqbW2ITUBKkd+J6h
l5EK9NExqyXVoFHUf2KWfawXzEw9Dk83fKoDQHQkknzdQ+vm4l6MQ9m40OV7U3xd
6yd49p8h1+jpYv4bWguiWiJquAosABOF5DunpIUWel82uZ/PRCuLZpbGG38AEQEA
AYkBPAQYAQgAJhYhBOtbtDyOZRSPL5sgGfmgWzoStTgnBQJe4SJqAhsMBQkDwfO2
AAoJEPmgWzoStTgnFZ0H/3y9RhcPK65ssKjclr4gwMyquaDPqwKuXJLEZZNuapj0
G7j4AKJX6bN/RYJ3Nw51iC3vv5j5Pd4l6/d5PwZN5t54SV+T+6WPCbfbvBGn+jwj
mcE584hfwndioXjE+dVVoX4dhwkfZLOz6t825UTpKp2XEoFOeVpjTN8NhdiN5xe6
UHEUyHFBCkP9g81j1nKbgXXB6kss5r59WMIZrasVNOzh8WnAVLrTkuQC+/KZZqwi
8dseaBEeCR5vSzCZWjJULSeziiOGChn+e8CNKpgEn2QuNsWoOoaPe6wQVRoY/oxV
t4wc4dVh2W0HjzMOveo3Dvw66QT1NfXx4cmaOm5HWhc=
=bgyD
-----END PGP PUBLIC KEY BLOCK-----
|
You're on the right track with the Strategy class that extends PassportStrategy(). In order to catch the error from passport, you can extend AuthGuard('facebook') and add some custom logic to handleRequest(). You can read more about it here, or take a look at this snippet from the docs:
import {
ExecutionContext,
Injectable,
UnauthorizedException,
} from '@nestjs/common';
import { AuthGuard } from '@nestjs/passport';
@Injectable()
export class JwtAuthGuard extends AuthGuard('jwt') {
canActivate(context: ExecutionContext) {
// Add your custom authentication logic here
// for example, call super.logIn(request) to establish a session.
return super.canActivate(context);
}
handleRequest(err, user, info) {
// You can throw an exception based on either "info" or "err" arguments
if (err || !user) {
throw err || new UnauthorizedException();
}
return user;
}
}
Yes, this is using JWT instead of Facebook, but the underlying logic and handler are the same so it should still work for you. |
Github Authentication Plugin
Why: Use GitHub user credentials to administer your Jenkins instance, using GitHub OAuth.
Plug-in details: https://plugins.jenkins.io/github-oauth
Configuration (Github):
Step1: Github.com → Settings → Applications → Authorized OAuth Apps → Create a new Application.
Application Name: Jenkins
HomePageURL: Your Jenkins landing page URL, for me it is https://jenkis..ninja
Application Description: Whatever you like
Authorization callback: JenkinsInstanceURL/securityRealm/finishLogin - please make sure your spelling is correct
Add your application
Step 2:
Configuration (Jenkins)
Enable security checkbox
Access Control checkbox
Github Authentication plugin
Github Web URI: https://github.com or your own Github server instance
Client Id: which will you get from Github
Client Secret: Secret key that you will get from GitHub while Adding Jenkins
as application
OAuth Scope(s): read:org,user:email,repo
Then Authorization:
* Matrix-based Security:
Check the boxes as shown in the screenshot
For more details please read https://plugins.jenkins.io/github-oauth/ |
Strapi natively supports Microsoft SSO.
You must act on three fronts: Azure Portal, Strapi Admin, Frontend App
1 - AZURE Portal: (create application, configure, get params)
1.1 Create application, go to the App registrations site and register an app
1.2 Click New Registration
1.3 Fill in the form as shown in the screenshot below
1.3.1 In "Supported account types" set the Multitenant option (in Strapi, single tenant is not supported by default; if you need single tenant you must create a custom provider, but multitenant is OK)
1.3.2 In the Redirect URI field, put "Web" and
/connect/microsoft/callback
(i.e. http://localhost:1337/connect/microsoft/callback or your strapi
production url https://mystrapiexample.com/connect/microsoft/callback)
1.3.3 Register and go to next page
1.4 Go to the "Authentication" page of your registered App (left menu) to enable the implicit grant flow (Access tokens)
1.5 Go to the "Certificate and secrets" page of your registered App (left menu) to create a "New client secret" and annotate the value, You will use it when you configure the provider on strapi.
1.6 Also note the "Application (client) ID" in the Overview page, You will use it when you configure the provider on strapi
2 - STRAPI ADMIN: (create application, configure, get params)
2.1 Go to "Roles and Permission" > Providers > Microsoft
2.2 Set Enable "ON" and your clientId and secret that you get in previous steps (1.5 and 1.6)
2.3 The redirect URI to your front-end app which gets and redirects the microsoft access_code (this step will be clearer later)
3 - FRONTEND APP:
Ready? At this point the flow begins: a series of redirects completes the authentication and obtains a Strapi JWT for making requests as an authenticated user.
3.1 Create a link in your frontend application to strapi microsoft sign-in
/connect/microsoft
(i.e. http://localhost:1337/connect/microsoft or your strapi
production url https://mystrapiexample.com/connect/microsoft)
3.2 Strapi redirects the user to microsoft authentication page, on success the user will be redirected on strapi with a microsoft access_code (this step is transparent for you)
3.3 Strapi forwards the access_code to the frontend URL set in 2.3, which must redirect (with the access_code) to the Strapi auth page
/auth/microsoft/callback
(i.e http://localhost:1337/auth/microsoft/callback or your strapi
production url https://mystrapiexample.com/auth/microsoft/callback)
3.4 At this point strapi creates its own JWT token which returns to the frontend application, which can store it (in localstorage, session storage...) to make requests to the strapi endpoints.
References
https://github.com/strapi/strapi-examples/blob/master/login-react/doc/microsoft_setup.md
https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app
https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-implicit-grant-flow
|
From your question and comments I read that you have actually multiple questions, let me address each one separately.
I fear that the appstore will reject the application since the alert messages must be modified.
The alert text cannot be modified - not because cordova-plugin-facebook4 does not allow it, but because Apple does not give you a way to modify it. So it can't be a reason for rejection.
how to eliminate that alert
The dialog is part of SSO via ASWebAuthenticationSession and SFAuthenticationSession (deprecated). You can downgrade to an old FBSDK version (around 4.2.x) that does not use SSO, but that is a pretty old SDK: you will not have the newest features, and you will be stuck with bugs that will never get fixed. You can implement your own authentication controller which uses SFSafariViewController instead of SSO, but that means developing a replacement for the cordova-plugin-facebook4 login process.
this alert appears every time I try to log in
That is how it's supposed to work. It is a system dialog generated by iOS which informs the user about privacy critical decisions, which is an integral part of how iOS manages user privacy. If the dialog pops up every time, check the current login status before unnecessarily logging in. There should be a getLoginStatus method or something similar.
A privacy dialog per se is not a reason for App Store rejection, but it can be a reason for rejection if the dialog pops up so often that it becomes difficult to actually use the app. This would point to a broken authentication process, because once signed on, the user would be expected to stay signed on for a certain amount of time or to complete the intended task.
there is no way to change the text. that's my problem and the reason i
ask this question.
Correct, there is no way to change the text. It seems you are looking for a way to implement your preferred solution even though it may not solve your problem - so what is the actual problem? You want to change the text or hide the dialog because you "fear" that the app will be rejected, but you don't know whether it will be rejected. If this is your first App Store submission, you should know that an app rejection is not the end of the world. You will be told what specifically to fix, and you can resubmit the revised app. My suggestion would be to simply submit the app and wait for approval.
The problem here is that your stomp client is not performing the required WebSocket handshake upon starting of the connection.
This initial handshake is used to upgrade the HTTP connection to a WebSocket connection; in Java you will be talking over secure WebSockets (wss), while you are trying to connect directly to the host. Note that the STOMP client implementation you're using expects a CONNECTED frame upon sending CONNECT, but since it does not perform the WebSocket handshake it never receives one, hence the EOFException.
From the official code example:
private StompSession connect() throws Exception {
// Create a client.
final WebSocketClient client = new StandardWebSocketClient();
final WebSocketStompClient stompClient = new WebSocketStompClient(client);
stompClient.setMessageConverter(new StringMessageConverter());
final WebSocketHttpHeaders headers = new WebSocketHttpHeaders();
// Create headers with authentication parameters.
final StompHeaders head = new StompHeaders();
head.add(StompHeaders.LOGIN, ACTIVE_MQ_USERNAME);
head.add(StompHeaders.PASSCODE, ACTIVE_MQ_PASSWORD);
final StompSessionHandler sessionHandler = new MySessionHandler();
// Create a connection.
return stompClient.connect(<wss_endpoint>, headers, head, sessionHandler).get();
}
As you can see here, the connect call passes some headers and a session handler, and it connects with a WSS URI. You can replace the headers with your credentials; the WSS endpoint can be obtained from the Amazon MQ broker page - see the link I shared above for how the WSS URI looks.
https://blog.restcase.com/4-most-used-rest-api-authentication-methods/
Basic Authentication is probably the easiest. Just google "PHP Basic Authentication REST API implementation".
There's a very basic old example on php.net in the comments https://www.php.net/manual/en/features.http-auth.php
<?php
$valid_passwords = array ("mario" => "carbonell");
$valid_users = array_keys($valid_passwords);
$user = $_SERVER['PHP_AUTH_USER'];
$pass = $_SERVER['PHP_AUTH_PW'];
$validated = (in_array($user, $valid_users)) && ($pass == $valid_passwords[$user]);
if (!$validated) {
header('WWW-Authenticate: Basic realm="My Realm"');
header('HTTP/1.0 401 Unauthorized');
die ("Not authorized");
}
// If arrives here, is a valid user.
echo "<p>Welcome $user.</p>";
echo "<p>Congratulation, you are into the system.</p>";
?>
But the above is not using a database to store the user login/hashed password; it's just storing it in an array. Still, it is a very quick way to prototype your authentication before making something a bit more complicated.
Here's another basic example along the same lines
https://gist.github.com/rchrd2/c94eb4701da57ce9a0ad4d2b00794131
So with this setup, if you're sending a GET request to your REST API endpoints to get the JSON, you would also need to include the username/password in the HTTP request headers; otherwise you would get the 401 Unauthorized response instead of the JSON.
See the answer here for how you would code the GET request from your PHP to call the REST endpoint
How do I make a request using HTTP basic authentication with PHP curl? or here also has a good example PHP: how to make a GET request with HTTP-Basic authentication |
You can use this sample, which will help you create events with the same client credentials flow you are using, but you need to change some things here.
You need to first give the Calendar.ReadWrite permission in the Azure portal for your app.
You need to add the below code in the Program.cs
if (result != null)
{
var httpClient = new HttpClient();
var apiCaller = new ProtectedApiCallHelper(httpClient);
await apiCaller.CallWebAPIToPostEvent($"{config.ApiUrl}v1.0/users/{user obj id}/calendars/{calendar id}/events", result.AccessToken, Display);
}
Then you need to add the below classes in the protectedApiCallHelper.cs
public class Event
{
[JsonProperty("subject")]
public string Subject { get; set; }
[JsonProperty("body")]
public Body Body;
[JsonProperty("start")]
public TimeAndDate Start;
[JsonProperty("end")]
public TimeAndDate End;
[JsonProperty("location")]
public Location Location;
[JsonProperty("attendees")]
public List<Attendees> Attendees;
}
public class Body
{
[JsonProperty("contentType")]
public string ContentType { get; set; }
[JsonProperty("content")]
public string Content { get; set; }
}
public class TimeAndDate
{
[JsonProperty("dateTime")]
public string DateTime { get; set; }
[JsonProperty("timeZone")]
public string TimeZone { get; set; }
}
public class Location
{
[JsonProperty("displayName")]
public string DisplayName { get; set; }
}
public class Attendees
{
[JsonProperty("emailAddress")]
public EmailAddress EmailAddress;
[JsonProperty("type")]
public string Type;
}
public class EmailAddress
{
[JsonProperty("address")]
public string Address { get; set; }
[JsonProperty("name")]
public string Name { get; set; }
}
In this same ProtectedApiCallHelper class you can create a POST request and get the details by adding the code below:
public async Task CallWebAPIToPostEvent(string webApiUrl, string accessToken, Action<JObject> processResult)
{
var defaultRequestHeaders = HttpClient.DefaultRequestHeaders;
if (defaultRequestHeaders.Accept == null || !defaultRequestHeaders.Accept.Any(m => m.MediaType == "application/json"))
{
HttpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
}
defaultRequestHeaders.Authorization = new AuthenticationHeaderValue("bearer", accessToken);
var payload = new Event
{
Subject = "Let's go for lunch",
Body = new Body
{
ContentType = "HTML",
Content = "Does mid month work for you?"
},
Start = new TimeAndDate
{
DateTime = "2019-03-15T12:00:00",
TimeZone = "Pacific Standard Time"
},
End = new TimeAndDate
{
DateTime = "2019-03-15T14:00:00",
TimeZone = "Pacific Standard Time"
},
Location = new Location
{
DisplayName = "Harry's Bar"
},
Attendees = new List<Attendees>
{
new Attendees
{
EmailAddress = new EmailAddress
{
Address = "[email protected]",
Name = "Shiva"
},
Type = "required"
}
}
};
// Serialize our concrete class into a JSON String
var stringPayload = await Task.Run(() => JsonConvert.SerializeObject(payload));
// Wrap our JSON inside a StringContent which then can be used by the HttpClient class
var httpContent = new StringContent(stringPayload, Encoding.UTF8, "application/json");
HttpResponseMessage response = await HttpClient.PostAsync(webApiUrl, httpContent);
if (response.Content != null)
{
var responseContent = await response.Content.ReadAsStringAsync();
Console.WriteLine(response.Content.ReadAsStringAsync().Result);
}
}
This will help you create the event with these details. |
You need to use the ASK CLI. I assume you've already installed and configured it on your machine in order to deploy the skill; if so, skip the first section:
Install ask cli tool according to ASK CLI Quick Start
Once it's installed and configured you need to generate auth token using your client-id and secret from your Security Profile, configured to Use SMAPI:
ask util generate-lwa-tokens --client-id <your-client-id> --client-confirmation <your-client-secret>
It should open a website; click Allow and go back to the console, where there should be something like:
The LWA tokens result:
{
"access_token": "Atza|IwEBIJDuJivzzkceXtesWGS5tYIKRZlK0NKp9OWP8TXh4HlFSQxTiMD4V-1QeoSHa8C6(...)",
"refresh_token": "Atzr|IwEBIOyzzw_7(...)",
"token_type": "bearer",
"expires_in": 3600,
"expires_at": "2020-06-03T13:21:04.922Z"
}
Copy the access_token and use it in the Authorization header (Authorization: Bearer access_token) in requests from the doc site you've linked.
Hint: you can obtain your vendorId here:
curl --location --request GET 'https://api.amazonalexa.com/v1/vendors' \
--header 'Authorization: Bearer access_token'
Sample CURL request:
curl --location --request POST 'https://api.amazonalexa.com/v1/skills/api/custom/interactionModel/slotTypes/' \
--header 'Authorization: Bearer access_token' \
--header 'Content-Type: application/json' \
--data-raw '{
"vendorId": "MBT******E",
"slotType": {
"name": "SharedSlot",
"description": "Your shared slot'\''s description"
}
}'
and response:
{
"slotType": {
"id": "amzn1.ask.interactionModel.slotType.e4fc2751-e4be-48c5-9be0-cd193a2ffafb"
}
}
|
IdentityServer4 has 2 DbContexts that are part of the framework, which you will have to use if you're going to store these in the database: the ConfigurationDbContext for client and flow configuration, and the PersistedGrantDbContext for storing tokens and such. These 2 DbContexts are the only core storage part of IdentityServer4. They can also be kept in memory, but I wouldn't advise that. The 2 DbContexts can be stored alongside the existing database tables, or in another database if you want to.
User-management and such are not part of the IdentityServer framework, and you can use the implementation of your liking, like ASP.NET Core identity or something custom. In the article you mentioned, the magic happens within the IProfileService service,where users are retrieved and the IResourceOwnerPasswordValidator where credentials are validated. Use these custom implementations to retrieve and validate the users from your existing database.
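For illustration, a minimal sketch of plugging your own user store into those two extension points (CustomProfileService, CustomResourceOwnerPasswordValidator and IUserRepository are illustrative names for your own code; the interfaces come from IdentityServer4):
// Startup: register the custom implementations on the IdentityServer builder.
services.AddIdentityServer()
    .AddProfileService<CustomProfileService>()
    .AddResourceOwnerValidator<CustomResourceOwnerPasswordValidator>();

public class CustomResourceOwnerPasswordValidator : IResourceOwnerPasswordValidator
{
    private readonly IUserRepository _users;  // your own repository

    public CustomResourceOwnerPasswordValidator(IUserRepository users) => _users = users;

    public async Task ValidateAsync(ResourceOwnerPasswordValidationContext context)
    {
        // Validate the credentials against your existing database.
        var user = await _users.FindByUsernameAsync(context.UserName);
        if (user != null && await _users.ValidateCredentialsAsync(user, context.Password))
        {
            // The subject claim ties the issued tokens to your existing user record.
            context.Result = new GrantValidationResult(user.Id, "password");
        }
    }
}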
Also, if you look at the quickstart example project, you'll see the UserStore is injected into each controller. Feel free to replace this one with your own user-repository if you need to. So to answer your question, IdentityServer4 doesn't rely on any user/role related storage framework, but you can attach however you want to.
For example: in my projects, user management and authentication are 2 different microservices. Within the cluster, IdentityServer calls the user-service internally to get the user that is requested, but it isn't even part of the Auth microservice. The auth service just focuses on the OpenID Connect implementation but knows basically nothing about users at all.
Here is an example of code obfuscation using a few simple operations together with the exec function. The obfuscated portion of code is the fib function. See if you can determine what it is.
from base64 import b64decode
code1 = b'CbaKdTwxFl317Zq13s3agdfQZWHwXZ7JV7vVvwiwhhJNEIgbaV1HoDR54/KoeVW1UArBeEVEhWfB1iDMDrzSz6NI6oIpxDs3A4/Ai9QQxUo0g2ri26Bd9zpunDy1ZUsUOPJgz1nWMB7KMMQ7AilCSZu0Vqugu1wouuPgisoznaoNmKvaYgrPXSVPE4BNfH9af5bmva/hjS5FPJwwfkCpX4CCzg2m6H9VzbX4nnSjy8hqNYoPmiRniV5i6yLqSFJ0+shXEQ=='
code2 = b'A9LvExxXfz/dg7OP1O36ofe/CQWVL7LpONexnzWQtz5tIII7SX1nyVJZ09KURHXbcDb8WHR+j0fh9gDsLpzyvcY8n/BH5BNYb+ul+fgwqiZQqjGMhqpX1xpOvFraF2tLGJsO7yu3XnmvGPQXIkdiZLuGf5Gqm3wImsPAqupQ6Nh//cWuQjfvMkkrM6ttExM+GuTsnY/BrQ5lHLxfEiTMLayioWHCyEJ1otmcslTAvroYUOR7kARHqX4QjlafOjxUlaQzGw=='
exec(bytes(x ^ y for x, y in zip(b64decode(code1), b64decode(code2))).decode('utf-8'))
print(fib(200))
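For reference, here's a hedged sketch of how such a pair of blobs could be produced in the first place - plain XOR with a random one-time pad (the fib source below is an assumed stand-in, not the decoded original):
import os
from base64 import b64encode
source = "def fib(n):\n    a, b = 0, 1\n    for _ in range(n):\n        a, b = b, a + b\n    return a\n"
plain = source.encode('utf-8')
pad = os.urandom(len(plain))                       # random pad, same length as the source
cipher = bytes(x ^ y for x, y in zip(plain, pad))  # XOR the source with the pad
code1 = b64encode(cipher)  # ship both blobs; XOR-ing them together
code2 = b64encode(pad)     # recovers the source, as in the exec line above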
Now you can take this simple example and make it more and more complicated, adding encryption layers and so forth. But the bottom line is that it's not that hard to reverse engineer this and recover the source code, as long as the attacker is sufficiently motivated.
On the other hand, this makes the code harder for you maintain and debug. So it is possible to make the attacker do a little work to recover your source code, but you may decide it's not worth it.
EDIT: Some comments have suggested that this is a bad example because it's a concocted weak strawman which somehow puts "real" commercial obfuscators in a bad light. That was not the intent; however, in my experience wearing the black hat, commercial obfuscators don't really add that much. As always, it depends on your threat model. If you're worried about someone who is actually getting paid to reverse engineer your code, commercial obfuscators are of minimal value, for the following reasons:
I can buy the obfuscator myself and figure out the transformations it makes. This gives me a substantial leg up over some script kiddie.
It's still mostly one-time work. As software goes through revisions it typically changes very little. The reverse-engineered source that I obtained from my initial hard work will allow me to quickly recover the changes from the next revision. In other words, I get to amortize my initial efforts over many succeeding versions. I can also go backwards to previous versions just as easily.
Because I can do an exec within the code I am exec-ing, it's straightforward to extend this simple example to arbitrary levels of nesting. You can achieve substantially the benefit you get from expensive commercial obfuscators for free.
Commercial obfuscators still have all the downsides I mentioned above. Your code will have bugs, you will need to receive feedback from your users about those bugs, but now you have to de-obfuscate the error information you receive. And yes, the commercial products will have tools to assist you here, but even so it's just a pain in the ass. If you do pay for an obfuscation product, make sure you buy it outright with a lifetime license for the de-obfuscation tools.
If your goal is to keep honest people honest, you don't need commercial obfuscators. If your goal is to keep a determined hacker with a modest budget from reverse engineering your code, commercial products won't be able to stop them. There may be a middle ground between these two where commercial is the way to go, but I haven't found that to be the case. |
Based on what I know and what I've read from the documentation, it looks like it's not possible to do this. A Firebase user, a.k.a. a user authenticated within the Firebase platform, is required if you want to send email verification that uses the Firebase Email Verification service.
Well, do not lose hope, since there are plenty of workarounds. What I would do to achieve this is use Firebase Cloud Functions to create a serverless API platform, connecting Firebase Cloud Functions with the Firebase Admin SDK (which also has access to other Firebase services, if I am not mistaken).
I would send an email using an email service provider such as SendGrid to the designated email address (which the app got from the user's input) and provide a verification link there (in the e-mail sent to that address). Then, in the Cloud Function, you leverage the Firebase Admin SDK to change the verification status.
This approach is flexible though, as it can be used to verify a user not only with Firebase Authentication.
Hope it helps. If it's not clear for you, just comment.
Happy coding.
EDIT: After thoroughly reading your question again, I realized that my answer is not fully correct. You still need a specific user to be added to the Firebase Authentication database, which you would not want to do manually - instead, let your app do so via the Firebase Admin SDK. You can read the official Firebase documentation for more information regarding the Admin SDK.
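A minimal sketch of that idea with Cloud Functions and the Admin SDK - the endpoint name and the uid query parameter are assumptions for illustration, not Firebase conventions:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();
// Hypothetical endpoint hit by the link in the custom verification email;
// the email sent via e.g. SendGrid is assumed to embed the user's uid.
exports.verifyEmail = functions.https.onRequest(async (req, res) => {
  const uid = req.query.uid;
  await admin.auth().updateUser(uid, { emailVerified: true });
  res.send('Email verified.');
});
|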
For unauthenticated requests, GitHub allows up to 60 requests an hour. You can increase this up to 5,000 per hour by authenticating the API requests.
So when I was facing this problem a couple of weeks ago, I created a personal access token at GitHub, passed this token in the headers, and the problem was solved.
To generate a personal access token, log in to github.com, go to Settings -> Developer settings -> Personal access tokens and generate one.
Pass this token in the headers as Authorization: token <your-token>. So in your AJAX request, it could look something like this:
$.ajax({
url: *yourUrl*
...
beforeSend: function (xhr) {
xhr.setRequestHeader('Authorization', 'token ' + token);
},
});
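To confirm the higher limit took effect, you can query GitHub's rate_limit endpoint (requests to it don't count against your quota):
curl -H "Authorization: token YOUR_TOKEN" https://api.github.com/rate_limit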
One thing to note here: DON'T push code containing this token to GitHub if the repository is public. That gets detected immediately, the token is revoked, and you're required to create a new one.
For API requests using Basic Authentication or OAuth, you can make up
to 5000 requests per hour. Authenticated requests are associated with
the authenticated user, regardless of whether Basic Authentication or
an OAuth token was used. This means that all OAuth applications
authorized by a user share the same quota of 5000 requests per hour
when they authenticate with different tokens owned by the same user.
For unauthenticated requests, the rate limit allows for up to 60
requests per hour. Unauthenticated requests are associated with the
originating IP address, and not the user making requests.
https://developer.github.com/v3/#rate-limiting
Another solution that effectively worked in my case was solving the CORS issue with a proxy server. In this, you're just required to append the API request URL to a proxy service provider such as, https://cors-anywhere.herokuapp.com/
var url = "http://example.com/repo"; //your api request url,
var proxyUrl = `https://cors-anywhere.herokuapp.com/${url}`;
fetch(proxyUrl)... //Make a request with this proxy url to navigate CORS issue
|
With security rules, the query must exactly match the rules. The behavior you're observing is exactly what I would expect.
With a rule like this:
allow read: if resource.data.readAccess == 0;
That means the query must be filtered exactly like this:
where("readAccess", isEqualTo: 0)
Nothing else will satisfy this rule. It absolutely demands that the query filter for exactly the value 0 on the readAccess field. It's not clear to me why you're expecting a different outcome.
Your query suggests that the client provides its own "access" to the collection. Note that this is not secure: you can't depend on client apps self-reporting their own level of access in a database query. Something else on the backend needs to determine if the app is allowed to make the query.
Typically, Firebase Authentication is used to determine who the user is, and access is then allowed based on what that user is allowed to do. You could store the user's permissions in another document and use its contents to determine what they can do, or perhaps use custom claims. But you can't trust the user to pass their own permissions.
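For illustration, a hedged sketch of rules keyed to the signed-in user rather than a client-supplied filter; the accessLevel custom claim is an assumed name, and an inequality like this still requires queries to carry a matching where() filter:
service cloud.firestore {
  match /databases/{database}/documents {
    match /docs/{docId} {
      // Gate reads on the user's own claim, not on what the client self-reports.
      allow read: if request.auth != null
                  && request.auth.token.accessLevel >= resource.data.readAccess;
    }
  }
}
|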
The behavior you're seeing is caused by a limitation in the alpha bits of the OpenIddict validation handler, that checks whether TokenValidationParameters.IssuerSigningKeys is null, but not TokenValidationParameters.IssuerSigningKey.
To work around it, you can use:
config.IssuerSigningKeys = new[] { new X509SecurityKey(MyOtherCustomCertificate) };
Alternatively, you can use discovery to allow it to download the signing keys from the authorization server:
services.AddOpenIddict()
.AddValidation(options =>
{
options.SetIssuer(new Uri("https://localhost:44365/"));
options.AddEncryptionCertificate(AuthenticationExtensionMethods.TokenEncryptionCertificate());
options.UseAspNetCore();
options.UseSystemNetHttp();
});
It's worth noting that the options.SetTokenValidationParameters() method will be removed very soon (as part of the introspection support addition). The new syntax for registering a static OIDC configuration will be something like that:
services.AddOpenIddict()
.AddValidation(options =>
{
options.SetConfiguration(new OpenIdConnectConfiguration
{
Issuer = "https://localhost:44365/",
SigningKeys = { new X509SecurityKey(AuthenticationExtensionMethods.TokenSigningCertificate()) }
});
options.AddEncryptionCertificate(AuthenticationExtensionMethods.TokenEncryptionCertificate());
});
|
This could be the issue if you aren't using Authentication State Persistence, which you can read about here
A code snippet from the official documentation:
firebase.auth().setPersistence(firebase.auth.Auth.Persistence.SESSION)
.then(function() {
// Existing and future Auth states are now persisted in the current
// session only. Closing the window would clear any existing state even
// if a user forgets to sign out.
// ...
// New sign-in will be persisted with session persistence.
return firebase.auth().signInWithEmailAndPassword(email, password);
})
.catch(function(error) {
// Handle Errors here.
var errorCode = error.code;
var errorMessage = error.message;
});
And here are the types of state available: LOCAL (state persists even when the browser window is closed), SESSION (state persists only in the current tab/session), and NONE (state is kept in memory only and cleared on page reload).
Using the 'LOCAL' state should be fine for your needs: the user stays logged in and can access your database until he logs himself out.
PS: The default for web browser and React Native apps is local (provided the browser supports this storage mechanism, eg. 3rd party cookies/data are enabled) whereas it is none for Node.js backend apps. |
Which disadvantages does the last solution have?
One thing that comes to my mind is that you want API A to be able to edit data in e.g. MS Graph API, so you give it the app permission to Read/Write Directory data.
Now with the shared app registration this permission has also been given to API B and API C.
So the principle of least privilege may be violated in the second and third options.
But it does make it easier to manage those APIs as you noticed.
The third option does open up the door for the user to acquire access tokens to any APIs that you might want to call on behalf of the current user from your APIs.
So if you wanted to API A to edit a user through MS Graph API on behalf of the user, you'd have to require the read/write users delegated permission (scope) for your app.
This would allow the user to acquire this token from your front-end as well, even though that is not intended.
Now they would not be able to do anything they wouldn't otherwise be able to do since the token's permissions are limited based on the user's permissions, so this might not be a significant disadvantage.
Which of the three approaches should I chose?
As with many things, it depends :)
If you want absolute least privilege for your services, option 1.
If you want easier management, I'd go with option 3 instead of 2.
There was that one thing I mentioned above about option 3 but that does not allow privilege escalation. |
Just create the granted authorities based in the user roles and authenticate the user with it. Then the authenticated user principal will contain the roles.
Simple example:
UserEntity userEntity = userRepository.findUserByEmail(user); // this depends of course on your implementation
if (userEntity == null) return null;
List<RoleEntity> roles = userEntity.getRoles();
Collection<GrantedAuthority> authorities = new HashSet<>();
roles.forEach((role) -> {
authorities.add(new SimpleGrantedAuthority(role.getName()));
});
return new UsernamePasswordAuthenticationToken(user, null, authorities);
Even better, you can create a UserPrincipal that implements UserDetails from spring security.
public class UserPrincipal implements UserDetails {
private static final long serialVersionUID = 1L;
private final UserEntity userEntity;
public UserPrincipal(UserEntity userEntity){
this.userEntity = userEntity;
}
@Override
public Collection<? extends GrantedAuthority> getAuthorities() {
Collection<GrantedAuthority> authorities = new HashSet<>();
// Get user Roles
Collection<RoleEntity> roles = userEntity.getRoles();
if(roles == null) return authorities;
roles.forEach((role) -> {
authorities.add(new SimpleGrantedAuthority(role.getName()));
});
return authorities;
}
@Override
public String getPassword() {
return this.userEntity.getEncryptedPassword();
}
@Override
public String getUsername() {
return this.userEntity.getEmail();
}
@Override
public boolean isAccountNonExpired() {
return true; // must be true, or Spring Security treats the account as expired
}
@Override
public boolean isAccountNonLocked() {
return true;
}
@Override
public boolean isCredentialsNonExpired() {
return true;
}
@Override
public boolean isEnabled() {
return true; // returning false here would block every login
}
}
And to use it:
UserEntity userEntity = userRepository.findUserByEmail(user);
if (userEntity == null) return null;
UserPrincipal userPrincipal = new UserPrincipal(userEntity);
return new UsernamePasswordAuthenticationToken(userPrincipal, null, userPrincipal.getAuthorities());
|
It seems you use the client credentials grant flow in your code, but this Graph API call only supports delegated permissions. So we need to use the authorization code grant flow or the username/password grant flow. (By the way, we recommend the authorization code flow rather than the username/password flow.)
For authorization code flow, please refer to this tutorial with the code sample in it.
IConfidentialClientApplication confidentialClientApplication = ConfidentialClientApplicationBuilder
.Create(clientId)
.WithRedirectUri(redirectUri)
.WithClientSecret(clientSecret) // or .WithCertificate(certificate)
.Build();
AuthorizationCodeProvider authProvider = new AuthorizationCodeProvider(confidentialClientApplication, scopes);
For username/password flow, you can refer to this tutorial with the code sample in it.
IPublicClientApplication publicClientApplication = PublicClientApplicationBuilder
.Create(clientId)
.WithTenantId(tenantID)
.Build();
UsernamePasswordProvider authProvider = new UsernamePasswordProvider(publicClientApplication, scopes);
GraphServiceClient graphClient = new GraphServiceClient(authProvider);
User me = await graphClient.Me.Request()
.WithUsernamePassword(email, password)
.GetAsync();
Update:
You can make some changes by following the steps below; then you don't need to provide the client_secret in your code, and it will no longer show the error message The request body must contain the following parameter: 'client_assertion' or 'client_secret'.
Go to your app registered in azure ad and click "Authentication" --> "Add a platform" --> "Mobile and desktop applications".
Choose the first one as the "Redirect URIs".
Scroll down to the public client flows setting and check that it is set to "yes" (if "no", please change it to "yes"), then click "Save".
|
Having the request as a POST will prevent any request coming from other domains based on CORS policy, unless you configure your server to allow it - which turns this into a different issue. GET requests, on the other hand, are allowed by browsers to retrieve resources, like javascript that might contain sensitive data from your domain - and here it happens to be an array, not an object.
Updated answer:
You will not actually find a source that tells you how GET and POST requests differ for JSON hijacking attacks; the difference is in how web servers and browsers deal with those requests. The JSON hijacking vulnerability is about malicious websites using an endpoint in your website/app that serves JSON data in response to a GET request (the request type browsers use to download resources, e.g. js, images, text files). If you change it to POST, they can no longer trigger it with a <script> tag, whose src attribute always issues a GET - and even from script code, a cross-origin POST response is blocked from being read by CORS policy.
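For illustration, a hedged sketch of the classic attack shape in legacy (pre-ES5) browsers - the URL is hypothetical:
<!-- Hosted on the attacker's page; the victim's cookies ride along with this GET. -->
<script>
  // Old engines invoked the overridden Array constructor while evaluating
  // the bare JSON array response, leaking each element to the attacker.
  Array = function () { console.log('stolen:', arguments); };
</script>
<script src="https://victim.example/api/accounts"></script>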
In the modern browser era we no longer have this type of vulnerability (at least in the form mentioned in the discovery article by Jeremiah Grossman) because of CORS policy.
This is also referenced in other related questions |
Your question is a bit old, and I assume you've already found a solution. Anyhow, since there may be others looking to implement custom roles with Windows Authentication, the easiest way I found is this:
In a service or a component you can inject AuthenticationStateProvider, then:
var authState = await authenticationStateProvider.GetAuthenticationStateAsync();
var user = authState.User;
var userClaims = new ClaimsIdentity(new List<Claim>()
{
new Claim(ClaimTypes.Role,"Admin")
});
user.AddIdentity(userClaims);
In this way you can set new roles.
Of course you can implement a custom logic to add the roles dynamically for each user.
This is how I end-up adding Roles based on AD groups:
public async Task GetUserAD() // async Task rather than async void, so callers can await it
{
var auth = await authenticationStateProvider.GetAuthenticationStateAsync();
var user = (System.Security.Principal.WindowsPrincipal)auth.User;
using PrincipalContext pc = new PrincipalContext(ContextType.Domain);
UserPrincipal up = UserPrincipal.FindByIdentity(pc, user.Identity.Name);
FirstName = up.GivenName;
LastName = up.Surname;
UserEmail = up.EmailAddress;
LastLogon = up.LastLogon;
FixPhone = up.VoiceTelephoneNumber;
UserDisplayName = up.DisplayName;
JobTitle = up.Description;
DirectoryEntry directoryEntry = up.GetUnderlyingObject() as DirectoryEntry;
Department = directoryEntry.Properties["department"]?.Value as string;
MobilePhone = directoryEntry.Properties["mobile"]?.Value as string;
MemberOf = directoryEntry.Properties["memberof"]?.OfType<string>()?.ToList();
if(MemberOf.Any(x=>x.Contains("management-team") && x.Contains("OU=Distribution-Groups")))
{
var userClaims = new ClaimsIdentity(new List<Claim>()
{
new Claim(ClaimTypes.Role,"Big-Boss")
});
user.AddIdentity(userClaims);
}
}
Edit
Below you can find a sample of how I load user info and assign roles
using Microsoft.AspNetCore.Components.Authorization;
using Microsoft.EntityFrameworkCore;
using System.DirectoryServices;
using System.DirectoryServices.AccountManagement;
using System.Linq;
using System.Security.Claims;
using System.Threading.Tasks;
public class UserService : IUserService
{
private readonly AuthenticationStateProvider authenticationStateProvider;
private readonly ApplicationDbContext context;
public ApplicationUser CurrentUser { get; private set; }
public UserService(AuthenticationStateProvider authenticationStateProvider, ApplicationDbContext context)
{
this.authenticationStateProvider = authenticationStateProvider;
this.context = context;
}
public async Task LoadCurrentUserInfoAsync()
{
var authState = await authenticationStateProvider.GetAuthenticationStateAsync();
using PrincipalContext principalContext = new PrincipalContext(ContextType.Domain);
UserPrincipal userPrincipal = UserPrincipal.FindByIdentity(principalContext, authState.User.Identity.Name);
DirectoryEntry directoryEntry = userPrincipal.GetUnderlyingObject() as DirectoryEntry;
CurrentUser = new ApplicationUser(); // instantiate first, otherwise the assignments below throw a NullReferenceException
CurrentUser.UserName = userPrincipal.SamAccountName;
CurrentUser.FirstName = userPrincipal.GivenName;
CurrentUser.LastName = userPrincipal.Surname;
CurrentUser.Email = userPrincipal.EmailAddress;
CurrentUser.FixPhone = userPrincipal.VoiceTelephoneNumber;
CurrentUser.DisplayName = userPrincipal.DisplayName;
CurrentUser.JobTitle = userPrincipal.Description;
CurrentUser.Department = directoryEntry.Properties["department"]?.Value as string;
CurrentUser.MobilePhone = directoryEntry.Properties["mobile"]?.Value as string;
//get user roles from Database
var roles = context.UserRole
.Include(a => a.User)
.Include(a => a.Role)
.Where(a => a.User.UserName == CurrentUser.UserName)
.Select(a => a.Role.Name.ToLower())
.ToList();
var claimsIdentity = authState.User.Identity as ClaimsIdentity;
//add custom roles from DataBase
foreach (var role in roles)
{
var claim = new Claim(claimsIdentity.RoleClaimType, role);
claimsIdentity.AddClaim(claim);
}
//add other types of claims
var claimFullName = new Claim("fullname", CurrentUser.DisplayName);
var claimEmail = new Claim("email", CurrentUser.Email);
claimsIdentity.AddClaim(claimFullName);
claimsIdentity.AddClaim(claimEmail);
}
}
|
A good way to test the expected behaviour would be using /var/ossec/bin/ossec-logtest as mentioned in that doc.
To elaborate i will take the example of that doc :
I will overwrite the rule 5716 : https://github.com/wazuh/wazuh-ruleset/blob/317052199f751e5ea936730710b71b27fdfe2914/rules/0095-sshd_rules.xml#L121, as below :
[root@localhost vagrant]# egrep -iE "ssh" /var/ossec/etc/rules/local_rules.xml -B 4 -A 3
<rule id="5716" overwrite="yes" level="9">
<if_sid>5700</if_sid>
<match>^Failed|^error: PAM: Authentication</match>
<description>sshd: authentication failed.</description>
<group>authentication_failed,pci_dss_10.2.4,pci_dss_10.2.5,gpg13_7.1,gdpr_IV_35.7.d,gdpr_IV_32.2,hipaa_164.312.b,nist_800_53_AU.14,nist_800_53_AC.7,</group>
</rule>
The logs can be tested without having to restart the Wazuh manager: open /var/ossec/bin/ossec-logtest and paste the log:
2020/05/26 09:03:00 ossec-testrule: INFO: Started (pid: 9849).
ossec-testrule: Type one log per line.
Oct 23 17:27:17 agent sshd[8221]: Failed password for root from ::1 port 60164 ssh2
**Phase 1: Completed pre-decoding.
full event: 'Oct 23 17:27:17 agent sshd[8221]: Failed password for root from ::1 port 60164 ssh2'
timestamp: 'Oct 23 17:27:17'
hostname: 'agent'
program_name: 'sshd'
log: 'Failed password for root from ::1 port 60164 ssh2'
**Phase 2: Completed decoding.
decoder: 'sshd'
dstuser: 'root'
srcip: '::1'
srcport: '60164'
**Phase 3: Completed filtering (rules).
Rule id: '5716'
Level: '9'
Description: 'sshd: authentication failed.'
As expected, the level has been overwritten (it was initially 5). Although in your case, you will have to paste the log 8 times in a timeframe lower than 20 s to be able to trigger that rule.
If you can share the logs triggering that alert, I can test with them.
On the other hand, you can create a sibling rule to simply ignore your rule 31533, something similar to below :
<rule id="100010" level="2">
<if_sid>31533</if_sid>
<description>Ignore rule 31533</description>
</rule>
Make sure to restart the Wazuh manager afterward to apply the change.
You can find more information about customizing rules/decoders here : https://wazuh.com/blog/creating-decoders-and-rules-from-scratch/
Hope this helps, |
Wherever you are doing the login authentication, save a key-value pair in SharedPreferences. For example, in LoginActivity, if the credentials match,
use :
SharedPreferences sp = getSharedPreferences("PreferenceFile", MODE_PRIVATE);
sp.edit().putBoolean("isLoggedIn", true).apply();
sp.edit().putString("Password", "1234").apply(); // store the PIN as a String
And then redirect user to MainActivity/Dashboard.
Now in any activity where you want to have security pin checking, just get the text from user, and check:
SharedPreferences sp = getSharedPreferences("PreferenceFile", MODE_PRIVATE);
boolean isLoggedIn = sp.getBoolean("isLoggedIn", false);
String checker = sp.getString("Password", ""); // read the PIN back as a String, matching what was stored
if( (editText.getText().toString().trim()).equals(checker) ) {
//Do Secure Work
} else {
//Add warning Toast
}
For additionally checking whether the activity was started by an intent or came back from the background, you can have an isBack variable.
In Any Activity, use
onCreate() {
isBack = false;
}
onStop() {
isBack = true;
}
Then in onStart, you can use
onStart() {
if(isBack) {
// Do the work which you want to do if the activity comes from background
} else {
// Do the work which you want to do if activity comes due to any intent
}
// After doing all the necessary work, remember to update isBack
// because now the activity is in focus
isBack = false;
}
|
Authentication shouldn't be done on the client side.
Explanation:
Imagine you are a security guard for a big company that only allows its employees to enter.
Each employee carries a card holding a unique ID and a password.
Someone just shows up at the door and asks you to let them in.
You ask that person for his card with an ID and a password. When he gives you a card with his credentials on it, you don't let him in right away, because you know that he may be lying; even though the card he gave you has the company's logo on it, you know that the card can be faked.
The only thing you rely on is the ID and password on this card.
Now you have to go to the database of the company and check if there is a combination of an ID and password that matches the ID and password on the card he gave you.
Now, this person trying to enter represents a user, you represent the authentication system of your website, and the company represents your website.
You can't authenticate a user client side. The way it is done is: a user sends his credentials, and on the server these credentials are checked against the database to see if there is a match. If yes, the server lets that user into the website (in other words, the server sends a response).
Real life example: [edited]
This is a page called Home_login.html sent to the client, containing the following code:
/*
1-
When the user clicks the Login button, this page will make a POST request to the page specified by "action", which is "API/Login.php".
But because this is a POST request, the POST data sent with it (the form inputs: email and password) is encoded in the body, and the Content-Type header of the request is set to "application/x-www-form-urlencoded".
So this is what is inside the request sent to "API/Login.php" when the user clicks the Login button:
POST /Login HTTP/1.1
Host: foo.example
Content-Type: application/x-www-form-urlencoded
Content-Length: 27
[email protected]&password=somepassword
2-
Now when this request reaches "API/Login.php", that page contains code that first checks the content-type of the request, which is application/x-www-form-urlencoded. That tells it that the data sent with the request is in the form key1=value1&key2=value2&... (in this case: [email protected]&password=somepassword)
But because this is an API, the person who coded that page most probably only accepts input data in JSON format, which is different from the key1=value1&key2=value2&... format.
JSON format :
{
"email":"[email protected]",
"password":"somepassword"
}
3-
So we are sending the data to "API/Login.php" in a format it doesn't like. We need to send that data in JSON format.
To do that we write the javascript below.
NOW KEEP IN MIND THAT THIS IS ALL ON THE CLIENT SIDE. BUT THIS IS NOT AUTHENTICATION. THIS IS SIMPLY SENDING JSON DATA TO THE SERVER ("API/Login.php") INSTEAD OF LETTING THE PAGE SEND IT IN THE DEFAULT FORMAT (key1=value1&key2=value2...)*/
/*When the form is submitted, the function below will run, and it will:
1- Stop the default behaviour of sending the data in the POST format (key1=value1&key2=value2...)
2- Get the values the user typed in the input fields
3- Create a JSON object containing these values
4- Send a request to "API/Login.php" with the data being the JSON object.*/
const form = document.querySelector("form"); // This selects the <form> element in the document
form.addEventListener("submit", function (e) {
e.preventDefault(); // prevent the default form behaviour (sending urlencoded POST data)
var xhttp = new XMLHttpRequest();
var data = new FormData(form); // get the data of the form
var email = data.get("email"); // get the value of the <input name="email">
var password = data.get("password"); // get the value of the <input name="password">
var user = { // create the JSON object which will be sent as the data in the body
email,
password
}
xhttp.onreadystatechange = function () { // this function will run when the server ("API/Login.php") responds
if (xhttp.readyState == 4 ){
var res = JSON.parse(xhttp.responseText);
if (xhttp.status != 200){ // a non-200 code means the credentials were rejected
}else { // response code 200: the user is authenticated
}
}
}
xhttp.open("POST", "API/Login.php"); // create a POST request to the server
xhttp.setRequestHeader("content-type", "application/json"); // set the content type to JSON (like the server wants)
xhttp.send(JSON.stringify(user)); // put the JSON object in the body of the request and send it
});
<!DOCTYPE HTML>
<html>
<body>
<form method="POST" action = "API/Login.php" >
Email: <input type="text" name="email"> <br><br>
Password <input type="text" name="password"> <br><br>
<button type="submit">Login</button>
</form>
</body>
</html>
Now you sent the data in the right format (JSON) to the server and all the client can do is wait for the response.
HERE WHERE THE AUTHENTICATION HAPPENS
THE CLIENT CANNOT DO ANYTHING HE IS JUST WAITING FOR SOME REPLY FROM THE SERVER.
The server has the credentials, checks whether they match any record in the database, and depending on that decides how to respond to the client.
(REMEMBER THAT THE CLIENT IS WAITING FOR THAT RESPONSE AND WILL CHECK THE RESPONSE CODE. THIS IS THE FUNCTION IN THE JAVASCRIPT ABOVE:
xhttp.onreadystatechange = function () { // this function will run when the server ("API/Login.php") responds
if (xhttp.readyState == 4 ){
var res = JSON.parse(xhttp.responseText);
if (xhttp.status != 200){ // a non-200 code means the credentials were rejected
}else {
}
}
}
)
If there is a match, the user is authenticated, so this is what happens:
1- The server sends a response back to the waiting client. That response has a response code = 200.
2- Also in that response, the server tells the client (the browser the client is using) to create a cookie with a value = a random string. REMEMBER THAT COOKIE FOR LATER - IT IS IMPORTANT
If there is no match:
1- The server also sends a response back to the waiting client, but this time the response code is not 200 (404, for example). In that response it can send a JSON object which looks like this:
{
"message":"Wrong credentials-no user found"
}
Now this is how the server tells the client whether he is authenticated, using the response code.
And now the code running on the client side (the function in the javascript above) can decide what to do based on that response code. If it is 200 (authenticated), maybe it redirects the user to a dashboard page. If it's not 200, it can display the message received from the server ("Wrong credentials-no user found" in this case).
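Putting the server side together, a hedged sketch of what API/Login.php could look like - the table and column names are assumptions:
<?php
// Read the JSON body sent by the javascript above.
$input = json_decode(file_get_contents('php://input'), true);
// Look the user up server-side - THIS is the authentication step.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'dbuser', 'dbpass');
$stmt = $pdo->prepare('SELECT id, password_hash FROM users WHERE email = ?');
$stmt->execute([$input['email']]);
$user = $stmt->fetch(PDO::FETCH_ASSOC);
if ($user && password_verify($input['password'], $user['password_hash'])) {
    session_start();                    // sends the Set-Cookie header described below
    $_SESSION['user_id'] = $user['id'];
    http_response_code(200);
    echo json_encode(['message' => 'ok']);
} else {
    http_response_code(404);
    echo json_encode(['message' => 'Wrong credentials-no user found']);
}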
Now one final thing
Remember the cookie from above sent with the server's response.
This cookie will remain on the client's computer until he closes the browser.
And it is sent with every new request made to the server, allowing the server to know that this user is already authenticated, so it can serve him the right content.
Check this video https://www.youtube.com/watch?v=j8Yxff6L_po |
This answer is a little in jest, but it will give you the desired effect. But, like I said in my comment. This feels like an XY Problem.
Firstly, create a new schema in your database. I am intentionally giving it a silly name:
USE YourDatabase;
GO
CREATE SCHEMA Always_define_your_schema;
GO
Then change all the USERs in the database to have their default schema be this new ("stupid") schema:
ALTER USER YourUser WITH DEFAULT_SCHEMA = Always_define_your_schema;
If you want to do this to every USER with a script, you could do something like this:
DECLARE @SQL nvarchar(MAX);
SET @SQL = STUFF((SELECT NCHAR(13) + NCHAR(10) +
N'ALTER USER ' + QUOTENAME(U.[name]) + N' WITH DEFAULT_SCHEMA = Always_define_your_schema;'
FROM sys.database_principals U
WHERE U.type IN ('U','S')
AND U.authentication_type > 0
AND U.[name] != 'dbo' --You will need to likely include more here
FOR XML PATH(''),TYPE).value('.','nvarchar(MAX)'),1,2,N'');
PRINT @SQL; --Your debugging friend
--EXEC sys.sp_executesql @SQL; --Uncomment to run
But, like I said, this is more of a "jest" answer; if you really implement this then expect to break things... |
Firstly, don't roll your own crypto. Cryptography is very hard, and if you make any mistake, it will have vulnerabilities you could have avoided by using a well-established library to do the heavy lifting. You could, for example, use libsodium. It has many abstractions, and probably has a solution for what you need.
With that out of the way, let's discuss how that would make it safer: the user needs to be able to read the contents, but not edit them. What exactly do you mean by "cannot edit"? Should he be unable to modify anything locally, or just unable to upload it to your server as if he were authorized to do so?
If the former, encryption can't help you much - you need to be able to decrypt it locally, so an attacker can always dump your process' memory to get to the data - sure it would be hard, but definitely possible. Just not allowing people to edit/save/download in your application would be the strongest guarantee you can get.
If the latter, then using authentication would be the way to go - be that a simple method like HTTP basic authentication with user and password, or signing the file to be uploaded. Dealing with authentication on your application's side would be the more practical way.
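As a concrete example of the latter, a hedged sketch using PyNaCl (Python bindings for libsodium) to sign uploads so the server can reject tampered files:
from nacl.signing import SigningKey
signing_key = SigningKey.generate()   # held only by the authorized client
verify_key = signing_key.verify_key   # stored server-side
data = open('document.bin', 'rb').read()
signed = signing_key.sign(data)       # upload this instead of the raw bytes
# Server side: verify() raises nacl.exceptions.BadSignatureError on any modification.
original = verify_key.verify(signed)
|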
The root cause is most likely missing support for the latest EC-based cipher suites like TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384.
Elliptic Curve Cryptography (ECC) is implemented by the SunEC provider. To be fully functional, the following is required:
SunEC has to be listed in jre/lib/security/java.security, and the value of jdk.tls.disabledAlgorithms must not contain ECDHE
The native library libsunec.so / sunec.dll has to be available in the jre/lib (Linux, Mac) or jre/bin (Windows) folder
Ensure that the JDK JCE framework uses the unlimited policy. Since Java 8u161 this is the new default; for older versions you may have to install the JCE Unlimited Strength Jurisdiction Policy Files and/or make changes in the java.security file. See here for further instructions.
If the native library is not available, the provider still works but offers only a subset of ECC-based ciphers (see the comments in the source of SunEC.java). It seems that some Linux distributions explicitly removed the native library or disabled the provider by default (e.g. RedHat, Amazon Linux). So if the library is not part of your JRE, update to the latest package version or directly download and install the latest OpenJDK 8 version - simply copying the native lib from the download could also be an option - see here for an example.
Another option would be to use a third-party cryptography provider like Bouncy Castle, which has its own provider for ECC. For instructions see this question and its accepted answer.
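To quickly confirm whether the ECDHE suites are actually enabled in a given JRE, a short check against the standard JSSE API:
import javax.net.ssl.SSLSocketFactory;
public class ListEcdheCiphers {
    public static void main(String[] args) {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        // getSupportedCipherSuites() lists everything the providers can do;
        // compare with getDefaultCipherSuites() for what is enabled by default.
        for (String suite : factory.getSupportedCipherSuites()) {
            if (suite.contains("ECDHE")) {
                System.out.println(suite);
            }
        }
    }
}
|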
If you want to use Azure Key Vault to encrypt and decrypt text, you can use the Azure.Security.KeyVault.Keys SDK to implement it.
For example
Install SDK
Install-Package Azure.Security.KeyVault.Keys -Version 4.0.3
Install-Package Azure.Identity -Version 1.1.1
Code
ClientSecretCredential clientSecretCredential = new ClientSecretCredential(tenantId, // your tenant id
clientId, // your AD application appId
clientSecret // your AD application app secret
);
//get key
var KeyVaultName = "<your kay vault name>";
KeyClient keyClient = new KeyClient(new Uri($"https://{KeyVaultName}.vault.azure.net/"), clientSecretCredential);
var keyName = "<your key name>";
var key = await keyClient.GetKeyAsync(keyName);
// create CryptographyClient
CryptographyClient cryptoClient = new CryptographyClient(key.Value.Id, clientSecretCredential);
var str = "test";
Console.WriteLine("The String used to be encrypted is : " +str );
Console.WriteLine("-------------encrypt---------------");
var byteData = Encoding.Unicode.GetBytes(str);
var encryptResult = await cryptoClient.EncryptAsync(EncryptionAlgorithm.RsaOaep, byteData);
var encodedText = Convert.ToBase64String(encryptResult.Ciphertext);
Console.WriteLine(encodedText);
Console.WriteLine("-------------dencrypt---------------");
var encryptedBytes = Convert.FromBase64String(encodedText);
var dencryptResult = await cryptoClient.DecryptAsync(EncryptionAlgorithm.RsaOaep, encryptedBytes);
var decryptedText = Encoding.Unicode.GetString(dencryptResult.Plaintext);
Console.WriteLine(decryptedText);
|
The reason for this is quite simple: you're trying to access a web resource that's protected by authentication (this should be obvious) or by detection of non-standard behavior. Your curl request fails because it's missing a Cookie header or some other header needed to identify you as a human - usually it's the cookie identifying you and your authenticated session as trusted by the server. At some point you've most likely logged in with your browser, and that's why the request works in your browser but not in the curl/PHP logic - or you're missing headers such as User-Agent that mask the use of curl.
Here's an example of a cookie string identifying me as myself. Without it, I wouldn't be able to make these requests in my browser. Therefore, as long as the server sends Set-Cookie: ..., the browser honors it, saves it, keeps track of it, and sends it with every request.
Either you borrow a cookie from your browser session and implement it temporarily into your curl requests, or you implement the login logic before sending curl requests. But you should do the right thing and start using the Instagram API as pointed out by Magnus Eriksson in the comments.
The latter is recommended, and there are some libraries, although they are old. But perhaps they will give you an idea of how to go about it.
Instagram-PHP-API library as an example.
use MetzWeb\Instagram\Instagram;
$instagram = new Instagram(array(
'apiKey' => 'YOUR_APP_KEY',
'apiSecret' => 'YOUR_APP_SECRET',
'apiCallback' => 'YOUR_APP_CALLBACK'
));
echo "<a href='{$instagram->getLoginUrl()}'>Login with Instagram</a>";
And if you're left wondering "what the hell is an API", here's (Tom Scott - This Video Has X Views) a video with a good explanation of why it's not a good idea to pretend to be a human - use APIs instead. |
First, the server needs access to the site folders:
Console commands to give access:
chown -R www-data.www-data /var/www/new_site
chmod -R 755 /var/www/new_site
chmod -R 777 /var/www/new_site/storage
(777 is the quick fix; 775 with www-data ownership is the safer choice.)
Then cd to /var/www/new_site/
To check things working type command
php artisan
if it shows a list of commands things are ok till now.
Now open the site configuration for editing with nano or vim:
nano /etc/apache2/sites-enabled/000-default.conf
This configuration should be for your old site. If that's the case, change it to your new site.
It should be something like this
<VirtualHost *:80>
ServerAdmin webmaster@localhost
DocumentRoot /var/www/new_site/public
<Directory />
Options FollowSymLinks
AllowOverride None
</Directory>
<Directory /var/www/new_site>
AllowOverride All
</Directory>
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
After saving the configuration you need to restart the server:
sudo service apache2 restart
Now cd to /var/www/new_site/ and open .env file.
sudo nano .env
Check if everything is ok there. It should look like this:
APP_NAME=XXXXXXXXXXXXXXXXXXX
APP_ENV=local
APP_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
APP_DEBUG=true
APP_URL=http://XXXXXXXXXXXXX
LOG_CHANNEL=stack
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=XXXXXXXXXXXXXX
DB_USERNAME=XXXXXXXXXXXXX
DB_PASSWORD=XXXXXXXXXXXXX
BROADCAST_DRIVER=log
CACHE_DRIVER=file
QUEUE_CONNECTION=sync
SESSION_DRIVER=file
SESSION_LIFETIME=120
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
MAIL_DRIVER=smtp
MAIL_HOST=smtp.mailtrap.io
MAIL_PORT=2525
MAIL_USERNAME=null
MAIL_PASSWORD=null
MAIL_ENCRYPTION=null
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_DEFAULT_REGION=us-east-1
AWS_BUCKET=
PUSHER_APP_ID=
PUSHER_APP_KEY=
PUSHER_APP_SECRET=
PUSHER_APP_CLUSTER=mt1
MIX_PUSHER_APP_KEY="${PUSHER_APP_KEY}"
MIX_PUSHER_APP_CLUSTER="${PUSHER_APP_CLUSTER}"
|
Assuming a local instance of SQL server?
The default is usually YOURCOMPUTERNAME\SQLEXPRESS.
But, you can use a "." (dot) in place of your computer name.
So, the server name should be .\SQLEXPRESS
A few things:
When you set up and attempt to connect? Always create a FILE dsn. The reasons for this are many, but one really nice reason is that then Access will by default create a DSN-LESS connection. This approach is preferred since then Access remembers the settings, and no external settings (such as the registry or even a file (dsn)) are used or required. And if you connect to a SQL server on your network? Well then you can distribute the Access application to each workstation, and it will "just work" - all without you having to set up a DSN or do anything with the ODBC manager on each station. So, a great tip and habit. So, use a FILE dsn (and if you follow the default prompts when linking tables, a FILE dsn is the default anyway).
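For reference, a hedged example of the DSN-less connect string Access ends up storing on each linked table once you follow this route (server and database names are placeholders):
ODBC;DRIVER={SQL Server};SERVER=.\SQLEXPRESS;DATABASE=YourDatabase;Trusted_Connection=Yes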
Next up:
Make sure the SQL Server Browser service is running. In the past this was often not required, but now it is recommended. You'll find that service in SQL Server Configuration Manager (or in the Windows services list).
Once you're sure that service is running?
Make sure that you enabled Named Pipes - in fact I would also enable TCP/IP. That is found under SQL Server Network Configuration in SQL Server Configuration Manager.
Ok, now the next question:
Are you using SQL server logons, or Windows authentication?
For now, since this is your local stand-alone machine? Let's go with Windows.
As noted, if you have SSMS installed, then see if it can connect. This is not only a quick and easy test, but it will also tend to give you hints as to what the server name is. (This helps when you attempt to connect with Access - you can see what worked with SSMS - and better still, SSMS usually figures out the correct computer name for you.)
So, from Access, you now choose External Data from the ribbon, and then in the Import & Link group, choose ODBC.
The wizard to connect will start. Choose "link to the data source".
At this point, the panels that start to launch are the same one you see if you try to use the ODBC manager from the control panel - but in most cases this road is better, since Access will correctly launch the x32 or x64 bit ODBC manager (it makes this correct decision for you).
So you'll be at a DSN name prompt, but just hit New. Now you have to choose an ODBC (for SQL Server) driver.
For now, I would try "SQL Server". You can also choose SQL Server Native Client 11 (or later if you see one); either one is fine. Just keep in mind for the future that the "SQL Server" driver exists on all computers - so for future distribution to other workstations, it is a good choice. The Native Client 11 (or later) driver is NOT installed by default, and you would have to install it on other workstations if you want to use that driver when you move or distribute your application.
Next, and now you can enter a name for this connection (myTestcon or whatever). Hit finish.
You should now be at the server-selection screen. The dropdown for the server name SHOULD appear and work (it may take 30 seconds). It should show you a server name and a SQL instance.
Next, and now you have to choose the type of logon
Because this is a local stand alone computer? Well, you can choose windows logons, or sql logons. Being a local computer - choose the default - windows auth.
Next.
NOW VERY VERY important - make sure you change/select the correct database here - SO MANY skip or miss this - and that's painful!!
So make sure you select/change the default from "master" to the database you created when you sent the data to SQL Server.
Next - (you can try the test data source). "ok".
Now you are back to the VERY same starting panel. Your "name" should be defaulted for the file connection.
So, now just click ok.
You can then select the tables you want to link to. |
Your issue is using getAllItems :
Gets all the Items recursively in the ItemGroup tree and filter them by the given type.
But a folder (com.cloudbees.hudson.plugins.folder.Folder) is a hudson.model.AbstractItem, not a hudson.model.AbstractProject. Use allItems:
Gets a read-only view of all the Items recursively matching type and
predicate in the ItemGroup tree visible to Jenkins.getAuthentication()
without concern for the order in which items are returned.
(The "order" being usually in reverse - ie: LIFO).
Jenkins.instance.allItems(hudson.model.AbstractProject.class).each {it ->
scm = it.getScm()
if(scm instanceof hudson.plugins.git.GitSCM) {
if(scm.getUserRemoteConfigs()[0].getUrl()) {
println it.fullName + ' - '+ scm.getUserRemoteConfigs()[0].getUrl()
}
}
}
return
or
Jenkins.instance.allItems.findAll {
it instanceof hudson.model.AbstractProject
}.each { println it.fullName }
which will process them in alphabetical order (ie: FIFO).
fullName also gives you the folder path to the job; handy when traversing folders. |