id | text | original_text | subdomain | metadata
---|---|---|---|---
4bf24d81053ccfc93028b73283781a4863044368 | Stackoverflow Stackexchange
Q: How to create minor ticks for polar plot matplotlib I am interested in the following two things for the polar plot plotted by matplotlib shown below
* How do I create minor ticks for a polar plot on the r axis?
* How do I move the r labels further away from the r ticks? As seen in the graph, some of the r ticks are in contact with the axis.
A: The polar plot does not have minor or major ticks. So I think you need to create the minor ticks manually by plotting small line segments.
For example:
import numpy as np
import matplotlib.pyplot as plt
r = np.arange(0, 2, 0.01)
theta = 2 * np.pi * r
ax = plt.subplot(111, projection='polar')
ax.plot(theta, r)
ax.set_rmax(2)
ax.margins(y=0)
ax.set_rticks([0.5, 1, 1.5, 2]) # less radial ticks
ax.set_rlabel_position(120) # get radial labels away from plotted line
ax.grid(True)
tick = [ax.get_rmax(), ax.get_rmax() * 0.97]
for t in np.deg2rad(np.arange(0, 360, 5)):
    ax.plot([t, t], tick, lw=0.72, color="k")
ax.set_title("A line plot on a polar axis", va='bottom')
plt.show()
A: For your first question, you can either increase the number of ticks (which doesn't seem to be what you want if you wish for minor ticks), or you can generate the ticks yourself. To do this you will need to use the polar axes' own plot facilities to plot these ticks, i.e.:
ax.plot([theta_start, theta_end], [radius_start, radius_end], **kwargs)
You'll need to figure out the interval you want these ticks, and then tick them manually with a function like the one below.
def minor_tick_gen(polar_axes, tick_depth, tick_degree_interval, **kwargs):
    for theta in np.deg2rad(range(0, 360, tick_degree_interval)):
        polar_axes.plot([theta, theta], [polar_axes.get_rmax(), polar_axes.get_rmax() - tick_depth], **kwargs)
which you can then call like this:
minor_tick_gen(ax, 0.25, 20, color = "black")
It's kind of difficult to find, but polar axes are not normal axes; they are PolarAxes class instances. Per the documentation you can use set_ylim(min, max), which will allow you to move the labels off of the line; however, this will rescale the entire graph. Going outside of the graph bounds would require developer knowledge of the framework, because matplotlib does not expose this functionality to you. Using set_rgrids(...), for example, even with a position component, will not affect the relative label positioning.
Putting these things together, you can use the following code:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import math
def minor_tick_gen(polar_axes, tick_depth, tick_degree_interval, **kwargs):
    for theta in np.deg2rad(range(0, 360, tick_degree_interval)):
        polar_axes.plot([theta, theta], [polar_axes.get_rmax(), polar_axes.get_rmax() - tick_depth], **kwargs)
def radian_function(x, y=None):
    rad_x = x / math.pi
    return "{}π".format(str(rad_x if rad_x % 1 else int(rad_x)))
ax = plt.subplot(111, projection='polar')
ax.set_rmax(2)
ax.set_rticks([3*math.pi, 6*math.pi, 9*math.pi, 12*math.pi])
ax.set_rlabel_position(112.5)
# go slightly beyond max value for ticks to solve second problem
ax.set_ylim(0, 13*math.pi)
ax.grid(True)
# generate ticks for first problem
minor_tick_gen(ax, math.pi, 20, color = "black", lw = 0.5)
ax.set_title("Polar axis label minor tick example", va='bottom')
ax.yaxis.set_major_formatter(ticker.FuncFormatter(radian_function))
ax.xaxis.set_major_formatter(ticker.FuncFormatter(radian_function))
plt.show()
to get the following image
| stackoverflow | {
"language": "en",
"length": 470,
"provenance": "stackexchange_0000F.jsonl.gz:901599",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44657003"
} |
ada1e4ad78188980de815263e065a4ad200d31cc | Stackoverflow Stackexchange
Q: Delete cookie that begins with a specific string in Rails I have a rails app that includes the ability for users to get multiple quotes. I am storing each quote in a cookie like this:
if !results.nil?
cookies["quote_#{SecureRandom.uuid}"]
end
As the user creates multiple quotes I will be pulling the cookies out to display on the screen. I want the user to be able to delete any or all of their quotes by clicking a button.
How can I use some sort of wildcard to delete all the cookies that start with quote_? So something like...
def clear_cookies
cookies.delete "quote_*"
redirect_to compare_path
end
A: You can iterate over your cookies and call the delete method only on the desired cookies:
cookies.each { |key, _| cookies.delete(key) if key.start_with?("quote_") }
| stackoverflow | {
"language": "en",
"length": 125,
"provenance": "stackexchange_0000F.jsonl.gz:901644",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44657144"
} |
e190c81180d7fd1c5384318fb36e31b06fbca3ec | Stackoverflow Stackexchange
Q: Use Auth0 decorator with Flask-RESTful resource I need to use Auth0 for my Flask-RESTful app. Auth0 has an example using the requires_auth decorator on a view function.
@app.route('/secured/ping')
@cross_origin(headers=['Content-Type', 'Authorization'])
@requires_auth
def securedPing():
return "All good. You only get this message if you're authenticated"
With Flask-RESTful I use add_resource with a Resource class, not app.route with a view function. How do I apply requires_auth to Version?
app = Flask(__name__)
API = Api(app)
CORS = CORS(app, resources={r'/api/*': {'origins': '*'}})
API.add_resource(Version, '/api/v1')
A: The Flask-Restful docs describe how to specify decorators for a resource.
There is a property on the Resource class called method_decorators. You can subclass Resource and add your own decorators, which will be applied to all method functions in the resource.
class AuthResource(Resource):
method_decorators = [requires_auth]
# inherit AuthResource instead of Resource to define Version
| stackoverflow | {
"language": "en",
"length": 137,
"provenance": "stackexchange_0000F.jsonl.gz:901650",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44657159"
} |
bf9735ad1f894ca5082cf8e06c75a5d7ce15d8d8 | Stackoverflow Stackexchange
Q: SBT dependsOn usage - migration from 0.12 to 0.13 I have a command like this in build.sbt
run <<= (run in Compile) dependsOn npmBuildTask
According to the documentation, <<= is deprecated, so I want to use := instead.
I tried:
run in Compile := ((run in Compile).dependsOn(npmBuildTask).value)
run in Compile := (run in Compile).dependsOn(npmBuildTask).value
run in Compile := run.dependsOn(npmBuildTask).value
But none of them worked for me. Could you please help me?
A: Finally I found the solution.
compile := ((compile in Compile) dependsOn npmBuildTask).value
This is working for me. The problem was in the following code:
run := ((run in Compile) dependsOn npmBuildTask).value
compile and run are different: compile has return type sbt.TaskKey[sbt.inc.Analysis], while run has return type sbt.InputKey[scala.Unit]. Because of this you should use this command:
run := ((run in Compile) dependsOn npmBuildTask).evaluated
Now everything is working fine.
| stackoverflow | {
"language": "en",
"length": 147,
"provenance": "stackexchange_0000F.jsonl.gz:901652",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44657163"
} |
3ef7a5b6104dc92c8cab5fa31c9ebabd8bf71c87 | Stackoverflow Stackexchange
Q: Is there a way to save a value across multiple executions of a C++ application? For example,
int var;
int main() {
if(var==5) {
cout<<"Program has been run before"<<endl;
}
else {
var = 5;
cout<<"First run"<<endl;
}
}
This would print First run the first time, and Program has been run before each time after. Is this possible at all?
EDIT: a file won't work, is there any other method?
A: You need to save that counter somewhere outside of the application. The variables are stored in the memory that is reserved for the process. So when your process dies, the values in memory are gone as well.
If a flat file does not work, other options could be a database or perhaps even a separate daemon that keeps track of the run times of a certain application. But if you want to persist the counter over power cycles, you will need to save that data value somewhere in persistent memory (e.g. hard drive)
A: Ok, so here's the gist of it:
If the kernel you are running doesn't provide files, you need to give specific details about what kernel and/or device you are using, and whether you need to store the values between "reboots", since not being able to create files sounds quite specific.
If you don't have any flash/hdd/ssd or other type of "hard" to save data to, saving values between executions is impossible, you can't save values in RAM due to its dynamic nature.
What you could do is:
a) Write your own primitive fs management tool, if your architecture only ever runs your app this should be easy since you don't need to make a lot of checks, but you need to have a static memory of sorts to store the bytes to
b) At the end of execution, re-compile the initial program and replace the values you want to replace with the ones present in your current program
c) Save the values in an environment variable via the shell:
#include <stdlib.h>
std::string state = "EXTERNAL_STATE=" + my_variable;
putenv(state.data()); // putenv keeps the pointer, so 'state' must outlive its use
d) Send the state you wish to save over the network to a machine that has a filesystem and read/write it from there.
e) Have a separate application that runs in a loop and listens for input from the console. When it receives input, it runs your program with said variable as the parameter; when your program returns, it outputs the variable and the "parent" application reads it and sets it internally
A: I came out with the idea of using shared memory from boost libraries.
The concept is that the first time the program runs, it creates another process of itself, just called with a specific parameter (yes, it's a sort of a fork, but in this way we have a portable solution). The parallel process just handles the initialization of the shared memory, and waits for a termination signal.
The major downside of the following implementation is that, in theory, the shared memory of the client (not the manager) could be opened before the server (which handles the shared memory) has completed the initialization.
Oh, I am just printing the index of the run (zero-based), just for demonstration. Here is the code.
#include <cstring>
#include <iostream>
#include <thread>
#include <chrono>
#include <mutex>
#include <condition_variable>
#include <csignal>

#include <boost/process.hpp>
#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/mapped_region.hpp>

static constexpr const char* daemonizer_string = "--daemon";
static constexpr const char* shared_memory_name = "shared_memory";

static std::mutex waiter_mutex;
static std::condition_variable waiter_cv;

struct shared_data_type
{
    std::size_t count = 0;
};

extern "C"
void signal_handler(int)
{
    waiter_cv.notify_one();
}

int main(int argc, const char* argv[])
{
    namespace bp = boost::process;
    namespace bi = boost::interprocess;

    if(argc == 2 and std::strcmp(argv[1], daemonizer_string) == 0)
    {
        struct shm_remove
        {
            shm_remove() { bi::shared_memory_object::remove("shared_memory"); }
            ~shm_remove() { bi::shared_memory_object::remove("shared_memory"); }
        } shm_remover;

        bi::shared_memory_object shm(bi::create_only, shared_memory_name, bi::read_write);
        shm.truncate(sizeof(shared_data_type));
        bi::mapped_region region(shm, bi::read_write);
        void* region_address = region.get_address();
        shared_data_type* shared_data = new (region_address) shared_data_type;

        std::signal(SIGTERM, signal_handler);
        {
            std::unique_lock<std::mutex> lock(waiter_mutex);
            waiter_cv.wait(lock);
        }
        shared_data->~shared_data_type();
    }
    else
    {
        bi::shared_memory_object shm;
        try
        {
            shm = bi::shared_memory_object(bi::open_only, shared_memory_name, bi::read_write);
        }
        catch(std::exception&)
        {
            using namespace std::literals::chrono_literals;
            bp::spawn(argv[0], daemonizer_string);
            std::this_thread::sleep_for(100ms);
            shm = bi::shared_memory_object(bi::open_only, shared_memory_name, bi::read_write);
        }
        bi::mapped_region region(shm, bi::read_write);
        shared_data_type& shared_data = *static_cast<shared_data_type*>(region.get_address());
        std::cout << shared_data.count++ << '\n';
    }
}
| stackoverflow | {
"language": "en",
"length": 696,
"provenance": "stackexchange_0000F.jsonl.gz:901694",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44657305"
} |
f79e4a9a96e9823d1edec8bb6fa92a9fe637465c | Stackoverflow Stackexchange
Q: Which Docker versions will K8s 1.7 support? 1.7 is around the corner according to the release plan. I'm wondering which Docker versions will be supported. Up until now I got this information from the Changelogs External Dependency Version information paragraph --> https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#external-dependency-version-information
A: I asked the same question in the Kubernetes Google Groups and got an official answer.
According to the sig-node team Kubernetes will continue to support only Docker 1.12.x at the launch of Kubernetes 1.7. They will however add 1.13 support early in the lifecycle of K8s 1.7.
Just FYI: Q2 2017 marks the EOL of Docker 1.12 according to their Maintenance Lifecycle
| stackoverflow | {
"language": "en",
"length": 106,
"provenance": "stackexchange_0000F.jsonl.gz:901701",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44657320"
} |
83bce96aadfa86c3ff032da5550ea50e6272788a | Stackoverflow Stackexchange
Q: Wrong SCRIPT_FILENAME & PHP_SELF in Apache 2.4.26 With Apache 2.4.26 using php-fpm 7.1.6, $_SERVER['SCRIPT_FILENAME'] (and $_SERVER['PHP_SELF']) is incorrect on a folder:
Apache 2.4.26:
/index.php
Apache 2.4.25:
/myfolder/index.php
What is wrong?
A: I fixed it in the Apache config with this new config directive:
ProxyFCGIBackendType GENERIC
In the global configuration, before the SetHandler directive. The default is FPM, but that is not correct for some php-fpm configurations (SetHandler and socket).
With this bug, the $_SERVER['SCRIPT_FILENAME'] and $_SERVER['PHP_SELF'] Apache variables ($_SERVER, injected from php-fpm) are wrong: they don't have the path!
ProxyFCGIBackendType defaults to FPM, but that is wrong for many configurations.
Apache httpd should make GENERIC the default so as not to break websites.
See: https://httpd.apache.org/docs/2.4/en/mod/mod_proxy_fcgi.html#proxyfcgibackendtype
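A minimal sketch of where the directive fits relative to the SetHandler (the socket path and FilesMatch pattern here are illustrative, not taken from the question):

```apache
# Global server config: revert to the pre-2.4.26 path handling
ProxyFCGIBackendType GENERIC

<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php/php-fpm.sock|fcgi://localhost/"
</FilesMatch>
```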
A: (This would be better as a comment, but I'm under the minimum rep.)
Like neoteknic points out in their answer, using the GENERIC backend type will revert you to 2.4.25 behavior while we get this bug worked out.
Note, however, that the 2.4.25 behavior breaks some other people too and is not a panacea. If you're in a bind, the new ProxyFCGISetEnvIf directive can give you direct control over FCGI envvars, and it accepts httpd expression syntax for some pretty complex manipulations, if you need them. This only helps if you understand what those envvars are supposed to be, but it's another option.
| stackoverflow | {
"language": "en",
"length": 215,
"provenance": "stackexchange_0000F.jsonl.gz:901715",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44657362"
} |
279d384f0341db523d2b626ac6a86cb0e0f5337a | Stackoverflow Stackexchange
Q: How to replace certain values in Tensorflow tensor with the values of the other tensor? I have a Tensorflow tensor A of size (64, 2, 82, 1), and I want to replace its (:, :, 80:82, :) part with the corresponding part of the tensor B (also (64, 2, 82, 1) size).
How would I do that?
P.S.: To be precise, I mean the operation that would look like this in the numpy:
A[:, :, 80:82, :] = B[:, :, 80:82, :]
A: the following code might help you to get some idea,
a = tf.constant([[11, 0, 13, 14],
                 [21, 22, 23, 0]])
condition = tf.equal(a, 0)
case_true = tf.reshape(tf.multiply(tf.ones([8], tf.int32), -9999), [2, 4])
case_false = a
a_m = tf.where(condition, case_true, case_false)
sess = tf.Session()
sess.run(a_m)
Here I am accessing individual elements of a tensor!
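Since plain tensors in graph-mode TensorFlow are immutable, the slice replacement the question asks about can also be built by concatenating slices along axis 2 with tf.concat. A NumPy sketch of the same operation (the tf.concat call would take the analogous slices):

```python
import numpy as np

# Emulate A[:, :, 80:82, :] = B[:, :, 80:82, :] without in-place assignment:
# keep A's first 80 positions along axis 2, take B's last two.
A = np.zeros((64, 2, 82, 1))
B = np.ones((64, 2, 82, 1))

result = np.concatenate([A[:, :, :80, :], B[:, :, 80:82, :]], axis=2)
print(result.shape)  # (64, 2, 82, 1)
```

In TensorFlow this would be `tf.concat([A[:, :, :80, :], B[:, :, 80:82, :]], axis=2)`, producing a new tensor rather than mutating A.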
A: tf.assign should work (not tested):
tf.assign(A[:, :, 80:82, :], B[:, :, 80:82, :])
| stackoverflow | {
"language": "en",
"length": 144,
"provenance": "stackexchange_0000F.jsonl.gz:901725",
"question_score": "13",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44657388"
} |
f65e6dbc3b2c616c6529e1a94d6037b41e7d27cd | Stackoverflow Stackexchange
Q: Type error: reading ESRI shapefile driver using OGR/GDAL in Python I'm trying to use gdal_polygonize within Python to convert a raster to a shapefile using the following code:
# define output shapefile
driver_name = "ESRI Shapefile"
drv = ogr.GetDriverByName(driver_name)
dst_ds = drv.CreateDataSource(DataDirectory+OutputShapefile)
dst_layer = dst_ds.CreateLayer(DataDirectory+dst_layername, srs = Projection)
However I keep getting the following error when reading in the driver by name:
File "/home/s0923330/miniconda2/lib/python2.7/site-packages/osgeo/ogr.py", line 7262, in GetDriverByName
return _ogr.GetDriverByName(*args)
TypeError: in method 'GetDriverByName', argument 1 of type 'char const *'
The raster that I'm reading in is perfectly fine, and I can open it with gdal from the command line with no problems. It just seems to be a problem with OGR and Python. I was wondering if anybody has come across this problem before? It's GDAL version 2.1.0.
Thank you in advance!
A: I solved this problem by commenting out (or removing) this line in my code:
# from __future__ import unicode_literals
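For context: under Python 2, `from __future__ import unicode_literals` turns the `"ESRI Shapefile"` literal into a `unicode` object, which the SWIG-generated `char const *` signature rejects. If removing the import is not an option, a common workaround is to coerce the driver name to a byte string before passing it in; a sketch without GDAL (the `.encode` call is the relevant part, the `GetDriverByName` line is shown only as a comment):

```python
from __future__ import unicode_literals  # every string literal is now unicode on Python 2

driver_name = "ESRI Shapefile"
# Pass bytes to the SWIG binding instead, e.g.:
#   drv = ogr.GetDriverByName(driver_name.encode("ascii"))
coerced = driver_name.encode("ascii")
print(coerced == b"ESRI Shapefile")  # True
```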
| stackoverflow | {
"language": "en",
"length": 154,
"provenance": "stackexchange_0000F.jsonl.gz:901745",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44657434"
} |
1eb155acd4d569fac77e151cf1b79f5771e3221b | Stackoverflow Stackexchange
Q: What is difference between .bowerrc file and bower.json file I'm working on a project and its root directory has both files:
* .bowerrc
* bower.json
They both seem to configure bower, they both seem to use JSON, and my project has both files. But how are they different?
A: The bower site explains the difference:
* bower.json exists inside of a package (at the root directory of a package). A package is contained code which other developers/packages can use. So if you make your own package, you need a bower.json. If you use another package, it must have a bower.json.
* .bowerrc exists either at the "user's home folder" (aka ~) and/or the "global folder" (aka /). This file configures how the bower program (the command-line utility) will work. The configurations in this file are merged with configurations you can specify other ways (i.e. arguments via the command-line) so bower knows how to run.
This repository also explains the difference (emphasis added):
* With bower.json for project configuration (like package.json or Gemfile)
* With configuration variables for execution in general (like command-line flags)
Notice that "configuration" can be expressed in the .bowerrc file.
| stackoverflow | {
"language": "en",
"length": 191,
"provenance": "stackexchange_0000F.jsonl.gz:901759",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44657471"
} |
8679b036fb3acaa976a154266f777e2b51727f9a | Stackoverflow Stackexchange
Q: Streaming from particular partition within a topic (Kafka Streams) As far as I understand after reading Kafka Streams documentation, it's not possible to use it for streaming data from only one partition from given topic, one always have to read it whole.
Is that correct?
If so, are there any plans to provide such an option to the API in the future?
A: No, you can't do that, because the internal consumer subscribes to the topic by joining a consumer group (specified through the application-id), so the partitions are assigned automatically.
Btw, why do you want to do that?
Without re-balancing you lose the scalability feature provided by Kafka Streams: just by adding/removing instances of your streaming application you can scale the entire process, thanks to the re-balancing on partitions.
A: You can do something similar to your need using PartitionGrouper. A partition grouper can be used to create a stream task based on the given topic partition.
For example refer to the DefaultPartitionGrouper implementation. But it would require customization.
Therefore as @ppatierno suggested please look into your usecase and then design the topology in a way that you do not have to deviate from a standard practice.
A: You can do this by specifying the topic, partition number and offset correctly:
Map(new TopicPartition(topic, partition) -> 2L)

val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  PreferConsistent,
  Subscribe[String, String](topics, kafkaParams, offsets))
where partition refers to the Partition number,
2L refers to the starting offset of the partition
Refer streaming_from_specific_partiton for more details.
A: You cannot specify a partition in the Kafka consumer; automatic partition assignment is how Kafka scales, and it is simply how a distributed system works. What you can do is segment your data, allocate each segment to its own topic, and give each topic only one partition.
Since topics are registered in ZooKeeper, you might run into issues if you try to add too many of them, e.g. the case where you have a million users and have decided to create a topic per user.
| stackoverflow | {
"language": "en",
"length": 333,
"provenance": "stackexchange_0000F.jsonl.gz:901776",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44657521"
} |
1075a0da112bba5dbbf137cf3c6d5cf8230811d2 | Stackoverflow Stackexchange
Q: How to trim transparent pixels in svg? I need an option similar to what Photoshop has,
for SVG files.
What is the best way to achieve that, and in which program?
I am also considering manually editing the .svg file.
A: You don't have that in Illustrator. To trim an SVG in Illustrator, just use the Artboard tool to adjust the artboard to the edges of your SVG.
Also, you can just open your SVG in a text editor and manually change the viewBox size to trim the SVG down. The viewBox is pretty much the same thing as a clipping mask.
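As an illustration of the manual viewBox edit (the artwork and numbers below are made up): if the visible content's bounding box runs from 10 to 90 on both axes, shrinking the viewBox from the full canvas down to that box trims the empty margin.

```xml
<!-- Before: artwork occupies only part of the 100x100 canvas -->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <circle cx="50" cy="50" r="40"/>
</svg>

<!-- After: viewBox tightened to the artwork's bounding box (min-x min-y width height) -->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="10 10 80 80">
  <circle cx="50" cy="50" r="40"/>
</svg>
```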
A: Yes you can
*
*Select everything that should be included in the SVG
*Select Objects -> Artboards -> Fit Artboard to Bounds
| stackoverflow | {
"language": "en",
"length": 124,
"provenance": "stackexchange_0000F.jsonl.gz:901787",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44657543"
} |
6a6961baa4d32fbde74378ceb6fd9b7fbcc24d1d | Stackoverflow Stackexchange
Q: Order by split string django orm I have an ID as a reference number and year in this format:
1/17
98/15
2/17
112/17
2345/17
67/17
9/17
8974/16
When I get my IDs out using the Django ORM:
obj = MyIDs.objects.filter(run='run_1').order_by('ID')
I get them out in the order of the first number:
1/17
112/17
2/17
2345/17
67/17
8974/16
9/17
98/15
However, as the number after the / is the year, I would like to order them by the year, then the number. I am able to do this easily in MySQL (using substring_index etc.) and also if it were a Python list, but as I now don't want to process my objects before sending them to my HTML template - is there a way to do this in the ORM?
A: Django 2.0 is currently in alpha stage, but it has the StrIndex function that will probably be helpful. I haven't tested this, but it is a draft of what you can do. The slash will remain in the string, but since you're just sorting it, I don't think it will be a problem for you.
from django.db.models import F
from django.db.models.functions import StrIndex, Substr

MyIDs.objects.filter(run='run_1').annotate(
    slash_pos=StrIndex(F('id'), '/')
).annotate(
    y=Substr(F('id'), F('slash_pos'))
).order_by('y', 'id')
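For reference, the ordering being asked for (year first, then the numeric reference) can be sketched in plain Python; the ORM draft above approximates it at the database level, though it compares the substrings as strings rather than numbers:

```python
ids = ["1/17", "98/15", "2/17", "112/17", "2345/17", "67/17", "9/17", "8974/16"]

def year_then_number(ref):
    # "112/17" -> (17, 112): sort by year first, then by the numeric reference
    number, year = ref.split("/")
    return (int(year), int(number))

print(sorted(ids, key=year_then_number))
# → ['98/15', '8974/16', '1/17', '2/17', '9/17', '67/17', '112/17', '2345/17']
```

Getting the exact numeric ordering in the ORM itself may require casting the substrings to integers (or raw SQL), depending on your database.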
| stackoverflow | {
"language": "en",
"length": 194,
"provenance": "stackexchange_0000F.jsonl.gz:901795",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44657566"
} |
40dad041c1ba966b89750977d33c869a249d62b6 | Stackoverflow Stackexchange
Q: Azure function apps logs not showing I'm relatively new to Azure and I just went through the tutorial on how to create a new Azure function, which is triggered when a new blob is created, and had this as the default code in it.
public static void Run(Stream myBlob, string name, TraceWriter log)
{
log.Info($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {myBlob.Length} Bytes");
}
From what I can see in the tutorial, I should be able to see some information in the "logs" area below the code, but nothing shows up. I've been checking for a solution for a while now but can't seem to find anything useful.
Any help would be greatly appreciated.
A: Microsoft keeps changing the interface, so many of these answers are no longer correct.
The best way I have found to view the logs is to go into Application Insights for the function itself and then, under Transaction search, search for some text that might be in the log.
A: The Azure Portal was updated last week and they moved logs from Monitor to the home of the actual Azure Function. However, to see them you need to click Test. I raised an issue with Microsoft support and they spent several days twiddling their thumbs before I came across the answer myself. I hope this saves others a bit of time.
A: The log window is a bit fragile and doesn't always show the logs. However, logs are also written to the log files.
You can access these logs from the Kudu console:
https://[your-function-app].scm.azurewebsites.net/
From the menu, select Debug console > CMD
On the list of files, go into LogFiles > Application > Functions > Function > [Name of your function]
There you will see a list of log files.
A: Log messages should show under the function code, if you're watching that window at the time of the function's execution:
To view log messages made while you weren't looking, you'll need Application Insights configured. If that's configured, that should show under the Monitor tab:
A: Following the advice here worked for me. Configuring Log Level for Azure Functions
If you want to see your logs show up immediately in the Portal console after you press "Run", then go to your "Function app settings" and add the following to your host.json file:
"logging": {
"fileLoggingMode": "always",
"logLevel": {
"default": "Information",
"Host.Results": "Error",
"Function": "Trace",
"Host.Aggregator": "Trace"
}
}
Note that this only worked for Javascript functions. For locally developed functions in other languages, the console can be a bit skittish.
A: Indeed, the Logs section of the Function App in the Azure portal seems fragile. I had it open a few hours unused and then it did not log anything anymore. Closing the Function App and reopening it solved the problem.
A: If you use Visual Studio Code and the Azure Functions Extension (link) you can directly connect to the log stream of the function:
It will open an output window where you can see all the logs.
EDIT:
To get to this point you will have to go through the Azure Functions Extension tab and then select your subscription which will have whichever functions you have.
A: In my case, right click on Function App and Refresh
A: I would entirely avoid waiting for the logs to appear in the function app. Go over to Monitor on the left and go through it that way. Even then, though, there can be a solid 5-minute delay on them coming through. How on earth can AWS be the only provider in this space that's able to give you logs immediately? GCP is bad for it as well.. (not sure about alicloud)
| stackoverflow | {
"language": "en",
"length": 618,
"provenance": "stackexchange_0000F.jsonl.gz:901802",
"question_score": "46",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44657584"
} |
3923ea7d4cb07c2b9c08c42398b563ae941853b0 | Stackoverflow Stackexchange
Q: Strategy for partitioning dask dataframes efficiently The documentation for Dask talks about repartitioning to reduce overhead here.
They however seem to indicate you need some knowledge of what your dataframe will look like beforehand (i.e. that there will be 1/100th of the data expected).
Is there a good way to repartition sensibly without making assumptions? At the moment I just repartition with npartitions = ncores * magic_number, and set force to True to expand partitions if need be. This one-size-fits-all approach works but is definitely suboptimal as my dataset varies in size.
The data is time series data, but unfortunately not at regular intervals. I've used repartitioning by time frequency in the past, but this would be suboptimal because of how irregular the data is (sometimes nothing for minutes then thousands in seconds).
A: As of Dask 2.0.0 you may call .repartition(partition_size="100MB").
This method performs an object-considerate (.memory_usage(deep=True)) breakdown of partition size. It will join smaller partitions, or split partitions that have grown too large.
Dask's Documentation also outlines the usage.
A: After discussion with mrocklin a decent strategy for partitioning is to aim for 100MB partition sizes guided by df.memory_usage().sum().compute(). With datasets that fit in RAM the additional work this might involve can be mitigated with use of df.persist() placed at relevant points.
A: Just to add to Samantha Hughes' answer:
memory_usage() by default ignores memory consumption of object dtype columns. For the datasets I have been working with recently this leads to an underestimate of memory usage of about 10x.
Unless you are sure there are no object dtype columns I would suggest specifying deep=True, that is, repartition using:
df.repartition(npartitions=1 + df.memory_usage(deep=True).sum().compute() // n)
Where n is your target partition size in bytes. Adding 1 ensures the number of partitions is always greater than 1 (// performs floor division).
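The deep=True point is easy to check with plain pandas (a generic illustration, not dask-specific; the sizes are only indicative): without deep introspection, object-dtype columns report only their 8-byte pointers, not the string payloads.

```python
import pandas as pd

# One object-dtype column holding 1000 strings of 100 characters each
df = pd.DataFrame({"s": ["x" * 100] * 1000})

shallow = df.memory_usage().sum()        # pointers only for object columns
deep = df.memory_usage(deep=True).sum()  # string payloads included

print(deep > shallow)  # → True
```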
A: I tried to check what the optimal number is for my case.
I have 100GB CSV files with 250M rows and 25 columns.
I work on a laptop with 8 cores.
I ran the function "describe" on 1, 5, 30, 1000 partitions:
df = df.repartition(npartitions=1)
a1=df['age'].describe().compute()
df = df.repartition(npartitions=5)
a2=df['age'].describe().compute()
df = df.repartition(npartitions=30)
a3=df['age'].describe().compute()
df = df.repartition(npartitions=100)
a4=df['age'].describe().compute()
About speed:
5, 30 partitions > around 3 minutes
1, 1000 partitions > around 9 minutes
But... I found that "order" functions like median or percentile give the wrong number when I used more than one partition.
1 partition gives the right number (I checked it with small data using pandas and dask).
| stackoverflow | {
"language": "en",
"length": 410,
"provenance": "stackexchange_0000F.jsonl.gz:901816",
"question_score": "31",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44657631"
} |
2fc1ca3b7e926585e6b8c39a240372f2cbe8f4ad | Stackoverflow Stackexchange
Q: Azure App Service not installing all node dependencies I have a Node.js Azure App Service. It seems like npm isn't installing all the dependencies correctly. The site runs fine locally; when I try it in Azure, I get 500 responses.
When I check the log, it shows various node dependencies missing. After I install the missing one using the console on the portal, another missing-dependency notification pops up.
Using git deploy, I've tried just committing the entire node_modules folder, but for some reason all the bits don't get uploaded.
Grateful for any ideas about how to sort this out.
| stackoverflow | {
"language": "en",
"length": 100,
"provenance": "stackexchange_0000F.jsonl.gz:901841",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44657687"
} |
27867087fabe6bc12885d32e6f3c3ab6e286b397 | Stackoverflow Stackexchange
Q: How to change the developer's info How do I change the developer's email that a new user gets when they sign up for my site? See the image below.
Will I have to completely set up a new Firebase account with a different email address and change all the code corresponding to this one? Or is there another way?
A: If I'm understanding you correctly, you want your project to be associated with a different email, correct? If that's what you mean, then you can go to the Firebase console, click the gear>Users and Permissions, add the other email as the owner, accept the role with the other email, and then delete the original email you no longer want to associate.
A: The email ID and all the other information appearing under the Developer info popup can be changed on the OAuth consent screen.
Here is the path:
"https://console.cloud.google.com/" -> "APIs & Services" -> "Credentials" -> "OAuth consent screen"
Here is the screenshot:
A: If the Google login is set up through Firebase then the developer's email shown on the popup can be updated by the following steps (shown by the green marking in the diagram below):
*
*Firebase console
*Click the gear > Project settings
*Select the General tab, under Public settings > update support email
In addition, this can be confirmed by following the step shown in the second answer.
A: I had to follow the steps below:
*
*Go to project settings
*Add a new member
*Complete the new member setup
*From the General tab under Public settings you can choose the newly added member.
| stackoverflow | {
"language": "en",
"length": 263,
"provenance": "stackexchange_0000F.jsonl.gz:901853",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44657719"
} |
b87682c485dbaa7b58b1d12683df1217ade7ce58 | Stackoverflow Stackexchange
Q: Deploy/Update WebExtension Firefox/Chrome I have an extension that does the same thing on Firefox and Chrome, which I would like to deploy and be able to update in an enterprise environment for Firefox and Chrome.
I do not want to use the stores, and I could not find information on how I would be able to deploy/update this extension on 200+ computers without having to do it manually.
I would appreciate it if someone has some information about how to do this.
| stackoverflow | {
"language": "en",
"length": 77,
"provenance": "stackexchange_0000F.jsonl.gz:901900",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44657863"
} |
cd5ad594bee7aadac9e5c0bf531f55c44bb6dd6b | Stackoverflow Stackexchange
Q: How to load numpy npz files in google-cloud-ml jobs or from Google Cloud Storage? I have a google-cloud-ml job that requires loading numpy .npz files from a gs bucket. I followed this example on how to load .npy files from gs, but it didn't work for me since .npz files are compressed.
Here's my code:
from StringIO import StringIO
import tensorflow as tf
import numpy as np
from tensorflow.python.lib.io import file_io
f = StringIO(file_io.read_file_to_string('gs://my-bucket/data.npz'))
data = np.load(f)
And here's the error message:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa2 in position 10: invalid start byte
Apparently, encoding the data to str is not correct, but I'm not sure how to address this.
Can someone help? Thanks!
A: It turns out I need to set the binary flag to True in file_io.read_file_to_string().
Here's the working code:
from io import BytesIO
import tensorflow as tf
import numpy as np
from tensorflow.python.lib.io import file_io
f = BytesIO(file_io.read_file_to_string('gs://my-bucket/data.npz', binary_mode=True))
data = np.load(f)
And this works for both compressed and uncompressed .npz files.
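The underlying mechanism (np.load reading an .npz archive out of any seekable binary file object) can be verified locally, without GCS:

```python
import io
import numpy as np

# Round-trip an .npz archive entirely in memory
buf = io.BytesIO()
np.savez(buf, a=np.arange(3))
buf.seek(0)

data = np.load(buf)
print(data["a"].tolist())  # → [0, 1, 2]
```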
A: Try using io.BytesIO instead, which has the added bonus of being forwards-compatible with Python 3 (note that binary_mode=True is an argument to read_file_to_string, not to BytesIO):
import io
import tensorflow as tf
import numpy as np
from tensorflow.python.lib.io import file_io

f = io.BytesIO(file_io.read_file_to_string('gs://my-bucket/data.npz',
                                            binary_mode=True))
data = np.load(f)
A: An alternative is (note the difference between earlier TF versions and later ones):
from io import BytesIO

import numpy as np
from tensorflow.python.lib.io import file_io
from tensorflow import __version__ as tf_version

if tf_version >= '1.1.0':
    mode = 'rb'
else:  # for TF version 1.0
    mode = 'r'

f_stream = file_io.FileIO('mydata.npz', mode)
d = np.load(BytesIO(f_stream.read()))
Similarly, for pickle files:
import pickle
d = pickle.load(file_io.FileIO('mydata.pickle', mode))
| stackoverflow | {
"language": "en",
"length": 269,
"provenance": "stackexchange_0000F.jsonl.gz:901911",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44657902"
} |
483c0fb2bb2916a7df9e5a5e86a72d8176353d32 | Stackoverflow Stackexchange
Q: How do I prefetch url's in ionic/angularjs? I am pretty new to Ionic 1 and I am working on an application (with Ionic 1 and AngularJS) with multiple URLs, where each URL brings up a list of categories, followed by a list of items for each category, and each item has a document URL. How do I preload all these URLs on launch in the background, but not display them? Is there any way this can be achieved? A good code sample or tutorial would help greatly.
Also, please let me know if this would be the best approach, as in pre-loading and pre-caching all content upon launch, or should it be done category by category or some other way.
Thanks in advance!
A: You can make multiple asynchronous service calls in the background using $q.
Make a list of URLs in an array and call them at once using $q.all(listOfURLs).
Using promises, retrieve each response.
By making this asynchronous you can save a lot of time.
After getting the responses you can either store them in $rootScope or in localStorage/sessionStorage.
A: I am assuming you don't want the next screens to load data themselves, so the user gets a flawless experience.
Yes, you can start loading the URLs on your very first page, since you want them to fetch the data you will use on future screens.
In terms of storage:
*
*In AngularJS, if you want something to persist throughout the application scope, you should use $rootScope [beware: keeping a lot of data there
may lead to memory issues, so you need to clear it regularly].
*Or another option is to store it in localStorage and fetch it as per your need.
*If you want, you can share those arrays between different controllers of the screens.
While loading [getting the response from the server] you can do one of two things:
1. get a single JSON response having all the data, or
2. have multiple URLs and load them serially.
As for your requirement of loading the 5th screen's (page's) data in advance: it's not good practice, and it even stops the user from seeing updates, but as it's your requirement, we have a couple of approaches:
*
*Add all the categories and their respective details as per your pastebin, like cardiac then its details.. kidney then details..
You can do this by managing hierarchies [categories], like a parent main group and its child sub-groups in a JSONArray, with the details in JSONObjects. (This change would be on the sender side - the server.)
You need to load only one URL to get all the data,
so you don't need to load from different URLs like you're doing now. But beware: this would be a big JSON. So when you store it, separate the categories and the required data [screen-wise requirements] and store them in local storage for easy access.
*Another approach would be that you provide your [category] subgroup names to load, so the loading would be firing the same URL with different category names to get the data and store it in local storage.
This may lead to firing around 10-15 URLs [depends on your categories], which may affect the UI thread's responsiveness.
This won't need any changes to your server-side response.
Programmatic approach to load the URLs sequentially:

URL loading: this method will get the details of a particular category [an id or anything that
works for you]. It will fire an HTTP request and return a result.
getCategoryDetails(category){
url = url+category;
return $http({
method: 'GET',
url: url,
headers: --
}).then(function onSuccess(response) { //<--- `.then` transforms the promise here
//You can ether store in local storage
return response
}, function onError(response) {
throw customExceptionHadnler.getErrorMsg(response.status, response.data);
});
}
Parallel: this method will do it in parallel; we just load the categories [ids], since we have all of them, and then use $q.all to wait for all the URL loading to finish.
function loadUrlsParallel(urls) {
var loadUrls = []
for(var i = 0; i < urls.length; i++) {
loadUrls.push(getCategoryDetails(urls[i]))
}
return $q.all(loadUrls)
}
First API: this method loads the first URL; afterwards, load the remaining URLs in
parallel by calling the method above.
getListOfCategories(){
url = url;
return $http({
method: 'GET',
url: url,
headers: --
}).then(function onSuccess(response) { //<--- `.then` transforms the promise here
//You can ether store in local storage or directly send response
return response
}, function onError(response) {
throw customExceptionHadnler.getErrorMsg(response.status, response.data);
});
}
urls: you have to prepare the list of URLs, appending each category to
load, after loading the first URL [expecting this returns all the
categories you will require in your app beforehand], and pass it to the
loadUrlsParallel method.
You can write the loadUrl methods as per your convenience; whatever
is given here is for example purposes, so it may not run as-is.
You can load the API responses everywhere from the local storage where you stored them after the API calls, so this will not require you to execute API calls on every loading of a page [screen].
Hope this helps you and solves your problem.
A: Update - As the OP is already aware of and using localStorage, here are some additional suggestions :-
In that case, you could either call all of your service methods for fetching data at startup or you could use a headless browser such as 'PhantomJS' to visit these URLs at startup and fetch the data.
Thus, your code would look something like :-
var webPage = require('webpage');
var page = webPage.create();
page.open('http://www.google.com/', function(status) {
console.log('Status: ' + status);
// Do other things here...
});
For more information, regarding PhantomJS, please refer to the following links :-
http://phantomjs.org/
http://phantomjs.org/api/webpage/method/open.html
Earlier Suggestions
Make an HTTP request in your service to fetch the data and store it to localStorage, as is shown below :-
$http.get('url').then(function(response) {
  var obj = response.data;
  window.localStorage.setItem('key', JSON.stringify(obj)); // Store data to localStorage for later use
});
For fetching data :-
var cachedData = JSON.parse(window.localStorage.getItem('key')); // Load cached data stored earlier
Please refer to the following link for detailed information regarding 'localStorage' :-
https://www.w3schools.com/html/html5_webstorage.asp
Hope this helps!
A: The best way to share data between different views in Angular is to use a service, as it is a singleton and can be used in other controllers.
In your main controller you can prefetch your lists of categories asynchronously through a service, which can then be shared with the next views. Below is a small demo you can refer to:
angular.module("test").service("testservice", function($http, $q) {
var lists = undefined;
// fetch all lists in deferred technique
this.getLists = function() {
// if lists object is not defined then start the new process for fetch it
if (!lists) {
// create deferred object using $q
var deferred = $q.defer();
// get lists form backend
$http.get(URL)
.then(function(result) {
// save fetched posts to the local variable
lists = result.data;
// resolve the deferred
deferred.resolve(lists);
}, function(error) {
//handle error
deferred.reject(error);
});
// set the posts object to be a promise until result comeback
lists = deferred.promise;
}
// in any way wrap the lists object with $q.when which means:
// local posts object could be:
// a promise
// a real lists data
// both cases will be handled as promise because $q.when on real data will resolve it immediately
return $q.when(lists);
};
this.getLists2=function(){
//do it similarly as above
};
}).controller("mainController", function(testservice, $scope) {
    $scope.lists1 = function() {
        testservice.getLists().then(function(lists) {
            //do something
        });
    };
    $scope.lists2 = function() {
        testservice.getLists2().then(function(lists) {
            //do something
        });
    };
    $scope.lists1();
    $scope.lists2();
}).controller("demoController1", function(testservice, $scope) {
    $scope.lists1 = function() {
        testservice.getLists().then(function(lists) {
            //do something
        });
    };
    $scope.lists2 = function() {
        testservice.getLists2().then(function(lists) {
            //do something
        });
    };
    $scope.lists1();
    $scope.lists2();
});
| stackoverflow | {
"language": "en",
"length": 1201,
"provenance": "stackexchange_0000F.jsonl.gz:901959",
"question_score": "13",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44658033"
} |
6e9372026ff053f7f64164d22db5e64755851573 | Stackoverflow Stackexchange
Q: ternary operator based on if else If I have the following if/else statement
if a.present? && b.value == 'N'
b = test
elsif a.present? && b.value == 'Y'
b = guest
end
I can write a ternary operation for this:
b = (a.present? && b.value == 'N') ? "test" : "guest"
but this ternary operator does not check the condition b.value == 'Y', and the value could also be something else, like 'd' or 'e'.
How do I update the ternary operator so it verifies both conditions from the if and the elsif?
A: b = case b.value
when 'N' then test
when 'Y' then guest
end if a.present?
This is the only DRY answer here so far.
A: For something like this you might want to use a simple look-up table to eliminate some of the logic:
EQUIVALENT = {
'Y' => 'guest',
'N' => 'test'
}
if (a.present?)
b = EQUIVALENT[b.value] || b
end
The || b part may not be necessary if non-mapped b values are ignored.
A: You can use a ternary operator. It doesn't mean you should do it, though:
a.present? && (b.value == 'N' ? b = 'test' : b.value == 'Y' && b = 'guest')
Here's a small test:
class Object
def present?
true
end
end
class NilClass
def present?
false
end
end
a = true
class B
attr_accessor :value
end
b = B.new
b.value = 'Y'
a.present? && (b.value == 'N' ? b = 'test' : b.value == 'Y' && b = 'guest')
p b
# "guest"
A: I would not insist on the ternary operator but extract the common a.present? test in an outer if and then write the rest of the code using if modifiers:
if a.present?
b = test if b.value == 'N'
b = guest if b.value == 'Y'
end
To me, it seems much easier to read this way.
| stackoverflow | {
"language": "en",
"length": 308,
"provenance": "stackexchange_0000F.jsonl.gz:901998",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44658133"
} |
47a6e482106b7739e72e60ec71ae03f655674bfd | Stackoverflow Stackexchange
| Q: Adding argument in class definition I ran across this design pattern in Python wondering if someone can explain as I've never seen it before
def func():
pass
class Child(Parent, f=func):
pass
Not sure what's happening here. Could this work if Parent has metaclass definition in which it changes the class constructor to allow for passing an argument through? Any help is appreciated and sorry for the vagueness
A: This works in Python 3.6 using __init_subclass__ on the parent.
class Parent:
def __init_subclass__(self, f, **kwargs):
super().__init_subclass__(**kwargs)
print(f)
def func():
pass
class Child(Parent, f=func):
pass
Output:
<function func at 0x7f48207cae18>
A: Extra named arguments in the class definition are passed into the class constructor methods - i.e., the metaclass __new__:
In [1]: class M(type):
...: def __new__(metacls, name, bases, namespace, **kwargs):
...: print(f'At metaclass, {kwargs}')
...: return super().__new__(metacls, name, bases, namespace)
...:
In [2]: class A(metaclass=M, f="hello world"): pass
At metaclass, {'f': 'hello world'}
So, a custom metaclass might make use of that, even before Python 3.6. But in Python 3.6, the __init_subclass__ addition makes it much simpler, and therefore useful, to have such arguments - as no custom metaclass is needed.
Note that the __init_subclass__ method in a custom class hierarchy is responsible for ultimately calling object.__init_subclass__ - which does not take any named arguments. So, if you are creating a class hierarchy that makes use of __init_subclass__, each such method should "consume" its specific arguments by removing them from kwargs before calling super().__init_subclass__. The __new__ and __init__ methods from type itself (the default metaclass) simply ignore any named args.
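The consume-and-forward pattern described here can be sketched as follows (the `Plugin`/`registry_name` names are purely illustrative, not from the original question):

```python
class Plugin:
    def __init_subclass__(cls, registry_name=None, **kwargs):
        # consume our own keyword, then forward the rest up the chain,
        # so object.__init_subclass__ never sees unexpected arguments
        super().__init_subclass__(**kwargs)
        cls.registry_name = registry_name

class CsvPlugin(Plugin, registry_name="csv"):
    pass

print(CsvPlugin.registry_name)  # csv
```

Any extra keyword in the `class` statement reaches the nearest `__init_subclass__`; whatever that method doesn't pop from `kwargs` must be valid for the next class up.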
| stackoverflow | {
"language": "en",
"length": 260,
"provenance": "stackexchange_0000F.jsonl.gz:902009",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44658171"
} |
bcc1387d012e2802e66f94225d96fc282869dee0 | Stackoverflow Stackexchange
| Q: How to detect inactivity in foreground react-native I used AppState to detect state changes in the app. But it only works when the app goes to the background or becomes active again, and I need time control over the inactive state.
I need this to work only when the user is inactive in the foreground, but this state also triggers when the app is in the background.
This is my code for state control:
AppState.addEventListener('change', state =>
{
var blockApp = setTimeout(function(){
if(inVerify){
self.goToVerify();
}
}, 1000);
console.log('AppState changed to', state);
switch(state){
case 'active':
clearTimeout(blockApp);
inVerify = false;
break;
case 'inactive':
var x;
break;
case 'background':
inVerify = true;
blockApp;
break;
default:
//nothing state or different state, error in app state
break;
}
}
);
How can I have control over inactivity time?
| stackoverflow | {
"language": "en",
"length": 128,
"provenance": "stackexchange_0000F.jsonl.gz:902020",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44658210"
} |
97e4719829001ceccf97fa7954f127936b04e650 | Stackoverflow Stackexchange
| Q: ASP.NET *PRECOMPILED* but still long startup We have an app that is being published "precompiled". When we deploy it to the server, it updates momentarily, no interruptions, no pauses. This is really important since the app is used by 1000s of companies.
Then we added SignalR.
Which, in turn, brought the OWIN dependency.
Now, the precompiled app chokes for 9-10 seconds when we update. w3wp jumps to 100% cpu load.
I profiled the process's CPU usage and I saw this among the top time-consuming call stacks:
System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start
Microsoft.Owin.Host.SystemWeb.OwinCallContext.AcceptCallback
System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start
Microsoft.Owin.Host.SystemWeb.OwinCallContext.AcceptCallback
System.Web.WebSocketPipeline+<>c__DisplayClass6.<ProcessRequestImplAsync>b__3
System.Web.Util.SynchronizationHelper.SafeWrapCallback
System.Web.Util.SynchronizationHelper.QueueSynchronous
System.Web.WebSocketPipeline+<ProcessRequestImplAsync>d__8.MoveNext
System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start
System.Web.WebSocketPipeline.ProcessRequest
//...skipped
Wait, what? "CompilerServices"?
Apparently, OWIN is doing some compiling work on the background.... Or is it something else?
Anyone faced this? Any workarounds?
What is SignalR doing during startup?
UPDATE: We tried EnableJavaScriptProxies = false; we also tried our own IAssemblyLocator - neither has helped.
| stackoverflow | {
"language": "en",
"length": 143,
"provenance": "stackexchange_0000F.jsonl.gz:902031",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44658233"
} |
e3f6b60271e3213990d18efd6b3ac2f477f03700 | Stackoverflow Stackexchange
| Q: ImportError: No Module named six; six already installed I'm running python 3.6 on Mac OS X El Capitan.
I'm trying to run code that uses the six module, but am getting the following error:
ImportError: No module named six.
When I search for six it appears no problem, and I've made sure that the location is included in the sys.path
$ pip show six
Name: six
Version: 1.10.0
Summary: Python 2 and 3 compatibility utilities
Home-page: http://pypi.python.org/pypi/six/
Author: Benjamin Peterson
Author-email: [email protected]
License: MIT
Location: /usr/anaconda/lib/python3.6/site-packages
However, when I try to run something basic I encounter an error:
$ python -c "import six; print (six.__version__)"
Traceback (most recent call last):
File "<string>", line 1, in <module>
AttributeError: module 'six' has no attribute 'version'
I've tried uninstalling and reinstalling, and have tried installing using $ python -m pip install six, but nothing has worked.
If anyone has any ideas or needs more information, I would appreciate it.
A: This should work:
pip install --ignore-installed six
more info here:
https://github.com/pypa/pip/issues/3165
A: I didn't see any version() method for six in the six documentation (Release 1.10.0), and the error you got also says six doesn't have that attribute, which makes sense to me. Below I print all the attributes, and __version__ is among them:
>>> import six
>>> six.__dir__()
['_moved_attributes', 'remove_move', '__path__', '__author__', '_MovedItems', 'Module_six_moves_urllib', 'Module_six_moves_urllib_robotparser', 'raise_from', '_SixMetaPathImporter', 'get_function_code', 'callable', 'absolute_import', '_func_code', 'moves', '_urllib_error_moved_attributes', 'text_type', 'Module_six_moves_urllib_parse', 'iteritems', 'iterlists', 'print_', '_assertCountEqual', '__builtins__', 'sys', 'Module_six_moves_urllib_error', 'Module_six_moves_urllib_request', 'assertRegex', 'MovedModule', 'create_bound_method', '_urllib_robotparser_moved_attributes', '_func_closure', 'indexbytes', 'string_types', 'with_metaclass', 'reraise', 'exec_', 'assertRaisesRegex', 'types', 'python_2_unicode_compatible', 'get_function_globals', '_LazyModule', '_assertRaisesRegex', '_meth_self', 'itertools', '_LazyDescr', 'BytesIO', 'add_move', 'iterbytes', '_func_defaults', '__file__', 'unichr', 'get_method_function', 'create_unbound_method', 'get_unbound_function', 'Module_six_moves_urllib_response', 'functools', '__doc__', 'assertCountEqual', 'integer_types', 'PY34', '_importer', '__spec__', '_urllib_response_moved_attributes', 'Iterator', 'StringIO', '_import_module', '__package__', '__version__', 'get_function_defaults', 'operator', 'PY3', 'MAXSIZE', 'int2byte', '_urllib_request_moved_attributes', '_urllib_parse_moved_attributes', 'b', 'class_types', 'next', 'itervalues', '_add_doc', 'viewkeys', 'MovedAttribute', 'advance_iterator', '__cached__', 'u', '__loader__', '_func_globals', 'get_method_self', 'PY2', 'iterkeys', 'wraps', '_meth_func', 'byte2int', 'io', 'viewitems', 'viewvalues', '__name__', 'get_function_closure', 'binary_type', 'add_metaclass', '_assertRegex']
>>> six.__version__
'1.10.0'
Therefore you can get the version of six by
python -c "import six; print (six.__version__)"
A: I recently updated macOS Monterey 12.4 and I was facing same issue
ModuleNotFoundError: No module named 'six'
So I have installed 'six' using below command
brew install six
This resolved the issue. Hope this will be useful for someone, who is not able to do import or do not have 'pip'.
A: For those not finding any of the above answers taking care of your issue - python3.10 will require you in some cases to call pip+python version so...
my fix was: pip3.10 install six
And ta-da! Six IS/WAS installed for python3 (if you have overlapping versions) but it was NOT installed for the current version of python.
You can check if this will solve your issue because when you enter: pip3 --version you will likely not see the same version that python3 is calling.
If you see this:
pip3 --version
pip 23.0 from /usr/local/lib/python3.9/site-packages/pip (python 3.9)
but your python3 --version shows you:
python3 --version
Python 3.10.9
(or mis-matched versions of python3 and pip3) you will get your issue resolved using my answer.
(You can check to see the version Python3 alias calls by using: which python3)
A: The issue I ran into was the script I was running was using Python 2.7, and I was using 3+ on my machine. After switching to Python 2.7 using venv, everything worked correctly.
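A common thread in these answers is a mismatch between the interpreter you run and the site-packages pip installed into. It can help to ask the running interpreter itself where it resolves a module from - a minimal stdlib-only sketch (using `json` as a stand-in, since `six` may be the very module that's missing):

```python
import importlib.util
import sys

def where_is(module_name):
    """Return (found, location) for a module as seen by THIS interpreter."""
    spec = importlib.util.find_spec(module_name)
    return (spec is not None, spec.origin if spec else None)

print(sys.executable)           # which python binary is actually running
print(where_is("json"))         # a stdlib module every interpreter has
print(where_is("no_such_mod"))  # (False, None) when the module is missing
```

If `sys.executable` doesn't match the interpreter your `pip` belongs to (compare with `pip --version`), installs will land in the wrong site-packages, producing exactly this ImportError.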
| stackoverflow | {
"language": "en",
"length": 555,
"provenance": "stackexchange_0000F.jsonl.gz:902042",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44658258"
} |
643f98a66179953ab504f6faf5fb832b481ba247 | Stackoverflow Stackexchange
| Q: electron how to allow insecure https Loading https://github.com works fine for example.
But loading an insecure https, the page displays empty.
I've done some research and tried the 3 flags (webSecurity, allowDisplayingInsecureContent, allowRunningInsecureContent) below with no success.
Looking for any known solutions. Thank you.
const { BrowserWindow } = require('electron').remote;
let win = new BrowserWindow({
width: 800,
height: 600,
webPreferences: {
plugins: true,
nodeIntegration: false,
webSecurity: false,
allowDisplayingInsecureContent: true,
allowRunningInsecureContent: true
}
});
win.loadURL('https://insecure...')
A: In main.js, do:
const { app } = require('electron')
app.commandLine.appendSwitch('ignore-certificate-errors')
| stackoverflow | {
"language": "en",
"length": 86,
"provenance": "stackexchange_0000F.jsonl.gz:902050",
"question_score": "17",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44658269"
} |
e4f21feed6039606886a56fb1a29bca72a529ea6 | Stackoverflow Stackexchange
| Q: What is the fastest way to get only directory list I need to build a tree structure recursively of only directories for a given root/parent path. something like "browse for folder" dialog.
Delphi's FindFirst (FindFirstFile API) is not working with faDirectory, and FindNext will get all files (it uses faAnyFile regardless of the specified faDirectory), not only directories, which makes the process of building the tree very slow.
Is there a fast way to get a directory list (tree) without using FindFirst/FindNext?
A: Find(First|Next)/File() is a viable solution, especially in Delphi 7. Just filter out the results you don't need, eg:
if FindFirst(Root, faDirectory, sr) = 0 then
try
repeat
if (sr.Attr and faDirectory <> 0) and (sr.Name <> '.') and (sr.Name <> '..') then
begin
// ...
end;
until FindNext(sr) <> 0;
finally
FindClose(sr);
end;
If that is not fast enough for you, then other options include:
*
*On Win7+, use FindFirstFileEx() with FindExInfoBasic and FIND_FIRST_EX_LARGE_FETCH. That will provide speed improvements over FindFirstFile().
*access the filesystem metadata directly. On NTFS, you can use DeviceIoControl() to enumerate the Master File Table directly.
A: The absolute fastest way is to use the NtQueryDirectoryFile API. With it we can query many files at once rather than a single file, and also select which information is returned (less info means higher speed). Example (with full recursion):
// int nLevel, PSTR prefix for debug only
void ntTraverse(POBJECT_ATTRIBUTES poa, int nLevel, PSTR prefix)
{
enum { ALLOCSIZE = 0x10000 };//64kb
if (nLevel > MAXUCHAR)
{
DbgPrint("nLevel > MAXUCHAR\n");
return ;
}
NTSTATUS status;
IO_STATUS_BLOCK iosb;
UNICODE_STRING ObjectName;
OBJECT_ATTRIBUTES oa = { sizeof(oa), 0, &ObjectName };
DbgPrint("%s[<%wZ>]\n", prefix, poa->ObjectName);
if (0 <= (status = NtOpenFile(&oa.RootDirectory, FILE_GENERIC_READ, poa, &iosb, FILE_SHARE_VALID_FLAGS,
FILE_SYNCHRONOUS_IO_NONALERT|FILE_OPEN_REPARSE_POINT|FILE_OPEN_FOR_BACKUP_INTENT)))
{
if (PVOID buffer = new UCHAR[ALLOCSIZE])
{
union {
PVOID pv;
PBYTE pb;
PFILE_DIRECTORY_INFORMATION DirInfo;
};
while (0 <= (status = NtQueryDirectoryFile(oa.RootDirectory, NULL, NULL, NULL, &iosb,
pv = buffer, ALLOCSIZE, FileDirectoryInformation, 0, NULL, FALSE)))
{
ULONG NextEntryOffset = 0;
do
{
pb += NextEntryOffset;
ObjectName.Buffer = DirInfo->FileName;
switch (ObjectName.Length = (USHORT)DirInfo->FileNameLength)
{
case 2*sizeof(WCHAR):
if (ObjectName.Buffer[1] != '.') break;
case sizeof(WCHAR):
if (ObjectName.Buffer[0] == '.') continue;
}
ObjectName.MaximumLength = ObjectName.Length;
if (DirInfo->FileAttributes & FILE_ATTRIBUTE_DIRECTORY)
{
ntTraverse(&oa, nLevel + 1, prefix - 1);
}
} while (NextEntryOffset = DirInfo->NextEntryOffset);
}
delete [] buffer;
if (status == STATUS_NO_MORE_FILES)
{
status = STATUS_SUCCESS;
}
}
NtClose(oa.RootDirectory);
}
if (0 > status)
{
DbgPrint("---- %x %wZ\n", status, poa->ObjectName);
}
}
void ntTraverse()
{
BOOLEAN b;
RtlAdjustPrivilege(SE_BACKUP_PRIVILEGE, TRUE, FALSE, &b);
char prefix[MAXUCHAR + 1];
memset(prefix, '\t', MAXUCHAR);
prefix[MAXUCHAR] = 0;
STATIC_OBJECT_ATTRIBUTES(oa, "\\systemroot");
ntTraverse(&oa, 0, prefix + MAXUCHAR);
}
But if you use an interactive tree, you don't need to expand the whole tree at once - only the top level. Handle TVN_ITEMEXPANDING with TVE_EXPAND and TVN_ITEMEXPANDED with TVE_COLLAPSE to expand/collapse nodes on user click, and set cChildren.
Using FindFirstFileExW with FIND_FIRST_EX_LARGE_FETCH and FindExInfoBasic gives performance close to NtQueryDirectoryFile, though slightly lower:
WIN32_FIND_DATA fd;
HANDLE hFindFile = FindFirstFileExW(L"..\\*", FindExInfoBasic, &fd, FindExSearchLimitToDirectories, 0, FIND_FIRST_EX_LARGE_FETCH);
if (hFindFile != INVALID_HANDLE_VALUE)
{
do
{
if (fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)
{
if (fd.cFileName[0] == '.')
{
switch (fd.cFileName[1])
{
case 0:
continue;
case '.':
if (fd.cFileName[2] == 0) continue;
break;
}
}
DbgPrint("%S\n", fd.cFileName);
}
} while (FindNextFile(hFindFile, &fd));
FindClose(hFindFile);
}
Unfortunately, FindExSearchLimitToDirectories is currently not implemented.
A: If you have Delphi XE2 or newer, the fastest way is to use the TDirectory.GetDirectories defined in th System.IOUtils.
Sample code:
procedure TVideoCamera.GetInterfaceNameList(
const AInterfaceNameList: TInterfaceNameList);
const
SEARCH_OPTION = TSearchOption.soTopDirectoryOnly;
PREDICATE = nil;
var
interfaceList: TStringDynArray;
idxInterface: Integer;
interfaceName: String;
begin
interfaceList := TDirectory.GetDirectories(GetCameraDirectory, SEARCH_OPTION,
PREDICATE);
AInterfaceNameList.Clear;
for idxInterface := Low(interfaceList) to High(interfaceList) do
begin
interfaceName := ExtractFileName(InterfaceList[idxInterface]);
AInterfaceNameList.Add(interfaceName);
end;
end;
| stackoverflow | {
"language": "en",
"length": 600,
"provenance": "stackexchange_0000F.jsonl.gz:902054",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44658284"
} |
a6de1b47bfe1504e683ed85d43112b07e001f190 | Stackoverflow Stackexchange
| Q: API pagination, how to implement page token paging method? I am working on implementing an API interface for my project.
As far as I know, there are different ways to paginate through the results, like the following:
https://example.com/api/purchaseorders?page=2&pagesize=25
But, i see many APIs like google use a different approach, in which they use a "pageToken" to let the user move between the pages of results, for example:
https://example.com/api/purchaseorders?pagesize=25&pageToken=ClkKHgoRc291cmNlX2NyZWF0ZWRfYXQSCQjA67Si5sr
So instead of page=2 they used pageToken=[token].
It is not clear for me the idea of pageToken and how to implement it.
It will be helpful if you guide me to any resources so i can get more knowledge.
Thank you.
A: Here's a very simple standalone example using the filesystem as a keyvalue store (since a filesystem will always be available).
$requestParameters = [];
if (($token = filter_input(INPUT_GET,"pageToken")) && is_readable("/tmp/$token")) {
    $requestParameters = unserialize(file_get_contents("/tmp/$token")); // stored with serialize() below, so unserialize here
} else {
$requestParameters = [
"q" => filter_input(INPUT_GET,"q"),
"pageSize" => filter_input(INPUT_GET,"pageSize",FILTER_VALIDATE_INT),
"page" => filter_input(INPUT_GET,"page",FILTER_VALIDATE_INT)
];
}
$nextPageRequestParameters = $requestParameters;
$nextPageRequestParameters["page"]++;
$nextPageToken = md5(serialize($nextPageRequestParameters)); //This is not ideal but at least people can't guess it easily.
file_put_contents("/tmp/$nextPageToken", serialize($nextPageRequestParameters));
//Do request using $requestParameters
$result = [ "nextPageToken" => $nextPageToken, "data" => $resultData ];
echo json_encode($result);
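The idea is language-agnostic: the token is just an opaque encoding of the query state that only the server can interpret. Where the PHP example above keeps the state server-side keyed by a hash, the stateless variant encodes the state into the token itself - a minimal Python sketch (a real API would additionally sign or encrypt the payload so clients can't tamper with it):

```python
import base64
import json

def encode_page_token(state):
    # opaque to the client: just base64 of the serialized query state
    raw = json.dumps(state, sort_keys=True).encode("utf-8")
    return base64.urlsafe_b64encode(raw).decode("ascii")

def decode_page_token(token):
    raw = base64.urlsafe_b64decode(token.encode("ascii"))
    return json.loads(raw.decode("utf-8"))

state = {"q": "purchaseorders", "pageSize": 25, "page": 3}
token = encode_page_token(state)
print(token)
print(decode_page_token(token) == state)  # True
```

The server returns `token` as `nextPageToken`; the client echoes it back, and the server recovers exactly the parameters needed to produce the next page.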
| stackoverflow | {
"language": "en",
"length": 197,
"provenance": "stackexchange_0000F.jsonl.gz:902066",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44658338"
} |
efa52491c3f3dff144480ba9057a9140d1dd9605 | Stackoverflow Stackexchange
| Q: RxJava Scheduler to observe on main thread If I write something like this, then both the operation and notification will be on the current thread...
Observable.fromCallable(() -> "Do Something")
.subscribe(System.out::println);
If I do the operation on a background thread like this, then both the operation and notification will be on a background thread...
Observable.fromCallable(() -> "Do Something")
.subscribeOn(Schedulers.io())
.subscribe(System.out::println);
If I want to observe on the main thread and do in the background in Android I would do...
Observable.fromCallable(() -> "Do Something")
.subscribeOn(Schedulers.io())
.observeOn(AndroidSchedulers.mainThread())
.subscribe(System.out::println);
But If I was writing a standard Java program, what is the equivalent to state that you want to observe on the main thread?
A: Convert the Observable to a BlockingObservable via .toBlocking(); this gives you blocking methods to wait for completion, get one item, etc.
A: For RxJava2 use "blockingSubscribe()"
Flowable.fromArray(1, 2, 3)
.subscribeOn(Schedulers.computation())
.blockingSubscribe(integer -> {
System.out.println(Thread.currentThread().getName());
});
| stackoverflow | {
"language": "en",
"length": 146,
"provenance": "stackexchange_0000F.jsonl.gz:902078",
"question_score": "21",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44658357"
} |
9eb2fec27751f224b6787e69a68b2313871360b2 | Stackoverflow Stackexchange
| Q: Kivy-how can i make my canvas height smaller than the parent height I have a stacklayout with a canvas and an image. my image has a size_hint of .1 and I want my canvas to have the same height as my image.
.kv file:
StackLayout:
orientation: 'lr-tb'
canvas:
Color:
rgba: 1,1,1,1
Rectangle:
pos: self.pos
size: self.size
Image:
size_hint_y: .1
source: 'Images\login\cptbanner.jpg'
allow_stretch: True
keep_ratio: True
what can I do to get the desired effect?
A: A Kivy canvas is not a widget or the space in which you paint. It is only a set of instructions. You can't resize it. I guess you want to resize the drawn rectangle. I'm not sure what outcome you expect, but you can resize the rectangle using the width and height attributes of the parent widget:
from kivy.app import App
from kivy.base import Builder
from kivy.uix.boxlayout import BoxLayout
Builder.load_string("""
<MainWindow>:
StackLayout:
id : aa
orientation: 'lr-tb'
canvas:
Color:
rgba: 1,1,1,1
Rectangle:
pos: 0, self.height*0.9
size: self.width, self.height*0.1
Image:
id: im
size_hint_y: 0.1
source: 'Images\login\cptbanner.jpg'
allow_stretch: True
keep_ratio: True
""")
class MainWindow(BoxLayout):
def __init__(self, **kwargs):
super(MainWindow, self).__init__(**kwargs)
class MyApp(App):
def build(self):
return MainWindow()
if __name__ == '__main__':
MyApp().run()
Result:
| stackoverflow | {
"language": "en",
"length": 197,
"provenance": "stackexchange_0000F.jsonl.gz:902081",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44658365"
} |
2548223f7e20bad754d57d59f410013a7326b188 | Stackoverflow Stackexchange
Q: Set DateTimeIndex in existing DataFrame I have an existing dataframe, df, with the following structure:
tick_id stock_ticker tick_datetime price volume
0 160939 A2M AU Equity 2016-10-19 09:00:00 450.0 79700
1 160940 A2M AU Equity 2016-10-19 09:00:01 450.0 100
2 160941 A2M AU Equity 2016-10-19 09:00:01 450.0 2500
3 160942 A2M AU Equity 2016-10-19 09:00:01 451.0 200
What I am looking to do is set "tick_datetime" as the index of the dataframe, so that it becomes a DatetimeIndex for easier data manipulation later.
However, executing the following command yields an unexpected result.
df.set_index('tick_datetime')
What is the correct way to achieve my desired outcome?
A: Try:
df['tick_datetime'] = pd.to_datetime(df['tick_datetime'])
df.set_index('tick_datetime',inplace=True)
or:
df['tick_datetime'] = pd.to_datetime(df['tick_datetime'])
df = df.set_index('tick_datetime')
| Q: Set DateTimeIndex in existing DataFrame I have an existing dataframe, df, with the following structure:
tick_id stock_ticker tick_datetime price volume
0 160939 A2M AU Equity 2016-10-19 09:00:00 450.0 79700
1 160940 A2M AU Equity 2016-10-19 09:00:01 450.0 100
2 160941 A2M AU Equity 2016-10-19 09:00:01 450.0 2500
3 160942 A2M AU Equity 2016-10-19 09:00:01 451.0 200
What I am looking to do is set "tick_datetime" as the index of the dataframe, so that it becomes a DatetimeIndex for easier data manipulation later.
However, executing the following command yields an unexpected result.
df.set_index('tick_datetime')
What is the correct way to achieve my desired outcome?
A: Try:
df['tick_datetime'] = pd.to_datetime(df['tick_datetime'])
df.set_index('tick_datetime',inplace=True)
or:
df['tick_datetime'] = pd.to_datetime(df['tick_datetime'])
df = df.set_index('tick_datetime')
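The two steps can be seen end to end in a minimal, self-contained sketch (values abbreviated from the question's table); note that set_index returns a new DataFrame unless inplace=True is used, so the result must be reassigned:

```python
import pandas as pd

# Small frame with columns from the question (values abbreviated).
df = pd.DataFrame({
    "tick_id": [160939, 160940],
    "tick_datetime": ["2016-10-19 09:00:00", "2016-10-19 09:00:01"],
    "price": [450.0, 450.0],
})

df["tick_datetime"] = pd.to_datetime(df["tick_datetime"])  # parse strings
df = df.set_index("tick_datetime")  # reassign: set_index returns a copy

print(type(df.index).__name__)  # DatetimeIndex
```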
| stackoverflow | {
"language": "en",
"length": 119,
"provenance": "stackexchange_0000F.jsonl.gz:902104",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44658451"
} |
5ac71b3b5590028f586fa0d84b4449117c8bc747 | Stackoverflow Stackexchange
Q: Labeling indexes on a dataframe I have a multi-layer index in a dataframe. When I run
print(len(b.index.names))
I get 3. When I run
print(b.index.names)
I get [None, None, None].
How do I give each of the above index levels a unique name?
A: Either
b.rename_axis(['X', 'Y', 'Z'])
Or
b.index.names = ['X', 'Y', 'Z']
| Q: Labeling indexes on a dataframe I have a multi-layer index in a dataframe. When I run
print(len(b.index.names))
I get 3. When I run
print(b.index.names)
I get [None, None, None].
How do I give each of the above index levels a unique name?
A: Either
b.rename_axis(['X', 'Y', 'Z'])
Or
b.index.names = ['X', 'Y', 'Z']
A: You can also assign the names with a list comprehension, so the levels are named index_1, index_2, index_3, and so on for however many levels the index has:
b.index.names = ["index_" + str(i+1) for i in range(len(b.index.names))]
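Both approaches from the answers can be verified in a short sketch (the level names here are arbitrary examples):

```python
import pandas as pd

# A frame with an unnamed 3-level MultiIndex, like the one in the question.
idx = pd.MultiIndex.from_tuples([("a", 1, True), ("b", 2, False)])
b = pd.DataFrame({"val": [10, 20]}, index=idx)
print(b.index.names)  # FrozenList([None, None, None])

b.index.names = ["X", "Y", "Z"]     # in-place assignment
b = b.rename_axis(["U", "V", "W"])  # or: returns a renamed copy
print(list(b.index.names))  # ['U', 'V', 'W']
```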
| stackoverflow | {
"language": "en",
"length": 88,
"provenance": "stackexchange_0000F.jsonl.gz:902126",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44658505"
} |
7b5b29ddc93c67fd27dedda9b45b44b9ccad7e5d | Stackoverflow Stackexchange
Q: Setting up SonarQube on AWS using EC2 Trying to setup SonarQube on EC2 using what should be basic install settings.
*
*List item
*Setup a standard EC2 AWS LINUX Ami attached to M4 large
*SSH into EC2 instance
*Install JAVA
*Set to use JAVA8
*wget https://sonarsource.bintray.com/Distribution/sonarqube/sonarqube-6.4.zip
*unzip into the /etc dir
*run sudo ./sonar.sh start
*Instance starts
But when I try to go to the app it never comes up when I try either the IPv4 Public IP 187.187.87.87:9000 (ex not real IP) or try ec2-134-73-134-114.compute-1.amazonaws.com:9000 (not real IP either just for example)
Perhaps it is my ignorance or me not configuring something correctly as it pertains to the initial EC2 setup.
If anyone has any ideas, please let me know.
A: The issue was that SonarQube's default port is 9000, and by default this port is not open in the security group unless you apply the default security group, in which all ports are open (which is not recommended).
As suggested in the comment by @Issac, opening port 9000 in the AWS security group settings of the instance, to allow incoming requests to SonarQube, solved the issue.
| Q: Setting up SonarQube on AWS using EC2 Trying to setup SonarQube on EC2 using what should be basic install settings.
*
*List item
*Setup a standard EC2 AWS LINUX Ami attached to M4 large
*SSH into EC2 instance
*Install JAVA
*Set to use JAVA8
*wget https://sonarsource.bintray.com/Distribution/sonarqube/sonarqube-6.4.zip
*unzip into the /etc dir
*run sudo ./sonar.sh start
*Instance starts
But when I try to go to the app it never comes up when I try either the IPv4 Public IP 187.187.87.87:9000 (ex not real IP) or try ec2-134-73-134-114.compute-1.amazonaws.com:9000 (not real IP either just for example)
Perhaps it is my ignorance or me not configuring something correctly as it pertains to the initial EC2 setup.
If anyone has any ideas, please let me know.
A: The issue was that SonarQube's default port is 9000, and by default this port is not open in the security group unless you apply the default security group, in which all ports are open (which is not recommended).
As suggested in the comment by @Issac, opening port 9000 in the AWS security group settings of the instance, to allow incoming requests to SonarQube, solved the issue.
A: You need to have a database and give the db permissions in the sonar.properties file in Sonar, and you need to open the firewalls.
| stackoverflow | {
"language": "en",
"length": 208,
"provenance": "stackexchange_0000F.jsonl.gz:902127",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44658510"
} |
e6baff1f16fbadecda00a014985249e7a4cc1eaa | Stackoverflow Stackexchange
Q: How to run task scheduler every day at the same time in R? How can I set a script to run automatically every day at 11:00pm?
I found the package "taskscheduleR", but I don't know how to run my script with it.
taskscheduleR example:
myscript <- system.file("extdata", "helloworld.R", package = "taskscheduleR")
## run script once within 62 seconds taskscheduler_create(taskname = "myfancyscript", rscript = myscript, schedule = "ONCE", starttime = format(Sys.time() + 62, "%H:%M"))
My script
dayfile <- read.csv("A:/file_170611.txt", sep = " ", header=F, stringsAsFactors = F)
write.table(dayfile, file="A:/dayfiles/dayfile.txt", sep = " ")
A: The README of taskscheduleR looks quite on the point:
library(taskscheduleR)
myscript <- "A:/script.R" # path to script
taskscheduler_create(taskname = "myscriptdaily", rscript = myscript,
schedule = "DAILY", starttime = "09:10",
startdate = format(Sys.Date()+1, "%d/%m/%Y")
)
and you are done.
| Q: How to run task scheduler every day at the same time in R? How can I set a script to run automatically every day at 11:00pm?
I found the package "taskscheduleR", but I don't know how to run my script with it.
taskscheduleR example:
myscript <- system.file("extdata", "helloworld.R", package = "taskscheduleR")
## run script once within 62 seconds taskscheduler_create(taskname = "myfancyscript", rscript = myscript, schedule = "ONCE", starttime = format(Sys.time() + 62, "%H:%M"))
My script
dayfile <- read.csv("A:/file_170611.txt", sep = " ", header=F, stringsAsFactors = F)
write.table(dayfile, file="A:/dayfiles/dayfile.txt", sep = " ")
A: The README of taskscheduleR looks quite on the point:
library(taskscheduleR)
myscript <- "A:/script.R" # path to script
taskscheduler_create(taskname = "myscriptdaily", rscript = myscript,
schedule = "DAILY", starttime = "09:10",
startdate = format(Sys.Date()+1, "%d/%m/%Y")
)
and you are done.
| stackoverflow | {
"language": "en",
"length": 130,
"provenance": "stackexchange_0000F.jsonl.gz:902143",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44658552"
} |
26250d7f23093b7d4261299da236559990e89fc6 | Stackoverflow Stackexchange
Q: Best way to generate role based sidebar navigation with React Router What is the best way to handle a role-based navigation sidebar with React Router? Right now I am repeating myself by having two sidebars. I was thinking about mapping over the links to generate them, but didn't know if React Router already had some functionality to generate Links based on a role condition. If not, is there a better approach than something like this below?
Currently I am just hardcoding the sidebar links instead of generating them dynamically.
```
const routes = _.map(props.links, (link) => {
if (link.roles.includes(someRoleOnUser)) {
return (<div><Link .... /></div>);
}
})
```
| Q: Best way to generate role based sidebar navigation with React Router What is the best way to handle a role-based navigation sidebar with React Router? Right now I am repeating myself by having two sidebars. I was thinking about mapping over the links to generate them, but didn't know if React Router already had some functionality to generate Links based on a role condition. If not, is there a better approach than something like this below?
Currently I am just hardcoding the sidebar links instead of generating them dynamically.
```
const routes = _.map(props.links, (link) => {
if (link.roles.includes(someRoleOnUser)) {
return (<div><Link .... /></div>);
}
})
```
| stackoverflow | {
"language": "en",
"length": 111,
"provenance": "stackexchange_0000F.jsonl.gz:902161",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44658604"
} |
c0181d4a5fba76aa097729c94585a9037a3419f3 | Stackoverflow Stackexchange
Q: Looking for source code for Crafter Deployer 2.5.3 I have an instance of Crafter running with crafter-studio-publishing-receiver-2.5.3-aio.jar, and I need to locate the source code for the jar file.
Is this the right repository?
What is the significance of the word "legacy" in the name of the project?
A: You can find the source code here:
https://github.com/craftercms/legacy-deployer
The specific version can be found by checking the manifest of the jar.
- unzip the jar
- open ./META-INF/MANIFEST.MF
- locate the property Implementation-Build: 87c84d58313b2bcbdca306de69758320aee174d0
This value can be placed in github to get the exact code you are looking for.
Example:
https://github.com/craftercms/legacy-deployer/blob/87c84d58313b2bcbdca306de69758320aee174d0/cstudio-publishing-receiver-zip/pom.xml
The reason we renamed the project "legacy-deployer" in github is that with Crafter 3.x we are moving to a new deployment system. Without going too deep on this: The new system is based on Git pulls, as you can imagine, this approach has many benefits. It will support the same concepts (callbacks etc) as the now "legacy" deployer.
| Q: Looking for source code for Crafter Deployer 2.5.3 I have an instance of Crafter running with crafter-studio-publishing-receiver-2.5.3-aio.jar, and I need to locate the source code for the jar file.
Is this the right repository?
What is the significance of the word "legacy" in the name of the project?
A: You can find the source code here:
https://github.com/craftercms/legacy-deployer
The specific version can be found by checking the manifest of the jar.
- unzip the jar
- open ./META-INF/MANIFEST.MF
- locate the property Implementation-Build: 87c84d58313b2bcbdca306de69758320aee174d0
This value can be placed in github to get the exact code you are looking for.
Example:
https://github.com/craftercms/legacy-deployer/blob/87c84d58313b2bcbdca306de69758320aee174d0/cstudio-publishing-receiver-zip/pom.xml
The reason we renamed the project "legacy-deployer" in github is that with Crafter 3.x we are moving to a new deployment system. Without going too deep on this: The new system is based on Git pulls, as you can imagine, this approach has many benefits. It will support the same concepts (callbacks etc) as the now "legacy" deployer.
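The manifest lookup described in the answer can also be scripted. A minimal Python sketch, using an in-memory zip as a stand-in for the real jar (with an actual jar you would pass its path to zipfile.ZipFile instead):

```python
import io
import zipfile

# Build a tiny stand-in "jar" so the example is self-contained.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    jar.writestr(
        "META-INF/MANIFEST.MF",
        "Manifest-Version: 1.0\n"
        "Implementation-Build: 87c84d58313b2bcbdca306de69758320aee174d0\n",
    )

# Read the manifest back and pull out the build property.
with zipfile.ZipFile(buf) as jar:
    manifest = jar.read("META-INF/MANIFEST.MF").decode()

build = next(
    line.split(":", 1)[1].strip()
    for line in manifest.splitlines()
    if line.startswith("Implementation-Build")
)
print(build)  # the commit hash to look up on GitHub
```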
| stackoverflow | {
"language": "en",
"length": 159,
"provenance": "stackexchange_0000F.jsonl.gz:902193",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44658731"
} |
5fc1b49feeef76f2548c17d08bf817ccedc319ef | Stackoverflow Stackexchange
Q: How can I get resharper to know that my variable is not null after I have called an extension method on it? I have an extension method:
public static bool Exists(this object toCheck)
{
return toCheck != null;
}
if I use it and then do something like this:
if (duplicate.Exists())
throw new Exception(duplicate);
then resharper warns me that there is a possible null reference exception.
I know this is not possible, but how can I tell resharper that this is ok?
A: You can do it with contract annotations, but the way provided in another answer does not work for me (that is - still produces a warning). But this one works:
public static class Extensions {
[ContractAnnotation("null => false; notnull => true")]
public static bool Exists(this object toCheck) {
return toCheck != null;
}
}
To get ContractAnnotationAttribute - recommended way is to install JetBrains.Annotations nuget package. If you don't want to install package - go to Resharper > Options > Code Annotations and press "copy implementation to clipboard" button, then paste it anywhere in your project (ensure to not change namespace).
| Q: How can I get resharper to know that my variable is not null after I have called an extension method on it? I have an extension method:
public static bool Exists(this object toCheck)
{
return toCheck != null;
}
if I use it and then do something like this:
if (duplicate.Exists())
throw new Exception(duplicate);
then resharper warns me that there is a possible null reference exception.
I know this is not possible, but how can I tell resharper that this is ok?
A: You can do it with contract annotations, but the way provided in another answer does not work for me (that is - still produces a warning). But this one works:
public static class Extensions {
[ContractAnnotation("null => false; notnull => true")]
public static bool Exists(this object toCheck) {
return toCheck != null;
}
}
To get ContractAnnotationAttribute - recommended way is to install JetBrains.Annotations nuget package. If you don't want to install package - go to Resharper > Options > Code Annotations and press "copy implementation to clipboard" button, then paste it anywhere in your project (ensure to not change namespace).
A: You can use "Contract Annotation Syntax" to indicate to Resharper that a method does not return normally under some circumstances, e.g. when a parameter is null.
For your example you can do something like this:
[ContractAnnotation("toCheck:notnull => true")]
public static bool Exists(this object toCheck)
{
return toCheck != null;
}
Where toCheck:notnull => true tells ReSharper that if toCheck is not null, the method will return true.
[EDIT] Updated link to point to the most recent Resharper documentation.
| stackoverflow | {
"language": "en",
"length": 265,
"provenance": "stackexchange_0000F.jsonl.gz:902198",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44658747"
} |
7b57aa9b9a645b2c654cc04200946215ab8d7bec | Stackoverflow Stackexchange
Q: How can I make this line work in jest? => jasmine.clock().install(); I have a unit test that uses jasmine.clock().install().
I have the following error using jest-cli 20.0.4
TypeError: jasmine.clock is not a function
What package should I have in order to have this line work in my unit test :
jasmine.clock().install();
I managed to make it work by downgrading to jest-cli 19.0.1. It would be nice to know the upgrade procedure.
A: From the docs jasmine.clock().install(); is needed to mock out setTimeout calls. So this can be done in Jest by using jest.useFakeTimers();. Have a look at the docs on how to mock timer in Jest. Also have a look at the announcement of v20 to see why the Jasmine stuff does not work anymore
| Q: How can I make this line work in jest? => jasmine.clock().install(); I have a unit test that uses jasmine.clock().install().
I have the following error using jest-cli 20.0.4
TypeError: jasmine.clock is not a function
What package should I have in order to have this line work in my unit test :
jasmine.clock().install();
I managed to make it work by downgrading to jest-cli 19.0.1. It would be nice to know the upgrade procedure.
A: From the docs jasmine.clock().install(); is needed to mock out setTimeout calls. So this can be done in Jest by using jest.useFakeTimers();. Have a look at the docs on how to mock timer in Jest. Also have a look at the announcement of v20 to see why the Jasmine stuff does not work anymore
| stackoverflow | {
"language": "en",
"length": 127,
"provenance": "stackexchange_0000F.jsonl.gz:902246",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44658894"
} |
00dba493bba5dd035145c0b9cd24eb8d68d9a38b | Stackoverflow Stackexchange
Q: Android foreground service notification not showing I am trying to start a foreground service. I get notified that the service does start but the notification always gets suppressed. I double checked that the app is allowed to show notifications in the app info on my device. Here is my code:
private void showNotification() {
Intent notificationIntent = new Intent(this, MainActivity.class);
notificationIntent.setAction(Constants.ACTION.MAIN_ACTION);
notificationIntent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK
| Intent.FLAG_ACTIVITY_CLEAR_TASK);
PendingIntent pendingIntent = PendingIntent.getActivity(this, 0,
notificationIntent, 0);
Bitmap icon = BitmapFactory.decodeResource(getResources(),
R.mipmap.ic_launcher);
Notification notification = new NotificationCompat.Builder(getApplicationContext())
.setContentTitle("Revel Is Running")
.setTicker("Revel Is Running")
.setContentText("Click to stop")
.setSmallIcon(R.mipmap.ic_launcher)
//.setLargeIcon(Bitmap.createScaledBitmap(icon, 128, 128, false))
.setContentIntent(pendingIntent)
.setOngoing(true).build();
startForeground(Constants.FOREGROUND_SERVICE,
notification);
Log.e(TAG,"notification shown");
}
Here is the only error I see in relation:
06-20 12:26:43.635 895-930/? E/NotificationService: Suppressing notification from the package by user request.
A: For me everything was set correctly (also added FOREGROUND_SERVICE permission to manifest),
but I just needed to uninstall the app and reinstall it.
| Q: Android foreground service notification not showing I am trying to start a foreground service. I get notified that the service does start but the notification always gets suppressed. I double checked that the app is allowed to show notifications in the app info on my device. Here is my code:
private void showNotification() {
Intent notificationIntent = new Intent(this, MainActivity.class);
notificationIntent.setAction(Constants.ACTION.MAIN_ACTION);
notificationIntent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK
| Intent.FLAG_ACTIVITY_CLEAR_TASK);
PendingIntent pendingIntent = PendingIntent.getActivity(this, 0,
notificationIntent, 0);
Bitmap icon = BitmapFactory.decodeResource(getResources(),
R.mipmap.ic_launcher);
Notification notification = new NotificationCompat.Builder(getApplicationContext())
.setContentTitle("Revel Is Running")
.setTicker("Revel Is Running")
.setContentText("Click to stop")
.setSmallIcon(R.mipmap.ic_launcher)
//.setLargeIcon(Bitmap.createScaledBitmap(icon, 128, 128, false))
.setContentIntent(pendingIntent)
.setOngoing(true).build();
startForeground(Constants.FOREGROUND_SERVICE,
notification);
Log.e(TAG,"notification shown");
}
Here is the only error I see in relation:
06-20 12:26:43.635 895-930/? E/NotificationService: Suppressing notification from the package by user request.
A: For me everything was set correctly (also added FOREGROUND_SERVICE permission to manifest),
but I just needed to uninstall the app and reinstall it.
A: If none of the above worked you should check if your notification id is 0 ...
SURPRISE!! it cannot be 0.
Many thanks to @Luka Kama for this post
startForeground(0, notification); // Doesn't work...
startForeground(1, notification); // Works!!!
A: It's because of the Android O background service restrictions.
So now you need to call startForeground() only for services that were started with startForegroundService(), and you need to call it within the first 5 seconds after the service has been started.
Here is the guide - https://developer.android.com/about/versions/oreo/background#services
Like this:
//Start service:
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
startForegroundService(new Intent(this, YourService.class));
} else {
startService(new Intent(this, YourService.class));
}
Then create and show notification (with channel as supposed earlier):
private void createAndShowForegroundNotification(Service yourService, int notificationId) {
final NotificationCompat.Builder builder = getNotificationBuilder(yourService,
"com.example.your_app.notification.CHANNEL_ID_FOREGROUND", // Channel id
NotificationManagerCompat.IMPORTANCE_LOW); //Low importance prevent visual appearance for this notification channel on top
builder.setOngoing(true)
.setSmallIcon(R.drawable.small_icon)
.setContentTitle(yourService.getString(R.string.title))
.setContentText(yourService.getString(R.string.content));
Notification notification = builder.build();
yourService.startForeground(notificationId, notification);
if (notificationId != lastShownNotificationId) {
// Cancel previous notification
final NotificationManager nm = (NotificationManager) yourService.getSystemService(Activity.NOTIFICATION_SERVICE);
nm.cancel(lastShownNotificationId);
}
lastShownNotificationId = notificationId;
}
public static NotificationCompat.Builder getNotificationBuilder(Context context, String channelId, int importance) {
NotificationCompat.Builder builder;
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
prepareChannel(context, channelId, importance);
builder = new NotificationCompat.Builder(context, channelId);
} else {
builder = new NotificationCompat.Builder(context);
}
return builder;
}
@TargetApi(26)
private static void prepareChannel(Context context, String id, int importance) {
final String appName = context.getString(R.string.app_name);
String description = context.getString(R.string.notifications_channel_description);
final NotificationManager nm = (NotificationManager) context.getSystemService(Activity.NOTIFICATION_SERVICE);
if(nm != null) {
NotificationChannel nChannel = nm.getNotificationChannel(id);
if (nChannel == null) {
nChannel = new NotificationChannel(id, appName, importance);
nChannel.setDescription(description);
nm.createNotificationChannel(nChannel);
}
}
}
Remember that your foreground notification will have the same state as your other notifications even if you'll use different channel ids, so it might be hidden as a group with others. Use different groups to avoid it.
A: The problem was that I am using Android O, which requires more information. Here is working code for Android O.
mNotifyManager = (NotificationManager) mActivity.getSystemService(Context.NOTIFICATION_SERVICE);
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) createChannel(mNotifyManager);
mBuilder = new NotificationCompat.Builder(mActivity, "YOUR_TEXT_HERE").setSmallIcon(android.R.drawable.stat_sys_download).setColor
(ContextCompat.getColor(mActivity, R.color.colorNotification)).setContentTitle(YOUR_TITLE_HERE).setContentText(YOUR_DESCRIPTION_HERE);
mNotifyManager.notify(mFile.getId().hashCode(), mBuilder.build());
@TargetApi(26)
private void createChannel(NotificationManager notificationManager) {
String name = "FileDownload";
String description = "Notifications for download status";
int importance = NotificationManager.IMPORTANCE_DEFAULT;
NotificationChannel mChannel = new NotificationChannel(name, name, importance);
mChannel.setDescription(description);
mChannel.enableLights(true);
mChannel.setLightColor(Color.BLUE);
notificationManager.createNotificationChannel(mChannel);
}
A: If you are targeting Android 9 (Pie), API level 28 or higher, then you should declare the FOREGROUND_SERVICE permission in the manifest file. See this link: https://developer.android.com/about/versions/pie/android-9.0-migration#bfa
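The manifest entry that answer refers to is a single line inside AndroidManifest.xml (shown here as a sketch; place it alongside your other uses-permission declarations):

```xml
<uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
```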
A: I can not believe it. In my case, after adding 'android:name=".App"' to AndroidManifest.xml, the notification started showing.
Example:
<application
android:name=".App"
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
A: For Android API level 33+ you need to request POST_NOTIFICATIONS runtime permission. Although this doesn't prevent the foreground service from running, it's still mandatory to notify as we did for < API 33:
Note: Apps don't need to request the POST_NOTIFICATIONS permission in order to launch a foreground service. However, apps must include a notification when they start a foreground service, just as they do on previous versions of Android.
See more in Android Documentation.
A: In my case, it was caused by me using IntentService.
In short, if you want a foreground service then subclass Service.
| stackoverflow | {
"language": "en",
"length": 662,
"provenance": "stackexchange_0000F.jsonl.gz:902253",
"question_score": "27",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44658923"
} |
45b6bd7a4a4c856a53c75df1911c13061d0f34a3 | Stackoverflow Stackexchange
Q: how to secure/httponly cookies I have configured nginx as a proxy server and my backend is Tomcat 8.5 on RedHat 7. I have configured SSL in nginx. I would like the cookie to be secure and httponly, but my header shows only JSESSIONID in the cookie.
My result: Cookie:JSESSIONID=D442DD723352EA8354E4D .
I'm looking for the result below:
click on here- pic 1
I have followed the solution below, but it did not work for httpOnly.
https://geekflare.com/secure-cookie-flag-in-tomcat/
| Q: how to secure/httponly cookies I have configured nginx as a proxy server and my backend is Tomcat 8.5 on RedHat 7. I have configured SSL in nginx. I would like the cookie to be secure and httponly, but my header shows only JSESSIONID in the cookie.
My result: Cookie:JSESSIONID=D442DD723352EA8354E4D .
I'm looking for the result below:
click on here- pic 1
I have followed the solution below, but it did not work for httpOnly.
https://geekflare.com/secure-cookie-flag-in-tomcat/
| stackoverflow | {
"language": "en",
"length": 72,
"provenance": "stackexchange_0000F.jsonl.gz:902255",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44658927"
} |
cc9a1717c0cf0358c80b19186ed9b320b5606f46 | Stackoverflow Stackexchange
Q: Figure numbers according to sections and subsections I want to generate the figure numbers depending on the sections, for example, if section number is 1.1 then I want to generate the figure numbers as 1.1.1, 1.1.2 and so on.
Thanks in advance
A: Add
\renewcommand{\thefigure}{\arabic{section}.\arabic{subsection}.\arabic{figure}}
to your document. This will redefine the figure number to contain the section and subsection number.
MWE:
\documentclass{article}
\usepackage{graphicx}
\renewcommand{\thefigure}{\arabic{section}.\arabic{subsection}.\arabic{figure}}
\begin{document}
\section{title}
\subsection{subtitle}
\begin{figure}[htbp]
\includegraphics[width=\textwidth]{example-image-duck}
\caption{content...}
\end{figure}
\end{document}
| Q: Figure numbers according to sections and subsections I want to generate the figure numbers depending on the sections, for example, if section number is 1.1 then I want to generate the figure numbers as 1.1.1, 1.1.2 and so on.
Thanks in advance
A: Add
\renewcommand{\thefigure}{\arabic{section}.\arabic{subsection}.\arabic{figure}}
to your document. This will redefine the figure number to contain the section and subsection number.
MWE:
\documentclass{article}
\usepackage{graphicx}
\renewcommand{\thefigure}{\arabic{section}.\arabic{subsection}.\arabic{figure}}
\begin{document}
\section{title}
\subsection{subtitle}
\begin{figure}[htbp]
\includegraphics[width=\textwidth]{example-image-duck}
\caption{content...}
\end{figure}
\end{document}
A: Add
\usepackage{chngcntr}
\counterwithin{figure}{section}
to your preamble.
Reference:
*
*Continuous v. per-chapter/section numbering of figures, tables, and other document elements
| stackoverflow | {
"language": "en",
"length": 94,
"provenance": "stackexchange_0000F.jsonl.gz:902277",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44658990"
} |
23bee9878131afce0fd0c8e07eeedc41fa8f1714 | Stackoverflow Stackexchange
Q: Regex detect any repeated character but with optional whitespace between So currently I've got the following regex pattern, allowing me to detect any string containing 9 characters that are the same consecutively.
/^.*(\S)\1{9,}.*$/
This works perfectly with a string like the following: this a tesssssssssst. However, I wish for it to also detect a string like this: this a tess sss ssssst (the same number of repeated characters, but with optional whitespace in between).
Any ideas?
A: You need to put the backreference into a group and add an optional space into the group:
^.*(\S)(?: ?\1){9,}.*$
See the regex demo. If there can be more than 1 space in between, replace ? with *.
The .*$ part is only needed if you need to get the whole line match, for methods that allow partial matches, you may use ^.*(\S)(?: ?\1){9,}.
If any whitespace is meant, replace the space with \s in the pattern.
| Q: Regex detect any repeated character but with optional whitespace between So currently I've got the following regex pattern, allowing me to detect any string containing 9 characters that are the same consecutively.
/^.*(\S)\1{9,}.*$/
This works perfectly with a string like the following: this a tesssssssssst. However, I wish for it to also detect a string like this: this a tess sss ssssst (the same number of repeated characters, but with optional whitespace in between).
Any ideas?
A: You need to put the backreference into a group and add an optional space into the group:
^.*(\S)(?: ?\1){9,}.*$
See the regex demo. If there can be more than 1 space in between, replace ? with *.
The .*$ part is only needed if you need to get the whole line match, for methods that allow partial matches, you may use ^.*(\S)(?: ?\1){9,}.
If any whitespace is meant, replace the space with \s in the pattern.
A: You can check more than a single character this way.
It's only limited by the number of capture groups available.
This one checks for 1 - 3 characters.
(\S)[ ]*(\S)?[ ]*(\S)?(?:[ ]*(?:\1[ ]*\2[ ]*\3)){9,}
http://regexr.com/3g709
# 1-3 Characters
( \S ) # (1)
[ ]*
( \S )? # (2)
[ ]*
( \S )? # (3)
# Add more here
(?:
[ ]*
(?: \1 [ ]* \2 [ ]* \3 )
# Add more here
){9,}
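The accepted single-character pattern can be checked quickly in Python (the test strings are the ones from the question plus one negative case):

```python
import re

# A non-space char, then at least 9 more copies of it,
# each optionally preceded by a single space.
pattern = re.compile(r"^.*(\S)(?: ?\1){9,}.*$")

print(bool(pattern.match("this a tesssssssssst")))    # True
print(bool(pattern.match("this a tess sss ssssst")))  # True
print(bool(pattern.match("this is a normal test")))   # False
```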
| stackoverflow | {
"language": "en",
"length": 231,
"provenance": "stackexchange_0000F.jsonl.gz:902290",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44659031"
} |
eb6a0b841229badd0b19df6f6edb779fb30a99af | Stackoverflow Stackexchange
Q: MatLab help! Error using plot3 Not enough input arguments fid = fopen('datafile.txt','r');
data = textscan(fid, '%f %f %f %f %f %f %f %f %f %f %f %f %f %f %f %f %f %f %f %f %f %f %f %f');
plot3(data(:,5),data(:,6),data(:,7))
fclose(fid);
I am getting the error:
Error using plot3
Not enough input arguments.
Where am I going wrong here? my data file is just columns of doubles (hence %f)
A: This is one of those cases where the error isn't very informative. The problem here isn't that there aren't enough input arguments, it's that they are of the wrong type...
Your problem is that textscan actually returns the loaded data in a 1-by-N cell array, where N is the number of columns (i.e. format specifiers, like %f) in your file. Each cell holds one column of data. You need to extract the contents of the cells using curly braces in order to pass it to plot3, like so:
plot3(data{5}, data{6}, data{7});
Q: MatLab help! Error using plot3 Not enough input arguments fid = fopen('datafile.txt','r');
data = textscan(fid, '%f %f %f %f %f %f %f %f %f %f %f %f %f %f %f %f %f %f %f %f %f %f %f %f');
plot3(data(:,5),data(:,6),data(:,7))
fclose(fid);
I am getting the error:
Error using plot3
Not enough input arguments.
Where am I going wrong here? my data file is just columns of doubles (hence %f)
A: This is one of those cases where the error isn't very informative. The problem here isn't that there aren't enough input arguments, it's that they are of the wrong type...
Your problem is that textscan actually returns the loaded data in a 1-by-N cell array, where N is the number of columns (i.e. format specifiers, like %f) in your file. Each cell holds one column of data. You need to extract the contents of the cells using curly braces in order to pass it to plot3, like so:
plot3(data{5}, data{6}, data{7});
| stackoverflow | {
"language": "en",
"length": 161,
"provenance": "stackexchange_0000F.jsonl.gz:902296",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44659054"
} |
c57143f6fdca576d4f7b9ed73dcd53b7edf15134 | Stackoverflow Stackexchange
Q: SceneBuilder crashed upon startup I performed the installation of SceneBuilder 8.3.0 on Ubuntu Gnome 17.04 with Java Hotspot 1.8.0_131 installed, but when trying to start it, an error occurs, making it impossible to initialize. I've already tried installing Oracle's SceneBuilder 2.0, but the same error occurs.
Do you know what that can be and how I can solve it?
Thank you very much in advance!
A: I noticed that calling the SceneBuilder jar "dist.jar" directly with the Java HotSpot VM launches the application smoothly ("java -jar /opt/SceneBuilder/app/dist.jar"). So one way to work around this problem is to edit the file "/usr/share/applications/SceneBuilder.desktop" and change the line:
Exec=/opt/SceneBuilder/SceneBuilder
for:
Exec=java -jar /opt/SceneBuilder/app/dist.jar
| Q: SceneBuilder crashed upon startup I performed the installation of SceneBuilder 8.3.0 on Ubuntu Gnome 17.04 with Java Hotspot 1.8.0_131 installed, but when trying to start it, an error occurs, making it impossible to initialize. I've already tried installing Oracle's SceneBuilder 2.0, but the same error occurs.
Do you know what that can be and how I can solve it?
Thank you very much in advance!
A: I noticed that calling the SceneBuilder jar "dist.jar" directly with the Java HotSpot VM launches the application smoothly ("java -jar /opt/SceneBuilder/app/dist.jar"). So one way to work around this problem is to edit the file "/usr/share/applications/SceneBuilder.desktop" and change the line:
Exec=/opt/SceneBuilder/SceneBuilder
for:
Exec=java -jar /opt/SceneBuilder/app/dist.jar
| stackoverflow | {
"language": "en",
"length": 111,
"provenance": "stackexchange_0000F.jsonl.gz:902309",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44659100"
} |
1a2e52f925adc740dd10c6ce0e8de6c83c2d919a | Stackoverflow Stackexchange
Q: Inspect history in react-router-dom with HashRouter As described in the title, I'm using react-router-dom and HashRouter to serve a client side app in a portion of my site.
What I'm trying to achieve is something like a back button to pop through the history. This is fine in normal situations, but the history seems to contain the entire browser history - so if someone links directly to a page and clicks the back button, it will take them back to the previous site.
I'd like to be able to detect when this is the case (i.e. the last thing in the history was not within the hash history) - that way I could do something different.
Thanks in advance for the help
Matt
| Q: Inspect history in react-router-dom with HashRouter As described in the title, I'm using react-router-dom and HashRouter to serve a client side app in a portion of my site.
What I'm trying to achieve is something like a back button to pop through the history. This is fine in normal situations, but the history seems to contain the entire browser history - so if someone links directly to a page and clicks the back button, it will take them back to the previous site.
I'd like to be able to detect when this is the case (i.e. the last thing in the history was not within the hash history) - that way I could do something different.
Thanks in advance for the help
Matt
| stackoverflow | {
"language": "en",
"length": 124,
"provenance": "stackexchange_0000F.jsonl.gz:902319",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44659122"
} |
845dd0c5ca475120d7a144e4b3533f17fb8a65d3 | Stackoverflow Stackexchange
Q: Observable get differences between sequence elements I have an observable that emits a sequence of IEnumerables
1: [1,2,3,4]
2: [1,3,4]
3: [1,5,6]
etc..
I want to try and create two observables from this:
* One that emits an IEnumerable of newly added elements:
1: [1,2,3,4]
2: []
3: [5, 6]
etc..
* One that emits an IEnumerable of newly removed elements:
1: []
2: [2]
3: [3,4]
etc..
Is there a way to do this using System.Reactive without having to rely on keeping a separate data structure to compare changes against?
A: It's fairly simple if you use Observable.Zip and Enumerable.Except to easily compare element n to element n-1.
public static class IObservableIEnumerableExtensions
{
public static IObservable<IEnumerable<T>> GetAddedElements<T>(this IObservable<IEnumerable<T>> source)
{
return source.Zip(source.StartWith(Enumerable.Empty<T>()), (newer, older) => newer.Except(older));
}
public static IObservable<IEnumerable<T>> GetRemovedElements<T>(this IObservable<IEnumerable<T>> source)
{
return source.Zip(source.StartWith(Enumerable.Empty<T>()), (newer, older) => older.Except(newer));
}
}
And here's some runner code:
var source = new Subject<IEnumerable<int>>();
var addedElements = source.GetAddedElements();
var removedElements = source.GetRemovedElements();
addedElements.Dump(); //Using Linqpad
removedElements.Dump(); //Using Linqpad
source.OnNext(new int[] { 1, 2, 3, 4 });
source.OnNext(new int[] { 1, 3, 4 });
source.OnNext(new int[] { 1, 5, 6 });
| Q: Observable get differences between sequence elements I have an observable that emits a sequence of IEnumerables
1: [1,2,3,4]
2: [1,3,4]
3: [1,5,6]
etc..
I want to try and create two observables from this:
* One that emits an IEnumerable of newly added elements:
1: [1,2,3,4]
2: []
3: [5, 6]
etc..
* One that emits an IEnumerable of newly removed elements:
1: []
2: [2]
3: [3,4]
etc..
Is there a way to do this using System.Reactive without having to rely on keeping a separate data structure to compare changes against?
A: It's fairly simple if you use Observable.Zip and Enumerable.Except to easily compare element n to element n-1.
public static class IObservableIEnumerableExtensions
{
public static IObservable<IEnumerable<T>> GetAddedElements<T>(this IObservable<IEnumerable<T>> source)
{
return source.Zip(source.StartWith(Enumerable.Empty<T>()), (newer, older) => newer.Except(older));
}
public static IObservable<IEnumerable<T>> GetRemovedElements<T>(this IObservable<IEnumerable<T>> source)
{
return source.Zip(source.StartWith(Enumerable.Empty<T>()), (newer, older) => older.Except(newer));
}
}
And here's some runner code:
var source = new Subject<IEnumerable<int>>();
var addedElements = source.GetAddedElements();
var removedElements = source.GetRemovedElements();
addedElements.Dump(); //Using Linqpad
removedElements.Dump(); //Using Linqpad
source.OnNext(new int[] { 1, 2, 3, 4 });
source.OnNext(new int[] { 1, 3, 4 });
source.OnNext(new int[] { 1, 5, 6 });
A: If you expect adds and removes to be cumulative from the start of the sequence, you need something to remember what has come before.
public static IObservable<IEnumerable<T>> CumulativeAdded<T>(this IObservable<IEnumerable<T>> src) {
var memadd = new HashSet<T>();
return src.Select(x => x.Where(n => memadd.Add(n)));
}
public static IObservable<IEnumerable<T>> CumulativeRemoved<T>(this IObservable<IEnumerable<T>> src) {
var memdiff = new HashSet<T>();
return src.Select(x => { foreach (var n in x) memdiff.Add(n); return memdiff.AsEnumerable().Except(x); });
}
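The pairwise diff that the Zip/Except combination performs can also be sketched outside Rx; here is the plain set logic in Python (not an Rx implementation, just the comparison each emission makes against its predecessor):

```python
def pairwise_diffs(snapshots):
    """Yield (added, removed) sets for each snapshot vs. its predecessor."""
    previous = set()
    for snapshot in snapshots:
        current = set(snapshot)
        yield current - previous, previous - current  # added, removed
        previous = current

snaps = [[1, 2, 3, 4], [1, 3, 4], [1, 5, 6]]
for added, removed in pairwise_diffs(snaps):
    print(sorted(added), sorted(removed))
# [1, 2, 3, 4] []
# [] [2]
# [5, 6] [3, 4]
```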
| stackoverflow | {
"language": "en",
"length": 261,
"provenance": "stackexchange_0000F.jsonl.gz:902338",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44659176"
} |
76eb6de701f2fbf3d4bbefb5ed0f9b887febe15f | Stackoverflow Stackexchange
Q: Break very long words with php mpdf I have a generated PDF file and everything works just fine, except for one thing: if I write a very long word, for example:
qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
The whole document becomes small and unreadable; for some reason it tries to fit the word in one row instead of breaking it.
(I have tried word breaks, $mpdf->shrink_tables_to_fit and autosize; nothing helps.)
Thank you in advance.
A: Add the CSS property overflow-wrap: break-word; to your table or div
Q: Break very long words with php mpdf I have a generated PDF file and everything works just fine, except for one thing: if I write a very long word, for example:
qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
The whole document becomes small and unreadable; for some reason it tries to fit the word in one row instead of breaking it.
(I have tried word breaks, $mpdf->shrink_tables_to_fit and autosize; nothing helps.)
Thank you in advance.
A: Add the CSS property overflow-wrap: break-word; to your table or div
A:
Try this class:
.pre-line {
white-space: pre-line;
}
In html:
<p class="pre-line">your-long-text</p>
| stackoverflow | {
"language": "en",
"length": 87,
"provenance": "stackexchange_0000F.jsonl.gz:902339",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44659178"
} |
bfeabbd97fb91ef142df79f6eca645ecaccacad3 | Stackoverflow Stackexchange
Q: numpy dot product with missing values How do you do a numpy dot product where the two vectors might have missing values? This seems to require many additional steps, is there an easier way to do this?:
v1 = np.array([1,4,2,np.nan,3])
v2 = np.array([np.nan,np.nan,2,4,1])
np.where(np.isnan(v1),0,v1).dot(np.where(np.isnan(v2),0,v2))
A: We can use np.nansum to sum up the values ignoring NaNs after element-wise multiplication -
np.nansum(v1*v2)
Sample run -
In [109]: v1
Out[109]: array([ 1., 4., 2., nan, 3.])
In [110]: v2
Out[110]: array([ nan, nan, 2., 4., 1.])
In [111]: np.where(np.isnan(v1),0,v1).dot(np.where(np.isnan(v2),0,v2))
Out[111]: 7.0
In [115]: v1*v2
Out[115]: array([ nan, nan, 4., nan, 3.])
In [116]: np.nansum(v1*v2)
Out[116]: 7.0
| Q: numpy dot product with missing values How do you do a numpy dot product where the two vectors might have missing values? This seems to require many additional steps, is there an easier way to do this?:
v1 = np.array([1,4,2,np.nan,3])
v2 = np.array([np.nan,np.nan,2,4,1])
np.where(np.isnan(v1),0,v1).dot(np.where(np.isnan(v2),0,v2))
A: We can use np.nansum to sum up the values ignoring NaNs after element-wise multiplication -
np.nansum(v1*v2)
Sample run -
In [109]: v1
Out[109]: array([ 1., 4., 2., nan, 3.])
In [110]: v2
Out[110]: array([ nan, nan, 2., 4., 1.])
In [111]: np.where(np.isnan(v1),0,v1).dot(np.where(np.isnan(v2),0,v2))
Out[111]: 7.0
In [115]: v1*v2
Out[115]: array([ nan, nan, 4., nan, 3.])
In [116]: np.nansum(v1*v2)
Out[116]: 7.0
A: Another solution is to use masked arrays:
import numpy as np
v1 = np.array([1,4,2,np.nan,3])
v2 = np.array([np.nan,np.nan,2,4,1])
v1_m = np.ma.array(v1, mask=np.isnan(v1))
v2_m = np.ma.array(v2, mask=np.isnan(v2))
np.ma.dot(v1_m, v2_m)
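Both approaches can be wrapped in a small helper; a quick sketch verifying they agree on the question's vectors:

```python
import numpy as np

def nan_dot(a, b):
    """Dot product treating NaN in either vector as missing (contributes 0)."""
    return np.nansum(a * b)

v1 = np.array([1, 4, 2, np.nan, 3])
v2 = np.array([np.nan, np.nan, 2, 4, 1])

# masked-array route: mask out NaNs, then use the mask-aware dot product
masked = np.ma.dot(np.ma.masked_invalid(v1), np.ma.masked_invalid(v2))
print(nan_dot(v1, v2), float(masked))  # both give 7.0 (2*2 + 3*1)
```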
| stackoverflow | {
"language": "en",
"length": 129,
"provenance": "stackexchange_0000F.jsonl.gz:902347",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44659204"
} |
cd7b69a6ab90d53f86e1549a8c3d752660bc8fc2 | Stackoverflow Stackexchange
Q: How to open Visual Studio Solution File from the Command Prompt Today I was in the Windows Command Prompt after doing a git clone https://...MySolution.git and wanted to open the .sln (i.e., solution file) from the new directory of the cloned repo.
What is the command to open this new solution in Visual Studio? Suppose the relative path is /MySolution/MySolution.sln
A: If you haven't done cd MySolution but are still in the directory from which you did the git clone just type
start MySolution/MySolution.sln and hit Enter.
This will open whatever version of Visual Studio you currently have set to open with .sln files in Windows.
| Q: How to open Visual Studio Solution File from the Command Prompt Today I was in the Windows Command Prompt after doing a git clone https://...MySolution.git and wanted to open the .sln (i.e., solution file) from the new directory of the cloned repo.
What is the command to open this new solution in Visual Studio? Suppose the relative path is /MySolution/MySolution.sln
A: If you haven't done cd MySolution but are still in the directory from which you did the git clone just type
start MySolution/MySolution.sln and hit Enter.
This will open whatever version of Visual Studio you currently have set to open with .sln files in Windows.
A: Actually, you can also directly run \MySolution\MySolution.sln or .\MySolution\MySolution.sln and the solution will be opened.
I've been using it in CMD and Powershell with no problem.
A: If you don't mind using PowerShell, you should try WhatsNew. Once installed, you can just type sln to open the solution file in that directory.
| stackoverflow | {
"language": "en",
"length": 160,
"provenance": "stackexchange_0000F.jsonl.gz:902368",
"question_score": "37",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44659275"
} |
c775abeb0769b39e2c66f3c0392572d160b5e2d8 | Stackoverflow Stackexchange
Q: Why doesn't inline-grid work in Safari? I am working on a WordPress gravity form and used inline-grid for the layout.
It works perfectly in Firefox and Chrome.
But when it comes to Safari, display: inline-grid does not work. Although display: inline-block works.
Run the following code snippet in Safari to see what I am talking about.
.item {
width: 50px;
height: 50px;
background-color: lightgray;
display: inline-block;
margin: 5px;
}
.item2 {
width: 50px;
height: 50px;
background-color: gray;
display: inline-grid;
margin: 5px;
}
<div class="item"></div>
<div class="item"></div>
<hr>
<div class="item2"></div>
<div class="item2"></div>
A: Safari supports CSS Grid Layout
desktop -- from version 10.1
ios -- from version 10.3
http://caniuse.com/#feat=css-grid
You're probably not using a very recent Safari.
BTW, on my desktop v. 10.1.1 your code works as expected.
| Q: Why doesn't inline-grid work in Safari? I am working on a WordPress gravity form and used inline-grid for the layout.
It works perfectly in Firefox and Chrome.
But when it comes to Safari, display: inline-grid does not work. Although display: inline-block works.
Run the following code snippet in Safari to see what I am talking about.
.item {
width: 50px;
height: 50px;
background-color: lightgray;
display: inline-block;
margin: 5px;
}
.item2 {
width: 50px;
height: 50px;
background-color: gray;
display: inline-grid;
margin: 5px;
}
<div class="item"></div>
<div class="item"></div>
<hr>
<div class="item2"></div>
<div class="item2"></div>
A: Safari supports CSS Grid Layout
desktop -- from version 10.1
ios -- from version 10.3
http://caniuse.com/#feat=css-grid
You're probably not using a very recent Safari.
BTW, on my desktop v. 10.1.1 your code works as expected.
| stackoverflow | {
"language": "en",
"length": 128,
"provenance": "stackexchange_0000F.jsonl.gz:902380",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44659311"
} |
5964d4231a3a0fbaadcd691925b27b259daf1367 | Stackoverflow Stackexchange
Q: Prometheus Union of Ranged Vectors I have two range vectors (# of hits and misses) that I want to aggregate by their types. Some of the types have hits, others misses, some both. These are two independent metrics that I'm trying to get a union of, but the resulting vector doesn't make sense. It's missing some of the values and I think it's because they have either all hits or misses. Am I doing this completely the wrong way?
sum by (type) (increase(metric_hit{}[24h])) + sum by (type) (increase(metric_miss{}[24h]))
A: First off, it's recommended to always initialise all your potential label values to avoid this sort of issue.
This can be handled with the or operator:
sum by (type) (
(increase(metric_hit[1d]) or metric_miss * 0)
+
(increase(metric_miss[1d]) or metric_hit * 0)
)
Q: Prometheus Union of Ranged Vectors I have two range vectors (# of hits and misses) that I want to aggregate by their types. Some of the types have hits, others misses, some both. These are two independent metrics that I'm trying to get a union of, but the resulting vector doesn't make sense. It's missing some of the values and I think it's because they have either all hits or misses. Am I doing this completely the wrong way?
sum by (type) (increase(metric_hit{}[24h])) + sum by (type) (increase(metric_miss{}[24h]))
A: First off, it's recommended to always initialise all your potential label values to avoid this sort of issue.
This can be handled with the or operator:
sum by (type) (
(increase(metric_hit[1d]) or metric_miss * 0)
+
(increase(metric_miss[1d]) or metric_hit * 0)
)
| stackoverflow | {
"language": "en",
"length": 133,
"provenance": "stackexchange_0000F.jsonl.gz:902408",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44659400"
} |
a9e874610bdcba0406b16e60fa1f568347fa11df | Stackoverflow Stackexchange
Q: Very slow response for Google Cloud Machine Learning training job UPDATED: I'm submitting a Machine Learning training job using the Google Cloud Platform command line, following the guidelines here. I've defined local paths to my Python package, the job name, the main module for GC to run, and the other variables required by gcloud. When I run the following command to upload my package and submit a job:
gcloud ml-engine jobs submit training $JOB_NAME \
--package-path $TRAINER_PACKAGE_PATH \
--module-name $MAIN_TRAINER_MODULE \
--job-dir $JOB_DIR \
--region $REGION \
--config config.yaml \
--staging-bucket $PACKAGE_STAGING_LOCATION \
-- \
--verbosity DEBUG
the job submission takes about 45 minutes before any logs or messages appear. The package I am uploading during the submit is only code, not large amounts of data or anything. Is this normal? Is there a way to speed the process up?
Thanks,
| Q: Very slow response for Google Cloud Machine Learning training job UPDATED: I'm submitting a Machine Learning training job using the Google Cloud Platform command line, following the guidelines here. I've defined local paths to my Python package, the job name, the main module for GC to run, and the other variables required by gcloud. When I run the following command to upload my package and submit a job:
gcloud ml-engine jobs submit training $JOB_NAME \
--package-path $TRAINER_PACKAGE_PATH \
--module-name $MAIN_TRAINER_MODULE \
--job-dir $JOB_DIR \
--region $REGION \
--config config.yaml \
--staging-bucket $PACKAGE_STAGING_LOCATION \
-- \
--verbosity DEBUG
the job submission takes about 45 minutes before any logs or messages appear. The package I am uploading during the submit is only code, not large amounts of data or anything. Is this normal? Is there a way to speed the process up?
Thanks,
| stackoverflow | {
"language": "en",
"length": 144,
"provenance": "stackexchange_0000F.jsonl.gz:902437",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44659502"
} |
04bdc76b5990a1c593caca2700b4cca3e6cb431e | Stackoverflow Stackexchange
Q: Why 'onClick' in react got bound to emptyFunction? Why does onClick in React get bound to emptyFunction?
I have the following line in my component; it doesn't work:
onClick={ me => me.preventDefault() }
At the same time, if I change onClick to onMouseDown - it works.
There are no errors in the console. Inspecting the DOM shows that the onClick handler is React's emptyFunction.
A: My previous findings were wrong.
The real problem was that the component had been destroyed before the event was handled. This may happen if you stop event propagation somewhere else, especially on body, or if another handler subscribed to the same event destroys the component.
Q: Why 'onClick' in react got bound to emptyFunction? Why does onClick in React get bound to emptyFunction?
I have the following line in my component; it doesn't work:
onClick={ me => me.preventDefault() }
At the same time, if I change onClick to onMouseDown - it works.
There are no errors in the console. Inspecting the DOM shows that the onClick handler is React's emptyFunction.
A: My previous findings were wrong.
The real problem was that the component had been destroyed before the event was handled. This may happen if you stop event propagation somewhere else, especially on body, or if another handler subscribed to the same event destroys the component.
| stackoverflow | {
"language": "en",
"length": 112,
"provenance": "stackexchange_0000F.jsonl.gz:902454",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44659548"
} |
932705e1e59976f2dd69ecf332ecac7d5d766a86 | Stackoverflow Stackexchange
Q: Unlist a column while retaining character(0) as empty strings in R I am relatively new to R. I have a dataframe that has a column stored as a list. My column contains c("Benzo", "Ferri") or character(0) if it's empty. How can I change them to simply Benzo, Ferri and an empty string for character(0) instead?
I'm not able to do, for instance, df$general_RN <- unlist(df$general_RN) because: Error in $<-.data.frame(*tmp*, general_RN, value = c("Drug Combinations", : replacement has 1992 rows, data has 10479
I am assuming that all the character(0) have been removed but I need them retained as NAs.
Here is what the column looks like
general_RN
c("Chlorambucil", "Vincristine", "Cyclophosphamide")
Pentazocine
character(0)
character(0)
c("Ampicillin", "Trimethoprim")
character(0)
I have ashamedly spent an hour on this problem.
Thanks for your advice.
A: It's tough to say without more information about your data, but maybe this can be a solution for you, or at least point you in the right direction:
a <- list('A',character(0),'B')
> a
[[1]]
[1] "A"
[[2]]
character(0)
[[3]]
[1] "B"
> unlist(lapply(a,function(x) if(identical(x,character(0))) ' ' else x))
[1] "A" " " "B"
So in your case that should be:
df$general_RN <- unlist(lapply(df$general_RN,function(x) if(identical(x,character(0))) ' ' else x))
HTH
Q: Unlist a column while retaining character(0) as empty strings in R I am relatively new to R. I have a dataframe that has a column stored as a list. My column contains c("Benzo", "Ferri") or character(0) if it's empty. How can I change them to simply Benzo, Ferri and an empty string for character(0) instead?
I'm not able to do, for instance, df$general_RN <- unlist(df$general_RN) because: Error in $<-.data.frame(*tmp*, general_RN, value = c("Drug Combinations", : replacement has 1992 rows, data has 10479
I am assuming that all the character(0) have been removed but I need them retained as NAs.
Here is what the column looks like
general_RN
c("Chlorambucil", "Vincristine", "Cyclophosphamide")
Pentazocine
character(0)
character(0)
c("Ampicillin", "Trimethoprim")
character(0)
I have ashamedly spent an hour on this problem.
Thanks for your advice.
A: It's tough to say without more information about your data, but maybe this can be a solution for you, or at least point you in the right direction:
a <- list('A',character(0),'B')
> a
[[1]]
[1] "A"
[[2]]
character(0)
[[3]]
[1] "B"
> unlist(lapply(a,function(x) if(identical(x,character(0))) ' ' else x))
[1] "A" " " "B"
So in your case that should be:
df$general_RN <- unlist(lapply(df$general_RN,function(x) if(identical(x,character(0))) ' ' else x))
HTH
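For readers coming from Python, the same flatten-with-fallback idea looks like this (a sketch, not R; the empty list plays the role of character(0), and the data mirrors the question's column):

```python
rows = [["Chlorambucil", "Vincristine", "Cyclophosphamide"],
        ["Pentazocine"],
        [],  # the Python analogue of character(0)
        ["Ampicillin", "Trimethoprim"]]

# join each list into one string; empty lists become empty strings
flattened = [", ".join(r) if r else "" for r in rows]
print(flattened)
```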
| stackoverflow | {
"language": "en",
"length": 199,
"provenance": "stackexchange_0000F.jsonl.gz:902458",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44659559"
} |
2addedd15b49ad857b03c9a885a2dfd815c3dbb1 | Stackoverflow Stackexchange
Q: How do I pass advanced variables to Paypal PDT and IPN from hosted BuyNow Button My Paypal Hosted BuyButton applies a discount, done by adding this to advanced variables.
discount_rate=40
And that all works fine.
The problem is that in my IPN processing I check that the user has paid the correct amount by calling request.getParameter(mc_gross) and then checking the mc_gross figure against the expected figure. But mc_gross does not include the discount, so this fails for discounted purchases.
I thought I could do
request.getParameter(discount_rate)
and then work out the net rate but it doesn't return the value.
So my question is: how do I get access to advanced variables from IPN (and PDT)? A supplementary question: is there a standard variable that shows the amount actually paid by the user (i.e. after discount)?
A: According to their docs, https://developer.paypal.com/docs/classic/ipn/integration-guide/IPNandPDTVariables/#id091EB04C0HS
It seems the discount amount would be retrieved with request.getParameter(discount), which would be the total discount applied to mc_gross_x.
You can get the rate by dividing the discount by the mc_gross_x.
| Q: How do I pass advanced variables to Paypal PDT and IPN from hosted BuyNow Button My Paypal Hosted BuyButton applies a discount, done by adding this to advanced variables.
discount_rate=40
And that all works fine.
The problem is that in my IPN processing I check that the user has paid the correct amount by calling request.getParameter(mc_gross) and then checking the mc_gross figure against the expected figure. But mc_gross does not include the discount, so this fails for discounted purchases.
I thought I could do
request.getParameter(discount_rate)
and then work out the net rate but it doesn't return the value.
So my question is: how do I get access to advanced variables from IPN (and PDT)? A supplementary question: is there a standard variable that shows the amount actually paid by the user (i.e. after discount)?
A: According to their docs, https://developer.paypal.com/docs/classic/ipn/integration-guide/IPNandPDTVariables/#id091EB04C0HS
It seems the discount amount would be retrieved with request.getParameter(discount), which would be the total discount applied to mc_gross_x.
You can get the rate by dividing the discount by the mc_gross_x.
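A minimal sketch of the verification arithmetic discussed above. All figures and names here are hypothetical examples (a 40% discount_rate on a 100.00 item); they mirror the IPN variables but are not taken from PayPal's API:

```python
def expected_net(list_price, discount_rate):
    """Amount the buyer should actually pay after a percentage discount."""
    return round(list_price * (1 - discount_rate / 100.0), 2)

net = expected_net(100.00, 40)   # -> 60.0
mc_gross = 60.00                 # hypothetical amount reported after the discount
print(abs(mc_gross - net) < 0.01)  # True: the payment matches the discounted price
```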
| stackoverflow | {
"language": "en",
"length": 168,
"provenance": "stackexchange_0000F.jsonl.gz:902495",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44659683"
} |
5cc70324592cd0d93c937993073fb14eaeab6b67 | Stackoverflow Stackexchange
Q: Installing selenium with python with homebrew I have downloaded selenium to my Mac. I am trying to run a script in python using selenium but I am getting the error:
Traceback (most recent call last):
File "selenium.py", line 3, in <module>
from selenium import webdrive
File "/Users/shynds23/python/selenium.py", line 3, in <module>
from selenium import webdriver
ImportError: cannot import name webdriver
My script has these headers:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.common.exceptions import NoSuchElementException
import time
Any ideas on how to fix it?
A: Try downloading selenium using pip instead:
pip install selenium
If pip is not installed on your machine, run:
python get-pip.py
Here is the link for get-pip:
https://bootstrap.pypa.io/get-pip.py
Q: Installing selenium with python with homebrew I have downloaded selenium to my Mac. I am trying to run a script in python using selenium but I am getting the error:
Traceback (most recent call last):
File "selenium.py", line 3, in <module>
from selenium import webdrive
File "/Users/shynds23/python/selenium.py", line 3, in <module>
from selenium import webdriver
ImportError: cannot import name webdriver
My script has these headers:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.common.exceptions import NoSuchElementException
import time
Any ideas on how to fix it?
A: Try downloading selenium using pip instead:
pip install selenium
If pip is not installed on your machine, run:
python get-pip.py
Here is the link for get-pip:
https://bootstrap.pypa.io/get-pip.py
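Worth noting from the traceback: the script itself is named selenium.py, so Python likely resolves import selenium to the script rather than the installed package. A small sketch (the helper name is made up) to spot that kind of shadowing:

```python
import importlib.util

def module_origin(name):
    """Return the file a module would be imported from, or None if not found."""
    spec = importlib.util.find_spec(name)
    return getattr(spec, "origin", None)

# If this prints a path inside your own project instead of site-packages,
# the script is shadowing the package: rename it (e.g. selenium.py -> scrape.py)
# and remove any leftover selenium.pyc next to it.
print(module_origin("json"))  # stdlib example; try "selenium" in your own environment
```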
| stackoverflow | {
"language": "en",
"length": 115,
"provenance": "stackexchange_0000F.jsonl.gz:902519",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44659756"
} |
15f6b116a858682a2694e6bd29b42eda73879394 | Stackoverflow Stackexchange
Q: jenkins pipeline: can't pass build parameters to shared library vars Basically I can't pass build properties to Library var call without extra nonsense.
jenkinsfile relevant chunk:
tc_test{
repo = 'test1'
folder = 'test2'
submodules = true
refs = params.GitCheckout
}
That results in error
java.lang.NullPointerException: Cannot get property 'GitCheckout' on
null object
This, however, works:
def a1 = params.GitCheckout
tc_test{
repo = 'test1'
folder = 'test2'
submodules = true
refs = a1
}
The contents of the vars/tc_test.groovy in shared library :
def call ( body ) {
def config = [:]
body.resolveStrategy = Closure.DELEGATE_FIRST
body.delegate = config
try {
body()
} catch(e) {
currentBuild.result = "FAILURE";
throw e;
} finally {
config.each{ k, v -> println "${k}:${v}" }
}
}
I'm not really good with groovy, so it might be something obvious.
A: Got the answer from Jenkins JIRA.
A small workaround is to use maps instead of closures:
tc_test ([
repo: 'test1',
folder: 'test2',
submodules: true,
refs: params.GitCheckout
])
May have drawbacks, but for me that worked perfectly.
I still have to pass params as an argument to have access to them, but at least the code makes more sense now.
A: Suppose you have a sharedLibrary to call a Rundeck Job,
Parameters:
1 runDeckJobId - Rundeck unique job id thats available in settings.
2 role - AD Group associated with Rundeck Job
3 runDeckProject - Name of the project configured in rundeck.
4 optional - All optional parameters as a Map.
  - rundeckInstance - Rundeck instances are currently in UK or HK.
- retries - Number of retries for checking job status once started (Default value=100)
- timeout - Number of seconds to be waited before each retry (Default value=15)
- verbose - If verbose calls need to be made in the rundeck api calls (Default value=false)
- rundeckArgs - All rundeck parameters as a map. Eg: Name of the playBook, location of inventory file.
Example Usage in JenkinsFile:
if (isRundeckDeployment == "true") {
def optional = [
rundeckInstance : "UK",
timeout : 10,
rundeckArgs : [
artifactPack : "${artifactPath}",
DEPLOYMENT_ENVIRONMENT: "${deploymentEnvironment}",
EXTRA_ARGS : "-e deployment_serial=1"
]
]
triggerRundeckJob("job-id", "AD-group-id", "BitbucketKey", optional)
}
Shared Library Function with filename : triggerRundeckJob in vars folder
def call(String rundeckJobId, String role, String rundeckProject, Map optional) {
String jobUserId
wrap([$class: 'BuildUser']) {
jobUserId = "${BUILD_USER_ID}"
}
// Determine rundeck instance type, by default instance is UK (rundeckAuthToken)
String mainRundeckId = optional.rundeckInstance == "HK" ? "rundeckAuthTokenHK": "rundeckAuthToken"
String rundeckBaseURL = optional.rundeckInstance == "HK" ? "https://rundeckUrl/selfservice" : "https://rundeckUrl:9043/selfservice"
withCredentials([string(credentialsId: mainRundeckId, variable: 'mainRundeckIdVariable')]) {
int retries = optional.retries ?: 100
int timeout = optional.timeout ?: 15
String verbose = optional.verbose? "-v" : "-s"
String rundeckArgsString = optional.rundeckArgs.collect{ "-${it.key} \\\"${it.value}\\\"" }.join(" ")
def tokenResponse = sh(returnStdout: true, script: "curl -k ${verbose} -X POST -d '{\"user\": \"${jobUserId}\",\"roles\":\"${role}\",\"duration\":\"30m\"}' -H Accept:application/json -H 'Content-Type: application/json' -H X-Rundeck-Auth-Token:${mainRundeckIdVariable} ${rundeckBaseURL}/api/19/tokens")
def tokenResponseJson = readJSON text: tokenResponse
def rundeckResponse = sh(returnStdout: true, script: "curl -k ${verbose} --data-urlencode argString=\"${rundeckArgsString}\" -H Accept:application/json -H X-Rundeck-Auth-Token:${tokenResponseJson.token} ${rundeckBaseURL}/api/19/job/${rundeckJobId}/run")
def rundeckResponseJson = readJSON text: rundeckResponse
if(!rundeckResponseJson.error){
while(true){
if(retries==0) {
currentBuild.result = "FAILURE"
echo "Rundeck Job Timedout, See: ${rundeckBaseURL}/project/${rundeckProject}/job/show/${rundeckJobId}"
break;
}
def jobStateResponse = sh(returnStdout: true, script:"curl -k ${verbose} -H Accept:application/json -H X-Rundeck-Auth-Token:${tokenResponseJson.token} ${rundeckBaseURL}/api/19/execution/${rundeckResponseJson.id}/state")
def jobStateResponseJson = readJSON text: jobStateResponse
if(jobStateResponseJson.completed) {
if(jobStateResponseJson.executionState == "FAILED") {
currentBuild.result = "FAILURE"
echo "Rundeck Job FAILED, See: ${rundeckBaseURL}/project/${rundeckProject}/job/show/${rundeckJobId}"
break
}else{
currentBuild.result = "SUCCESS"
echo "Rundeck Job SUCCESS, See: ${rundeckBaseURL}/project/${rundeckProject}/job/show/${rundeckJobId}"
break
}
}
else{
sleep timeout
}
retries--
}
}else{
echo "******************Rundeck Job Error: ${rundeckResponseJson.message} ******************"
currentBuild.result = "FAILURE"
}
}
}
| stackoverflow | {
"language": "en",
"length": 577,
"provenance": "stackexchange_0000F.jsonl.gz:902521",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44659759"
} |
f0c5067f8c05bd9ccb5522c2a4e13760fb2d90c6 | Stackoverflow Stackexchange
Q: ttk button span multiple columns I am trying to make a TTK Button that spans multiple columns within a frame. Basically I have two rows of buttons, and I want the last button underneath both rows, to span the width of both rows.
However I am not sure how to accomplish this. This is the code I have on the button:
btnOff = ttk.Button(self, text = "OFF", command = tc.Off).grid(column = 1, row = 10, columnspan = 2, rowspan = 2)
I have tried increasing the column width, but it doesn't seem to help. In fact, even when I try to just set it up normally it is smaller than the other buttons in the rows above it, even though all those buttons have the same grid code as what I posted above.
A: Example expanding the button across both columns (row 10, columns 1 and 2). Import tkinter first:
Python 2:
import Tkinter as tk
Python 3:
import tkinter as tk
btnOff = ttk.Button(self, text = "OFF", command = tc.Off).grid(column = 1, row = 10, columnspan = 2, sticky = tk.W+tk.E)
A: When you want to use columnspan, make sure sticky includes W and E; similarly, for rowspan you will need N and S.
| stackoverflow | {
"language": "en",
"length": 208,
"provenance": "stackexchange_0000F.jsonl.gz:902553",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44659879"
} |
37468f6dcdbcd64117254e145a416d03f544f856 | Stackoverflow Stackexchange
Q: PyCharm OpenCV- autocomplete with import cv2.cv2, no errors with import cv2 I'm just getting started with PyCharm, python, and OpenCV, and I'm trying to set up my environment. I've installed all the necessary packages and I import OpenCV like so:
import cv2
However, this does not autocomplete and shows warnings that the method may be missing when called, BUT if I import like so:
import cv2.cv2
autocomplete does work, but running produces the following error:
Traceback (most recent call last):
File "C:/Users/dunnj/PycharmProjects/TransformApps/transformapps/blackwhite.py", line 1, in <module>
import cv2.cv2 as cv2
AttributeError: 'module' object has no attribute 'cv2'
A: just execute the following commands in your project working environment.
*
*pip uninstall opencv-python
*pip install opencv-python==4.5.4.60
A: The proposed import solution did not work for me.
I had exactly this problem with OpenCV 4.2.0 compiled from sources, installed in my Conda environment and PyCharm 2020.1.
I solved this way:
*
*Select project interpreter
*Click on the settings button next to it and then clicking on the Show paths for selected interpreter
*added the directory containing the cv2 library (in my case in the Conda Python library path, e.g. miniconda3/lib/python3.7/site-packages/cv2/python-3.7; in general, check the site-packages/cv2/python-X.X directory)
A: Following Workaround 2 from the JetBrains issue tracker (https://youtrack.jetbrains.com/issue/PY-54649) helped me:
*
*In PyCharm open from menue FILE - SETTINGS
*Go to PROJECT:<your_project_name> and select PYTHON INTERPRETER
*Click on the gear symbol next to the interpreter path and select SHOW ALL.
Make sure the correct interpreter is selected.
*Click on that icon that looks like a folder tree (on the top)
*Click on the "+" icon
*Select the folder where the opencv package is located
normally (if you installed it via package manager) you will find it in:
<your_project_path>\venv\Lib\site-packages\cv2
*Click OK (twice)
*Wait for updating skeletons
A: My Configuration:
*
*PyCharm 2021.2.3 on macOS 11.6
*Python 3.9.7 running in a Virtual Environment(VE)
*opencv-python 4.5.4.58 installed into the VE via pip using the PyCharm Terminal window
Steps that worked for me to get autocompletion working:
tldr: Update python interpreter settings to point to <full path to venv>/lib/python3.9/site-packages/cv2
*
*In preferences, Select Python Interpreter
*Click the setting icon ( gear on right of box that display your Python Interpreter and select Show All
*A list of all your configured Interpreters is show with your current interpreter already hi-lighted.
*With your interpreter still highlighted, click the Icon that shows a folder and subfolder at the top. Tool tip should say "Show Paths for Selected Interpreter.
*Click the + button and add the following path:
<full path to the venv>/lib/python3.9/site-packages/cv2
The .../python3.9... will be different if you are using a different Python Version.
*Click Ok until you are back to the main IDE window.
This has worked in three different Virtual environments for me so far. For two of those, I had to restart the IDE for the completions to show up. The remaining one did not require a restart and worked immediately.
A: Credit to ingolemo from r/learnpython. I was stuck on this for ages and it drove me mad so I'm here sharing.
My OpenCV was installed by using the wrapper opencv-python package
The sys.modules hacking that that module is doing is the source of the
problem. Pycharm doesn't exactly import modules in order to know
what's inside of them, so messing with the imports dynamically like
that confuses pycharm greatly. It's not pycharm's fault, the
opencv-python maintainer should have used a star import rather than
that messy import hack. You should be able to work around the problem
using the technique you stumbled upon. All you have to do is catch and
ignore the error under normal operation:
import cv2
# this is just to unconfuse pycharm
try:
from cv2 import cv2
except ImportError:
pass
A: Installing Jedi solved this problem for me.
You can use pip install jedi in terminal
You can find more info about jedi here: https://pypi.org/project/jedi/
A: I had the same problem.
I used
import cv2 as cv2
and after that both importing methods worked.
A: try
try:
import cv2.__init__ as cv2
except ImportError:
pass
A: If you are using virtualenv, then mark the virtualenv directory as excluded in your project structure in Pycharm project settings.
A: Encountered this before.
*
*find "cv2.cp38-win_amd64.pyd" in "Lib\site-packages\cv2" path.
*Copy it to "DLLs" path.
Work for system python and anaconda environments(need to do this in conda envs path)
PS.
*
*"site-packages" path can be found by "pip --version"
*"DLLs" path is located at "Lib\site-packages....\DLLs"
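As an aside (not from the answers above): since several of the fixes boil down to finding the on-disk directory of the cv2 package and adding it to the interpreter paths, a small hypothetical helper can print that path for you. It is demonstrated with the stdlib "json" package so it runs anywhere; substitute "cv2" on a machine where opencv-python is installed.

```python
# Hypothetical helper to print the on-disk directory of an installed
# package, i.e. the path you would add under "Show Paths for Selected
# Interpreter". Demonstrated with the stdlib "json" package; swap in
# "cv2" where opencv-python is installed.
import importlib.util
import os

def package_dir(name):
    """Return the directory containing the named package, or None."""
    spec = importlib.util.find_spec(name)
    if spec is None or spec.origin is None:
        return None
    return os.path.dirname(spec.origin)

print(package_dir("json"))  # e.g. .../lib/python3.x/json
```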
| stackoverflow | {
"language": "en",
"length": 740,
"provenance": "stackexchange_0000F.jsonl.gz:902561",
"question_score": "13",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44659906"
} |
4e6503c128a2d2932ffa601d8b8b32421473e7b1 | Stackoverflow Stackexchange
Q: Using dplyr group_by in a function I am trying to use dplyr's group_by in a local function, example:
testFunction <- function(df, x) {
df %>%
group_by(x) %>%
summarize(mean.Petal.Width = mean(Petal.Width))
}
testFunction(iris, Species)
and I get an error "... unknown variable to group by: x"
I've tried group_by_ and it gives me a summary of the entire dataset.
Anybody have a clue how I can fix this?
Thanks in advance!
A: Here is one way to work with the new enquo from dplyr: enquo captures the expression and converts it to a quosure, which gets evaluated by unquoting (UQ or !!) in group_by, mutate, summarise etc.
library(dplyr)
testFunction <- function(df, x) {
x <- enquo(x)
df %>%
group_by(!! x) %>%
summarize(mean.Petal.Width = mean(Petal.Width))
}
testFunction(iris, Species)
# A tibble: 3 x 2
# Species mean.Petal.Width
# <fctr> <dbl>
#1 setosa 0.246
#2 versicolor 1.326
#3 virginica 2.026
A: I got it to work like this:
testFunction <- function(df, x) {
df %>%
group_by(get(x)) %>%
summarize(mean.Petal.Width = mean(Petal.Width))
}
testFunction(iris,"Species")
I changed x to get(x), and Species to "Species" in testFunction(iris,...).
| stackoverflow | {
"language": "en",
"length": 180,
"provenance": "stackexchange_0000F.jsonl.gz:902577",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44659943"
} |
649ce7616008fc1ac630dc44fd0dfc9709d1db44 | Stackoverflow Stackexchange
Q: Replace array elements without losing reference? How does one replace all elements of an array without losing references?
var arr = [1, 2, 3];
var b = arr;
b == arr; // true
magic(arr, [4, 5, 6]);
b == arr; // should return true
One way of doing it is by popping and pushing. Is there a clean way?
A: You could splice the old values and append the new values.
function magic(reference, array) {
[].splice.apply(reference, [0, reference.length].concat(array));
}
var arr = [1, 2, 3],
b = arr;
console.log(b === arr); // true
magic(arr, [4, 5, 6]);
console.log(b === arr); // should return true
console.log(arr);
Another way, is to use Object.assign. This requires to set the length of the array, if it is smaller than the original array.
function magic(reference, array) {
Object.assign(reference, array, { length: array.length });
}
var arr = [1, 2, 3],
b = arr;
console.log(b === arr); // true
magic(arr, [4, 5, 6, 7]);
console.log(b === arr); // should return true
console.log(arr);
A: The magic part could be:
arr.splice(0, arr.length, 4, 5, 6);
var arr = [1, 2, 3];
var b = arr;
b == arr; // true
arr.splice(0, arr.length, 4, 5, 6);
console.log(b);
console.log(arr);
console.log(arr === b);
.as-console-wrapper { max-height: 100% !important; top: 0; }
If you already have the replacing array in a variable (let's say repl = [4, 5, 6]), then use the rest parameters syntax:
arr.splice(0, arr.length, ...repl);
var arr = [1, 2, 3];
var b = arr;
var repl = [4, 5, 6];
b == arr; // true
arr.splice(0, arr.length, ...repl);
console.log(b);
console.log(arr);
console.log(arr === b);
.as-console-wrapper { max-height: 100% !important; top: 0; }
A: Here's one way:
var arr = [1, 2, 3];
var b = arr;
console.log(`b == arr, b
`, b == arr, b.join());
var c = magic(arr, [4, 5, 6]);
console.log(`b == arr, b
`, b == arr, b.join());
console.log(`c == arr, c
`, c == arr, c.join());
function magic(to, from) {
// remove elements from existing array
var old = to.splice(0);
for (var i = 0; i < from.length; i++) {
to[i] = from[i];
}
return old;
}
This implementation returns a copy of the old elements that were originally in the array.
A: Copy the new values over the old ones.
function magic(arr, newvals) {
for (let i = 0; i < newvals.length; i++) arr[i] = newvals[i];
arr.length = newvals.length;
}
A: function replaceArrValues(arrRef, newValues)
{
arrRef.length = 0; // clear the array without losing reference
newValues.forEach(x => arrRef.push(x));
}
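As a cross-language aside (not part of the original answers): the same reference-preserving replacement exists in Python, where slice assignment mutates the existing list object in place so every alias keeps seeing the same object.

```python
# Python analogue of the JavaScript idiom above (an illustrative aside,
# not from the original answers): slice assignment replaces the list's
# contents without rebinding, so aliases are preserved.
arr = [1, 2, 3]
b = arr                  # alias, not a copy
arr[:] = [4, 5, 6]       # replace all elements; identity is preserved
print(b is arr, b)       # True [4, 5, 6]
```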
| stackoverflow | {
"language": "en",
"length": 416,
"provenance": "stackexchange_0000F.jsonl.gz:902612",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44660024"
} |
633fa8c6daf171937f036e3744ad711917d78080 | Stackoverflow Stackexchange
Q: Powershell - check if a CD is in CD-ROM drive Is this possible?
My first guess would be something like:
C:> Get-WmiObject Win32_CDROMDrive
But when I tried this, it only tells me Caption, Drive, Manufacturer, VolumeName
No information on whether or not there is a CD in the disc drive.
A: You can get this information by
(Get-WMIObject -Class Win32_CDROMDrive -Property *).MediaLoaded
You can see what properties are available for that WMI class by
Get-WMIObject -Class Win32_CDROMDrive -Property * | Get-Member
and more detailed documentation from
Get-WMIHelp -Class Win32_CDROMDrive
In general, you will find that liberal use of the Get-Help, Get-Member, Get-Command, and Get-WMIHelp cmdlets will provide you with a great deal of information, and possibly eliminate the need to ask questions like this here and wait for an answer that may or may not come.
| stackoverflow | {
"language": "en",
"length": 136,
"provenance": "stackexchange_0000F.jsonl.gz:902613",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44660027"
} |
d471790363db53b0fef94c562c7a06ec9406ecde | Stackoverflow Stackexchange
Q: Pandas left outer join exclusion How can I do a left outer join, excluding the intersection, in Pandas?
I have 2 pandas dataframes
df1 = pd.DataFrame(data = {'col1' : ['finance', 'finance', 'finance', 'accounting', 'IT'], 'col2' : ['az', 'bh', '', '', '']})
df2 = pd.DataFrame(data = {'col1' : ['finance', 'finance', 'finance', 'finance', 'finance'], 'col2' : ['', 'az', '', '', '']})
df1
col1 col2
0 finance az
1 finance bh
2 finance
3 accounting
4 IT
df2
col1 col2
0 finance
1 finance az
2 finance
3 finance
4 finance
As you can see the dataframe has blank values as well. I tried using the example and its not giving me the result I want.
common = df1.merge(df2,on=['col1','col2'])
df3=df1[(~df1.col1.isin(common.col1))&(~df1.col2.isin(common.col2))]
I want the output to look something like
col1 col2
3 accounting
4 IT
A: A one liner for this based on Bin's answer may be:
df=pd.merge(df1,df2[['col1']],on=['col1'],how="outer",indicator=True).query('_merge=="left_only"')
A: Pandas left outer join exclusion can be achieved by setting pandas merge's indicator=True. Then filter by the indicator in _merge column.
df=pd.merge(df1,df2[['col1']],on=['col1'],how="outer",indicator=True)
df=df[df['_merge']=='left_only']
# this following line is just formating
df = df.reset_index()[['col1', 'col2']]
Output:
col1 col2
0 accounting
1 IT
==================================
====The following is an example showing the mechanism====
df1 = pd.DataFrame({'key1': ['0', '1'],
'key2': [-1, -1],
'A': ['A0', 'A1'],
})
df2 = pd.DataFrame({'key1': ['0', '1'],
'key2': [1, -1],
'B': ['B0', 'B1']
})
df1
Output:
A key1 key2
0 A0 0 -1
1 A1 1 -1
df2
Output:
B key1 key2
0 B0 0 1
1 B1 1 -1
df=pd.merge(df1,df2,on=['key1','key2'],how="outer",indicator=True)
Output:
A key1 key2 B _merge
0 A0 0 -1 NaN left_only
1 A1 1 -1 B1 both
2 NaN 0 1 B0 right_only
With the above indicators in the _merge column, you can select rows that are in one dataframe but not in another.
df=df[df['_merge']=='left_only']
df
Output:
A key1 key2 B _merge
0 A0 0 -1 NaN left_only
A: This fails because you're independently checking for a match in col1 & col2, and excluding a match on either. The empty strings match the empty strings in the finance rows.
You'd want:
df3 = df1[(~df1.col1.isin(common.col1))|(~df1.col2.isin(common.col2))]
df3
Out[150]:
col1 col2
1 finance bh
3 accounting
4 IT
To get the rows in df1 not in df2 .
To get specifically
df3
col1 col2
3 accounting
4 IT
you might try just selecting those with a non-matching col1.
df3 = df1[~df1.col1.isin(df2.col1)]
df3
Out[172]:
col1 col2
3 accounting
4 IT
To independently check for a match in col1 & col2 and exclude a match on either while having NaNs compare unequal/never count as a match, you could use
df3 = df1[(~df1.col1.isin(common.col1)|df1.col1.isnull())&(~df1.col2.isin(common.col2)|df1.col2.isnull())]
df3
Out[439]:
col1 col2
3 accounting NaN
4 IT NaN
assuming you're working with actual NaNs, either None or np.nan, in your actual data, instead of empty strings as in this example. If the latter, you'll need to add
df1.replace('', np.nan, inplace=True)
df2.replace('', np.nan, inplace=True)
first.
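A runnable sketch tying the answers above together, on the question's data (assumes pandas is installed). The drop_duplicates() on the right-hand keys is my addition, not from the original answers: it prevents left rows from being repeated when the right table holds the same key many times.

```python
# Consolidated indicator-based left anti-join from the answers above.
# drop_duplicates() keeps the merge from multiplying left rows when
# the right table repeats a key.
import pandas as pd

df1 = pd.DataFrame({'col1': ['finance', 'finance', 'finance', 'accounting', 'IT'],
                    'col2': ['az', 'bh', '', '', '']})
df2 = pd.DataFrame({'col1': ['finance'] * 5,
                    'col2': ['', 'az', '', '', '']})

out = (df1.merge(df2[['col1']].drop_duplicates(), on='col1',
                 how='left', indicator=True)
          .query('_merge == "left_only"')
          .drop(columns='_merge')
          .reset_index(drop=True))
print(out)
```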
| stackoverflow | {
"language": "en",
"length": 473,
"provenance": "stackexchange_0000F.jsonl.gz:902618",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44660043"
} |
1ce2081fe2cfe3c480d24c315f9d637071c1706b | Stackoverflow Stackexchange
Q: IntelliJ is running old class files I am using IntelliJ for my project development (Java). Strangely, whenever I try to run the project after making some new changes, IntelliJ always runs old class files that were compiled for an older version of my project. I tried recompiling, rebuilding, tried Invalidate Caches / Restart, removed the project and opened it again, but nothing seems to work. Not able to figure out the reason, and now I am clueless what to do.
Any help would be appreciated. Thank you.
A: I've had the same issue and I haven't been able to consistently make it go away, however, here's some things I've tried that could help:
*
*delete .class files
*invalidate caches and restart
*check settings: sdk and source path imports of IDEA code might affect it
*delete and reinstall intellij
Or do what I've ultimately done, which is to create a new project and avoid spending 5 hours tinkering with IntelliJ to get it to run my code properly.
A: Follow these steps to let IntelliJ "forget" all old internal files:
*
*Close the running IntelliJ instance.
*Delete the .idea directory of your project.
*Open the project like a new project.
After that you have a fresh IntelliJ project that probably needs some configuration (as usual).
A: I had such a problem in IntelliJ with a Maven-enabled project.
Running the Maven clean phase (and more) had no effect;
any change to classes had no effect in the deployed project.
Finally I found the problem:
stale classes were hidden in this path:
{Project Path}\src\main\webapp\WEB-INF\classes\{my packages}
After removing these, the problem was gone.
Hope it was useful.
A: In my case it was a Maven issue within the IntelliJ settings. When I used the bundled Maven version, it gave the error it should have. I don't know why giving the actual path won't work but the bundled version will.
A: In the context of Java Parser, I needed to run mvn clean install -DskipTests after each change on the command line. Then, class changes were available in IntelliJ. No other way helped here.
| stackoverflow | {
"language": "en",
"length": 345,
"provenance": "stackexchange_0000F.jsonl.gz:902628",
"question_score": "13",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44660071"
} |
e6ce86730df80ed36dd4dbe955828ba5ad1d7517 | Stackoverflow Stackexchange
Q: AttributeError: while using monkeypatch of pytest src/mainDir/mainFile.py
contents of mainFile.py
import src.tempDir.tempFile as temp
data = 'someData'
def foo(self):
ans = temp.boo(data)
return ans
src/tempDir/tempFile.py
def boo(data):
ans = data
return ans
Now I want to test foo() from src/tests/test_mainFile.py and I want to mock temp.boo(data) method in foo() method
import src.mainDir.mainFile as mainFunc
testData = 'testData'
def test_foo(monkeypatch):
monkeypatch.setattr('src.tempDir.tempFile', 'boo', testData)
ans = mainFunc.foo()
assert ans == testData
but I get error
AttributeError: 'src.tempDir.tempFile' has no attribute 'boo'
I expect ans = testData.
I would like to know if I am correctly mocking my tempDir.boo() method or I should use pytest's mocker instead of monkeypatch.
A: You're telling monkeypatch to patch the attribute boo of the string object you pass in.
You'll either need to pass in a module like monkeypatch.setattr(tempFile, 'boo', testData), or pass the attribute as a string too (using the two-argument form), like monkeypatch.setattr('src.tempDir.tempFile.boo', testData).
A: My use case was was slightly different but should still apply. I wanted to patch the value of sys.frozen which is set when running an application bundled by something like Pyinstaller. Otherwise, the attribute does not exist. Looking through the pytest docs, the raising kwarg controls wether or not AttributeError is raised when the attribute does not already exist. (docs)
Usage Example
import sys
def test_frozen_func(monkeypatch):
monkeypatch.setattr(sys, 'frozen', True, raising=False)
# can use ('fq_import_path.sys.frozen', ...)
# if what you are trying to patch is imported in another file
assert sys.frozen
A: Update: mocking function calls can be done with monkeypatch.setattr('package.main.slow_fun', lambda: False) (see answer and comments in https://stackoverflow.com/a/44666743/3219667) and updated snippet below
I don't think this can be done with pytest's monkeypatch, but you can use the pytest-mock package. Docs: https://github.com/pytest-dev/pytest-mock
Quick example with the two files below:
# package/main.py
def slow_fun():
return True
def main_fun():
if slow_fun():
raise RuntimeError('Slow func returned True')
# tests/test_main.py
from package.main import main_fun
# Make sure to install pytest-mock so that the mocker argument is available
def test_main_fun(mocker):
mocker.patch('package.main.slow_fun', lambda: False)
main_fun()
# UPDATE: Alternative with monkeypatch
def test_main_fun_monkeypatch(monkeypatch):
monkeypatch.setattr('package.main.slow_fun', lambda: False)
main_fun()
Note: this also works if the functions are in different files
| stackoverflow | {
"language": "en",
"length": 351,
"provenance": "stackexchange_0000F.jsonl.gz:902672",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44660196"
} |
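A plain-Python sketch of what the working `monkeypatch.setattr` call in the record above does under the hood — the module object and functions here are hypothetical stand-ins for `src.tempDir.tempFile` and `mainFile.foo`, not the actual project files:

```python
import types

# Hypothetical stand-in for the src.tempDir.tempFile module from the record.
temp_file = types.ModuleType("tempFile")
temp_file.boo = lambda data: data  # original boo() just echoes its input

def foo(data):
    # mirrors mainFile.foo(), which calls temp.boo(data)
    return temp_file.boo(data)

# What monkeypatch.setattr(tempFile, 'boo', ...) effectively does:
original = temp_file.boo
temp_file.boo = lambda data: "testData"  # patch with a callable, not a bare string
assert foo("anything") == "testData"

# monkeypatch undoes the patch automatically at test teardown; done by hand here:
temp_file.boo = original
assert foo("x") == "x"
```

Note that the question patched `boo` with the string `testData` itself; even with the correct dotted-path form, calling `temp.boo(data)` would then fail with a `TypeError` because a string is not callable, so a replacement callable (e.g. `lambda data: testData`) is usually what you want.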
c7f4e21576f323f6993a838d00ebc7774a5ea778 | Stackoverflow Stackexchange
Q: How can I do ModelBinding with HttpTrigger in Azure Functions? I need to create an Azure Function that responds to a HTTP POST, and leverages the integrated model binding.
How can I modify this
[FunctionName("TokenPolicy")]
public static HttpResponseMessage Run([HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = "TokenPolicy/{IssuerID}/{SpecificationID}")]HttpRequestMessage req, string IssuerID, string specificationID, TraceWriter log)
{
log.Info("C# HTTP trigger function processed a request. TokenPolicy");
// Fetching the name from the path parameter in the request URL
return req.CreateResponse(HttpStatusCode.OK, "data " + specificationID);
}
in such a way that my client POST's the object, and I have normal ASP.NET style model binding?
A: Instead of using an HttpRequestMessage parameter, you can use a custom type. The binding will attempt to parse the request body as JSON and populate that object before calling the function. Some details here: https://learn.microsoft.com/azure/azure-functions/functions-bindings-http-webhook-trigger?tabs=csharp#payload
| Q: How can I do ModelBinding with HttpTrigger in Azure Functions? I need to create an Azure Function that responds to a HTTP POST, and leverages the integrated model binding.
How can I modify this
[FunctionName("TokenPolicy")]
public static HttpResponseMessage Run([HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = "TokenPolicy/{IssuerID}/{SpecificationID}")]HttpRequestMessage req, string IssuerID, string specificationID, TraceWriter log)
{
log.Info("C# HTTP trigger function processed a request. TokenPolicy");
// Fetching the name from the path parameter in the request URL
return req.CreateResponse(HttpStatusCode.OK, "data " + specificationID);
}
in such a way that my client POST's the object, and I have normal ASP.NET style model binding?
A: Instead of using an HttpRequestMessage parameter, you can use a custom type. The binding will attempt to parse the request body as JSON and populate that object before calling the function. Some details here: https://learn.microsoft.com/azure/azure-functions/functions-bindings-http-webhook-trigger?tabs=csharp#payload
A: Based on the documentation for HTTP triggers, you can simply accept your own object:
For a custom type (such as a POCO), Functions will attempt to parse
the request body as JSON to populate the object properties.
public class MyModel
{
public int Id { get; set; }
public string Name { get; set; }
}
[FunctionName("TokenPolicy")]
public static HttpResponseMessage Run([HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = "TokenPolicy/{IssuerID}/{SpecificationID}")]MyModel myObj, string IssuerID, string specificationID, TraceWriter log)
{
log.Info("C# HTTP trigger function processed a request. TokenPolicy");
// Do something your your object
return new HttpResponseMessage(HttpStatusCode.OK);
}
| stackoverflow | {
"language": "en",
"length": 229,
"provenance": "stackexchange_0000F.jsonl.gz:902709",
"question_score": "12",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44660314"
} |
1b829312010d2102f196f3c0c4e0da3bb5452cd6 | Stackoverflow Stackexchange
Q: How can I indicate that a postgresql plpython function argument can be any type? I'm using (or trying to) the following function:
CREATE FUNCTION replace_value(data JSON, key text, value anyelement)
RETURNS JSON AS $$
import json
d = json.loads(data)
d[key] = value
return json.dumps(d)
$$ LANGUAGE plpython3u;
This, as I did not expect, does not work. Postgres complains:
ERROR: PL/Python functions cannot accept type anyelement
Well... that's just silly, because native Python functions can accept anything of any type, since variables are just names for things.
And in this case, I could not care what the actual type of the value is, I just want to be able to replace it. How can I do such a thing in Postgres/PLPython?
A: Define the parameter as text and cast to text when calling the function.
| Q: How can I indicate that a postgresql plpython function argument can be any type? I'm using (or trying to) the following function:
CREATE FUNCTION replace_value(data JSON, key text, value anyelement)
RETURNS JSON AS $$
import json
d = json.loads(data)
d[key] = value
return json.dumps(d)
$$ LANGUAGE plpython3u;
This, as I did not expect, does not work. Postgres complains:
ERROR: PL/Python functions cannot accept type anyelement
Well... that's just silly, because native Python functions can accept anything of any type, since variables are just names for things.
And in this case, I could not care what the actual type of the value is, I just want to be able to replace it. How can I do such a thing in Postgres/PLPython?
A: Define the parameter as text and cast to text when calling the function.
A: I agree: the lack of anyelement is quite an inconvenient omission.
A workaround would be to overload your function with the various types you meet in practice (providing they are enumerable and not too many)?
In practice, for JSON, you might get away with text, float and bigint?
CREATE FUNCTION replace_value(data JSON, key text, value text)...
CREATE FUNCTION replace_value(data JSON, key text, value float)...
CREATE FUNCTION replace_value(data JSON, key text, value bigint)...
Rewriting your code n times might be tedious, but you could automate that, e.g. using python and psycopg2.
| stackoverflow | {
"language": "en",
"length": 226,
"provenance": "stackexchange_0000F.jsonl.gz:902723",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44660345"
} |
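The "automate the overloads" idea from the last answer can be sketched as plain SQL string generation — executing the statements would need a live connection (e.g. via psycopg2's `cursor.execute`), which is omitted here; the function body is copied from the question:

```python
# Builds one CREATE FUNCTION statement per concrete SQL type, as suggested
# in the answer above. Sending them to Postgres is left out of this sketch.
TEMPLATE = """\
CREATE FUNCTION replace_value(data JSON, key text, value {sql_type})
RETURNS JSON AS $$
import json
d = json.loads(data)
d[key] = value
return json.dumps(d)
$$ LANGUAGE plpython3u;"""

statements = [TEMPLATE.format(sql_type=t) for t in ("text", "float", "bigint")]
print(len(statements), "overloads generated")
```

Postgres picks the matching overload at call time, so callers never need to name the type explicitly.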
7cf579b2925aa19ee0ba0670f16319b274140db8 | Stackoverflow Stackexchange
Q: Using python_requires to require Python 2.7 or 3.2+ How do I use python_requires classifier in setup.py to require Python 2.7.* or 3.2+?
I have tried many configurations, including this one: ~=2.7,==3,!=3.0,!=3.1,<4 but none have worked
A: This argument for setuptools uses the PEP440 version specifiers spec, so you can ask for:
python_requires='>=2.7,!=3.0.*,!=3.1.*'
The commas (,) are equivalent to the logical AND operator.
Note that the metadata generated is only respected by pip>=9.0.0 (Nov 2016).
| Q: Using python_requires to require Python 2.7 or 3.2+ How do I use python_requires classifier in setup.py to require Python 2.7.* or 3.2+?
I have tried many configurations, including this one: ~=2.7,==3,!=3.0,!=3.1,<4 but none have worked
A: This argument for setuptools uses the PEP440 version specifiers spec, so you can ask for:
python_requires='>=2.7,!=3.0.*,!=3.1.*'
The commas (,) are equivalent to the logical AND operator.
Note that the metadata generated is only respected by pip>=9.0.0 (Nov 2016).
| stackoverflow | {
"language": "en",
"length": 74,
"provenance": "stackexchange_0000F.jsonl.gz:902750",
"question_score": "30",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44660448"
} |
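For reference, the accepted specifier wired into a minimal setup.py might look like the sketch below — the package name and version are placeholders, not from the question; only the `python_requires` value comes from the answer:

```python
# Hypothetical minimal setup.py; name and version are placeholders.
from setuptools import setup

setup(
    name="example-pkg",          # placeholder
    version="0.1.0",             # placeholder
    # PEP 440: commas mean logical AND — any 2.7.x, or 3.x except 3.0.*/3.1.*
    python_requires=">=2.7, !=3.0.*, !=3.1.*",
)
```

As the answer notes, this metadata is only enforced by pip 9.0.0 or newer; older pip versions will happily install the package on an excluded interpreter.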
9ed6f760f078597ac3cdaf304ec023ba1c3ac3cb | Stackoverflow Stackexchange
Q: Gitlab can't open Directory I have two subprojects in my gitlab repository, client and server, but I can't open the server directory. What does the symbol in the screenshot mean? I suppose it means it is a separate git repository, but I checked the server directory and there was no .git directory in it.
A: This folder is a git submodule. If you have committed and pushed the .gitmodules file, it should point to the specified repository in this file, for instance:
[submodule "server"]
path = server
url = [email protected]:username/somerepo.git
Note that if you click on the folder and it doesn't redirect to the remote repository, it could mean:
*
*the external repo hasn't been added as a submodule (git submodule) but has been cloned into your repo (git clone)
*you haven't committed/pushed the .gitmodules file but you did push the submodule
*you have deleted the submodule locally but you didn't remove it from your remote, so it's still present on GitLab
If you want to remove this submodule check this post
| Q: Gitlab can't open Directory I have two subprojects in my gitlab repository, client and server, but I can't open the server directory. What does the symbol in the screenshot mean? I suppose it means it is a separate git repository, but I checked the server directory and there was no .git directory in it.
A: This folder is a git submodule. If you have committed and pushed the .gitmodules file, it should point to the specified repository in this file, for instance:
[submodule "server"]
path = server
url = [email protected]:username/somerepo.git
Note that if you click on the folder and it doesn't redirect to the remote repository, it could mean:
*
*the external repo hasn't been added as a submodule (git submodule) but has been cloned into your repo (git clone)
*you haven't committed/pushed the .gitmodules file but you did push the submodule
*you have deleted the submodule locally but you didn't remove it from your remote, so it's still present on GitLab
If you want to remove this submodule check this post
| stackoverflow | {
"language": "en",
"length": 177,
"provenance": "stackexchange_0000F.jsonl.gz:902751",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44660449"
} |
e02d625ccb2f42daf1f2ab5d26bf6d498cf1d843 | Stackoverflow Stackexchange
Q: How to disable HTTP/2 on IIS of Windows Server 2016 We are running into some issues that seem to be affected by http2 and I want to turn it off temporarily to troubleshoot. I tried the registry keys outlined in this question but that did not help with Windows Server 2016.
How to disable HTTP/2 on IIS of Windows 10
A: Another solution, if you are only testing, is run Chrome without http2 enabled. from start run, chrome --disable-http2
Also, apparently a fix is coming, we just have to be patient for the rollout. See THIS article
| Q: How to disable HTTP/2 on IIS of Windows Server 2016 We are running into some issues that seem to be affected by http2 and I want to turn it off temporarily to troubleshoot. I tried the registry keys outlined in this question but that did not help with Windows Server 2016.
How to disable HTTP/2 on IIS of Windows 10
A: Another solution, if you are only testing, is run Chrome without http2 enabled. from start run, chrome --disable-http2
Also, apparently a fix is coming, we just have to be patient for the rollout. See THIS article
A: *
*Start → regedit
*Navigate to the folder/path: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters
*Under the Parameters folder, right-click the white-space, add 2 new DWORD (32-bit) values:
*
*EnableHttp2Tls
*EnableHttp2Cleartext
*Ensure both new values have been set to 0 (disabled) by right-clicking the value and clicking "Modify..."
*Restart the OS.
| stackoverflow | {
"language": "en",
"length": 143,
"provenance": "stackexchange_0000F.jsonl.gz:902812",
"question_score": "11",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44660634"
} |
0bf8bbcc194f5dcc848998f1bdb56fb158e3b618 | Stackoverflow Stackexchange
Q: Why is ParentComponent.childContextTypes and ChildComponent.contextTypes required? I am in the middle of learning React, and after reading about contexts, I keep wondering about this. Why is ParentComponent.childContextTypes and ChildComponent.contextTypes required for the child component to be able to receive contexts?
A: The parent component passes context down to its subtree, so it must declare the types of the context it provides via childContextTypes.
The child component receives context from its ancestors. Since several parent components may provide context under the same names, the child declares contextTypes to identify which context entries it wants to receive.
The Context API is a powerful part of React, so reach for it only when you actually need it.
| Q: Why is ParentComponent.childContextTypes and ChildComponent.contextTypes required? I am in the middle of learning React, and after reading about contexts, I keep wondering about this. Why is ParentComponent.childContextTypes and ChildComponent.contextTypes required for the child component to be able to receive contexts?
A: The parent component passes context down to its subtree, so it must declare the types of the context it provides via childContextTypes.
The child component receives context from its ancestors. Since several parent components may provide context under the same names, the child declares contextTypes to identify which context entries it wants to receive.
The Context API is a powerful part of React, so reach for it only when you actually need it.
| stackoverflow | {
"language": "en",
"length": 115,
"provenance": "stackexchange_0000F.jsonl.gz:902816",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44660652"
} |
44ff106198082af305f3c79a934257ff9eae718d | Stackoverflow Stackexchange
Q: fetch and Headers are undefined in IE11.0.9600 with babel-polyfill when process.env.NODE_ENV=='production' When process.env.NODE_ENV=='development' - it is OK!
But our production build failed in IE 11 (11.0.9600).
All work fine in chrome 55.
devDependencies:
...
babel-core: "6.22.0",
babel-eslint: "^7.0.0",
babel-loader: "^6.2.5",
babel-preset-env: "^1.5.2",
babel-preset-es2015: "^6.16.0",
babel-preset-es2016: "^6.22.0",
babel-preset-es2017: "^6.16.0",
babel-preset-react: "^6.16.0",
babel-preset-stage-0: "^6.22.0"
...
dependencies:
...
babel-polyfill: "^6.16.0"
...
.babelrc:
{
"presets": [
"react",
["env", {
"useBuiltIns": true
}],
"stage-0"
]
}
Try "useBuiltIns": false, es2016, es2015, es2017 presets. Nothing changes.
index.js:
"use strict";
import 'babel-polyfill'
...
webpack.config module.exports.entry:
vendor: ['babel-polyfill', 'immutable', 'react', 'react-dom', ...],
...
bundle: [path.resolve(__dirname, srcPath + "index.js")]
vendor is the first script in index.html.
Typing _babelPolyfill in ie console return true.
But Headers, fetch are undefined...
Why process.env.NODE_ENV=='production' broke my app in IE11?
How to fix my config?
A: core-js does not provide polyfills for Headers() and fetch, so babel-polyfill doesn't either.
Use one of the fetch polyfills:
*
*whatwg-fetch - polyfill with browser-only support: https://github.com/github/fetch
*isomorphic-fetch - polyfill, based on whatwg-fetch, with Node and browser support
For more info:
https://github.com/zloirock/core-js
What is the difference between isomorphic-fetch and fetch?
| Q: fetch and Headers are undefined in IE11.0.9600 with babel-polyfill when process.env.NODE_ENV=='production' When process.env.NODE_ENV=='development' - it is OK!
But our production build failed in IE 11 (11.0.9600).
All work fine in chrome 55.
devDependencies:
...
babel-core: "6.22.0",
babel-eslint: "^7.0.0",
babel-loader: "^6.2.5",
babel-preset-env: "^1.5.2",
babel-preset-es2015: "^6.16.0",
babel-preset-es2016: "^6.22.0",
babel-preset-es2017: "^6.16.0",
babel-preset-react: "^6.16.0",
babel-preset-stage-0: "^6.22.0"
...
dependencies:
...
babel-polyfill: "^6.16.0"
...
.babelrc:
{
"presets": [
"react",
["env", {
"useBuiltIns": true
}],
"stage-0"
]
}
Try "useBuiltIns": false, es2016, es2015, es2017 presets. Nothing changes.
index.js:
"use strict";
import 'babel-polyfill'
...
webpack.config module.exports.entry:
vendor: ['babel-polyfill', 'immutable', 'react', 'react-dom', ...],
...
bundle: [path.resolve(__dirname, srcPath + "index.js")]
vendor is the first script in index.html.
Typing _babelPolyfill in ie console return true.
But Headers, fetch are undefined...
Why process.env.NODE_ENV=='production' broke my app in IE11?
How to fix my config?
A: core-js does not provide polyfills for Headers() and fetch, so babel-polyfill doesn't either.
Use one of the fetch polyfills:
*
*whatwg-fetch - polyfill with browser-only support: https://github.com/github/fetch
*isomorphic-fetch - polyfill, based on whatwg-fetch, with Node and browser support
For more info:
https://github.com/zloirock/core-js
What is the difference between isomorphic-fetch and fetch?
| stackoverflow | {
"language": "en",
"length": 181,
"provenance": "stackexchange_0000F.jsonl.gz:902916",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44660928"
} |
3211dd2be7e797d528002094b957b437fcc985a3 | Stackoverflow Stackexchange
Q: Why does java stream.count() return a long? Why doesn't a stream.count() return an int?
I understand that I can easily convert the long to an int by casting,
return (int) players.stream().filter(Player::isActive).count();
but why would a java stream.count() return a long instead of an int?
A: When Java came out in early 1996, common PCs had 8 to 16 MB of memory. Since both arrays and collections were closely tied to memory size, using int to represent element counts seemed natural, because it was sufficient to address an array of ints that is 4 GB in size - a size gigantic even for hard drives in 1996, let alone RAM. Hence, using long instead of int for collection sizes would seem wasteful at the time.
Although int size may be a limiting factor at times, Java designers cannot change it to long, because it would be a breaking change.
Unlike Java collections, streams could have potentially unlimited number of elements, and they carry no compatibility considerations. Therefore, using long with its wider range of values seems like a very reasonable choice.
| Q: Why does java stream.count() return a long? Why doesn't a stream.count() return an int?
I understand that I can easily convert the long to an int by casting,
return (int) players.stream().filter(Player::isActive).count();
but why would a java stream.count() return a long instead of an int?
A: When Java came out in early 1996, common PCs had 8 to 16 MB of memory. Since both arrays and collections were closely tied to memory size, using int to represent element counts seemed natural, because it was sufficient to address an array of ints that is 4 GB in size - a size gigantic even for hard drives in 1996, let alone RAM. Hence, using long instead of int for collection sizes would seem wasteful at the time.
Although int size may be a limiting factor at times, Java designers cannot change it to long, because it would be a breaking change.
Unlike Java collections, streams could have potentially unlimited number of elements, and they carry no compatibility considerations. Therefore, using long with its wider range of values seems like a very reasonable choice.
A: This statement
players.stream().filter(Player::isActive).count();
is equivalent to:
players.stream().filter(Player::isActive).collect(Collectors.counting());
This still returns a long because Collectors.counting() is implemented as
reducing(0L, e -> 1L, Long::sum)
Returning an int can be accomplished with the following:
players.stream().filter(Player::isActive).collect(Collectors.reducing(0, e -> 1, Integer::sum));
This form can be used in groupingBy statement
Map<Player, Integer> playerCount = players.stream().filter(Player::isActive).collect(Collectors.groupingBy(Function.identity(), Collectors.reducing(0, e -> 1, Integer::sum)));
A: Well, simply because long is the largest (64-bit) primitive integer type that Java has.
The other way would be two counts:
countLong/countInt
and that would look really weird.
int fits in a long, but not the other way around. Anything you want to do with int you can fit in a long, so why the need to provide both?
| stackoverflow | {
"language": "en",
"length": 293,
"provenance": "stackexchange_0000F.jsonl.gz:902954",
"question_score": "38",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44661078"
} |
88d4eb6c37ea22a23d0451557f86c7e5ebd99dd7 | Stackoverflow Stackexchange
Q: How do I reuse a parameter with a Spring-Data-JPA Repository? In looking at the Query Creation for the Spring Data JPA Repositories, I'm wondering how I would reuse a parameter. For example, how would I name the method if I wanted to do something like:
@Query("select c from #{#entityName} c where c.lower <= ?1 and c.upper >= ?1")
E findByConversionFor(Double amount);
Can that query be converted to a SpEL method name (to be used by the query builder)?
It seems like a kludge to require the same value to be passed twice:
E findByLowerLessThanOrEqualAndUpperGreaterThanOrEqual(Double a, Double b); // where a==b
A: Just mark your parameter with @Param("amount") and then you will be able to use it by name:
@Query("select c from #{#entityName} c where c.lower <= :amount and c.upper >= :amount")
| Q: How do I reuse a parameter with a Spring-Data-JPA Repository? In looking at the Query Creation for the Spring Data JPA Repositories, I'm wondering how I would reuse a parameter. For example, how would I name the method if I wanted to do something like:
@Query("select c from #{#entityName} c where c.lower <= ?1 and c.upper >= ?1")
E findByConversionFor(Double amount);
Can that query be converted to a SpEL method name (to be used by the query builder)?
It seems like a kludge to require the same value to be passed twice:
E findByLowerLessThanOrEqualAndUpperGreaterThanOrEqual(Double a, Double b); // where a==b
A: Just mark your parameter with @Param("amount") and then you will be able to use it by name:
@Query("select c from #{#entityName} c where c.lower <= :amount and c.upper >= :amount")
| stackoverflow | {
"language": "en",
"length": 130,
"provenance": "stackexchange_0000F.jsonl.gz:902969",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44661123"
} |
ae4de55060f6fb1d72d61be95404a13f41c3f1a3 | Stackoverflow Stackexchange
Q: Why does Apple Push Notification Authentication Key (Sandbox & Production) not appear I'm trying to set up my notifications for firebase, and I have it set up already using a .p12 file, but I've been reading that it is now recommended to start using the .p8 file, which is the auth key. But when I go into my developer account for Apple I don't see that option anywhere, nor do I even see an "APNs Auth Key" option in my certificates section
A: I think now you can generate .p8 in Key section in "Certificates, Identifiers & Profiles".
press continue
press confirm
Now you can download your .p8 file.
| Q: Why does Apple Push Notification Authentication Key (Sandbox & Production) not appear I'm trying to set up my notifications for firebase, and I have it set up already using a .p12 file, but I've been reading that it is now recommended to start using the .p8 file, which is the auth key. But when I go into my developer account for Apple I don't see that option anywhere, nor do I even see an "APNs Auth Key" option in my certificates section
A: I think now you can generate .p8 in Key section in "Certificates, Identifiers & Profiles".
press continue
press confirm
Now you can download your .p8 file.
| stackoverflow | {
"language": "en",
"length": 111,
"provenance": "stackexchange_0000F.jsonl.gz:902985",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44661166"
} |
51149cedb3bc452feb7e1bdb07d40afc68bf2562 | Stackoverflow Stackexchange
Q: Unable to locate element using selenium webdriver in python I want to do some automation testing on a website called http://elegalix.allahabadhighcourt.in. I am using the following python code to click a button called "Advanced" on the above website:
Code#
from selenium import webdriver
driver = webdriver.Chrome('./chromedriver')
driver.get('http://elegalix.allahabadhighcourt.in')
driver.set_page_load_timeout(20)
driver.maximize_window()
driver.find_element_by_xpath("//input[@value='Advanced']").click()
Error#
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: //input[@value='Advanced']
P.S. I am a newbie to python programming. I have tried different variations of xpath and other find_element_by methods, but none seem to work on this website... I'm experiencing a similar error in the Firefox browser as well...
A: It's because the element you are looking for is inside a frame, switch to the frame first and then search for the element
from selenium import webdriver
driver = webdriver.Chrome('./chromedriver')
driver.get('http://elegalix.allahabadhighcourt.in')
driver.set_page_load_timeout(20)
driver.maximize_window()
driver.switch_to.frame(driver.find_element_by_name('sidebarmenu'))
driver.find_element_by_xpath("//input[@value='Advanced']").click()
driver.switch_to.default_content()
| Q: Unable to locate element using selenium webdriver in python I want to do some automation testing on a website called http://elegalix.allahabadhighcourt.in. I am using the following python code to click a button called "Advanced" on the above website:
Code#
from selenium import webdriver
driver = webdriver.Chrome('./chromedriver')
driver.get('http://elegalix.allahabadhighcourt.in')
driver.set_page_load_timeout(20)
driver.maximize_window()
driver.find_element_by_xpath("//input[@value='Advanced']").click()
Error#
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: //input[@value='Advanced']
P.S. I am a newbie to python programming. I have tried different variations of xpath and other find_element_by methods, but none seem to work on this website... I'm experiencing a similar error in the Firefox browser as well...
A: It's because the element you are looking for is inside a frame, switch to the frame first and then search for the element
from selenium import webdriver
driver = webdriver.Chrome('./chromedriver')
driver.get('http://elegalix.allahabadhighcourt.in')
driver.set_page_load_timeout(20)
driver.maximize_window()
driver.switch_to.frame(driver.find_element_by_name('sidebarmenu'))
driver.find_element_by_xpath("//input[@value='Advanced']").click()
driver.switch_to.default_content()
A: I faced this issue and the problem was the window size. When the window is small, elements shift position, so I call driver.maximize_window() first and then use the driver.find() method, and it works properly.
A: After the command to open the link add this:
driver.maximize_window()
| stackoverflow | {
"language": "en",
"length": 183,
"provenance": "stackexchange_0000F.jsonl.gz:902988",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44661187"
} |
aadfc54da0b39e8bfeba864d4b9d47faade9aac3 | Stackoverflow Stackexchange
Q: Spacy to extract specific noun phrase Can I use spacy in python to find NP with specific neighbors? I want noun phrases from my text that have a verb before and after them.
A: From https://spacy.io/usage/linguistic-features#dependency-parse
You can use Noun chunks.
Noun chunks are "base noun phrases" – flat phrases that have a noun as their head. You can think of noun chunks as a noun plus the words describing the noun – for example, "the lavish green grass" or "the world’s largest tech fund". To get the noun chunks in a document, simply iterate over Doc.noun_chunks.
In:
import spacy
nlp = spacy.load('en_core_web_sm')
doc = nlp(u"Autonomous cars shift insurance liability toward manufacturers")
for chunk in doc.noun_chunks:
print(chunk.text)
Out:
Autonomous cars
insurance liability
manufacturers
| Q: Spacy to extract specific noun phrase Can I use spacy in python to find NP with specific neighbors? I want noun phrases from my text that have a verb before and after them.
A: From https://spacy.io/usage/linguistic-features#dependency-parse
You can use Noun chunks.
Noun chunks are "base noun phrases" – flat phrases that have a noun as their head. You can think of noun chunks as a noun plus the words describing the noun – for example, "the lavish green grass" or "the world’s largest tech fund". To get the noun chunks in a document, simply iterate over Doc.noun_chunks.
In:
import spacy
nlp = spacy.load('en_core_web_sm')
doc = nlp(u"Autonomous cars shift insurance liability toward manufacturers")
for chunk in doc.noun_chunks:
print(chunk.text)
Out:
Autonomous cars
insurance liability
manufacturers
A: *
*You can merge the noun phrases ( so that they do not get tokenized seperately).
*Analyse the dependency parse tree, and see the POS of neighbouring tokens.
>>> import spacy
>>> nlp = spacy.load('en')
>>> sent = u'run python program run, to make this work'
>>> parsed = nlp(sent)
>>> list(parsed.noun_chunks)
[python program]
>>> for noun_phrase in list(parsed.noun_chunks):
... noun_phrase.merge(noun_phrase.root.tag_, noun_phrase.root.lemma_, noun_phrase.root.ent_type_)
...
python program
>>> [(token.text,token.pos_) for token in parsed]
[(u'run', u'VERB'), (u'python program', u'NOUN'), (u'run', u'VERB'), (u',', u'PUNCT'), (u'to', u'PART'), (u'make', u'VERB'), (u'this', u'DET'), (u'work', u'NOUN')]
*By analysing the POS of adjacent tokens, you can get your desired noun phrases.
*A better approach would be to analyse the dependency parse tree, and see the lefts and rights of the noun phrase, so that even if there is a punctuation or other POS tag between the noun phrase and verb, you can increase your search coverage
A: If you want to re-tokenize using merge phrases, I prefer this (rather than noun chunks):
import spacy
nlp = spacy.load('en_core_web_sm')
nlp.add_pipe(nlp.create_pipe('merge_noun_chunks'))
doc = nlp(u"Autonomous cars shift insurance liability toward manufacturers")
for token in doc:
print(token.text)
and the output will be :
Autonomous cars
shift
insurance liability
toward
manufacturers
I chose this way because each merged token keeps its token properties for further processing :)
| stackoverflow | {
"language": "en",
"length": 336,
"provenance": "stackexchange_0000F.jsonl.gz:902992",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44661200"
} |
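The "verb before and after" filter from the original question can be sketched in plain Python over the merged (token, POS) pairs shown in the second answer — the filtering step itself needs no spaCy install; the token list below is copied from that answer's output:

```python
# (text, POS) pairs from the merged-noun-chunk example in the record above.
tokens = [("run", "VERB"), ("python program", "NOUN"), ("run", "VERB"),
          (",", "PUNCT"), ("to", "PART"), ("make", "VERB"),
          ("this", "DET"), ("work", "NOUN")]

def nps_between_verbs(tokens):
    """Return noun chunks whose immediate neighbours are both verbs."""
    hits = []
    for i in range(1, len(tokens) - 1):
        text, pos = tokens[i]
        if pos == "NOUN" and tokens[i - 1][1] == "VERB" and tokens[i + 1][1] == "VERB":
            hits.append(text)
    return hits

print(nps_between_verbs(tokens))  # expected: ['python program']
```

With real documents you would build `tokens` from the merged `doc` instead, e.g. `[(t.text, t.pos_) for t in doc]`, and could widen the window to skip punctuation as the third answer suggests.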
5fbcbadce97bab9f5cc4cd7d3bae6147f85937be | Stackoverflow Stackexchange
Q: Visual Studio Code: Use Git Bash (windows) I have found questions that give direction on using Git Bash with full blown Visual Studio, but I've not been able to locate any directions on how one might be able to set the built-in Terminal that Visual Studio Code offers to be Git Bash. Is this possible?
A: There is this setting for your workspace:
// The path of the shell that the terminal uses on Windows. When using shells shipped with Windows (cmd, PowerShell or Bash on Ubuntu), prefer C:\Windows\sysnative over C:\Windows\System32 to use the 64-bit versions.
"terminal.integrated.shell.windows": "C:\\Windows\\sysnative\\bash.exe",
Assuming you already have it installed. I found this info here:
installing and setting up git bash in vscode
| Q: Visual Studio Code: Use Git Bash (windows) I have found questions that give direction on using Git Bash with full blown Visual Studio, but I've not been able to locate any directions on how one might be able to set the built-in Terminal that Visual Studio Code offers to be Git Bash. Is this possible?
A: There is this setting for your workspace:
// The path of the shell that the terminal uses on Windows. When using shells shipped with Windows (cmd, PowerShell or Bash on Ubuntu), prefer C:\Windows\sysnative over C:\Windows\System32 to use the 64-bit versions.
"terminal.integrated.shell.windows": "C:\\Windows\\sysnative\\bash.exe",
Assuming you already have it installed. I found this info here:
installing and setting up git bash in vscode
A: The accepted answer doesn't answer the original question: How to use git-bash in VS Code.
"terminal.integrated.shell.windows": "C:\\Git\\bin\\bash.exe"
Just replace the path to the C:\Git folder to the actual path of your git installation.
A: I'm not sure if setting the built-in terminal to Git Bash is possible. Opening a Git Bash terminal in your projects root directory should suffice.
You can do this by right-clicking within the opened directory and clicking 'Git Bash Here'.
| stackoverflow | {
"language": "en",
"length": 194,
"provenance": "stackexchange_0000F.jsonl.gz:903004",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44661227"
} |
0e8d3555e7d3305d1e1a35867b4c44420e0c7735 | Stackoverflow Stackexchange
Q: r shiny navbarpage keep navigation bar at top of screen In R shiny, if you had a really long single page, is there any way to keep the navigation bar (from navbarPage) at the top of the screen even while you're scrolling down?
Thanks for any help
A: Yes, this is possible. You should follow the navbarPage reference regarding this:
Shiny reference: navbarPage
Bottom line: you have to set the position argument of navbarPage():
navbarPage(title, ..., position = "fixed-top")
"fixed-top" will pin the navbar to the top.
| Q: r shiny navbarpage keep navigation bar at top of screen In R shiny, if you had a really long single page, is there any way to keep the navigation bar (from navbarPage) at the top of the screen even while you're scrolling down?
Thanks for any help
A: Yes, this is possible. You should follow the navbarPage reference regarding this:
Shiny reference: navbarPage
Bottom line: you have to set the position argument of navbarPage():
navbarPage(title, ..., position = "fixed-top")
"fixed-top" will pin the navbar to the top.
| stackoverflow | {
"language": "en",
"length": 85,
"provenance": "stackexchange_0000F.jsonl.gz:903008",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44661240"
} |
4ef928718016fbbaf26a71c75b881847a2451831 | Stackoverflow Stackexchange
Q: How do I read an EDN file from ClojureScript running on NodeJS? I have a simple data file in EDN format I need to read in a ClojureScript cli app running on NodeJS, but none of the relevant core libraries from Clojure seem to be available (core.java.io/read, clojure.edn/read, etc.)
What should I be using instead?
A: You could use:
(ns app.core
(:require [cljs.reader :as reader]))
(def fs (js/require "fs"))
(defn read-edn [path f]
(.readFile fs path "utf8" (fn [err data] (f (reader/read-string data)))))
(defn process [coll])
(read-edn "/tmp/x.clj" process)
In the example above, process would receive the data structure that was read from the file. You would need to implement process and add error handling to read-edn.
| Q: How do I read an EDN file from ClojureScript running on NodeJS? I have a simple data file in EDN format I need to read in a ClojureScript cli app running on NodeJS, but none of the relevant core libraries from Clojure seem to be available (core.java.io/read, clojure.edn/read, etc.)
What should I be using instead?
A: You could use:
(ns app.core
(:require [cljs.reader :as reader]))
(def fs (js/require "fs"))
(defn read-edn [path f]
(.readFile fs path "utf8" (fn [err data] (f (reader/read-string data)))))
(defn process [coll])
(read-edn "/tmp/x.clj" process)
In the example above, process would receive the data structure that was read from the file. You would need to implement process and add error handling to read-edn.
| stackoverflow | {
"language": "en",
"length": 118,
"provenance": "stackexchange_0000F.jsonl.gz:903061",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44661385"
} |
4939ebcfce323c23eb0d91257c42fb61f66bde75 | Stackoverflow Stackexchange
Q: Understanding findIndex in Javascript / Typescript I am working on a piece of JS code. In a tutorial I found a piece of code I don't understand:
const position = this.quotes.findIndex((quoteEl: Quote) => {
return quoteEl.id == quote.id;
});
I think the person who wrote the code stuffed a lot of different pieces into this line.
Can somebody help me bring that into a more "easy to understand" form?
For example, the argument of the findIndex method can probably written in a separate function, right?
Thanks,
Benjamin
A: findIndex calls the passed function with each element of the array and returns the index of the first element that returned true, or -1 if none did.
This is your callback function
(quoteEl: Quote) => {
return quoteEl.id == quote.id;
}
| Q: Understanding findIndex in Javascript / Typescript I am working on a piece of JS code. In a tutorial I found a piece of code I don't understand:
const position = this.quotes.findIndex((quoteEl: Quote) => {
return quoteEl.id == quote.id;
});
I think the person who wrote the code stuffed a lot of different pieces into this line.
Can somebody help me bring that into a more "easy to understand" form?
For example, the argument of the findIndex method can probably written in a separate function, right?
Thanks,
Benjamin
A: findIndex calls the passed function with each element of the array and returns the index of the first element that returned true, or -1 if none did.
This is your callback function
(quoteEl: Quote) => {
return quoteEl.id == quote.id;
}
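The same behaviour can be restated as a tiny standalone function. As a sketch, here is a Python analogue of findIndex (a hypothetical helper, not part of any library) that makes the "index of the first match, else -1" contract explicit:

```python
def find_index(items, predicate):
    """Return the index of the first item for which predicate(item)
    is truthy, or -1 if no item matches (mirroring Array.findIndex)."""
    for index, item in enumerate(items):
        if predicate(item):
            return index
    return -1

# The callback is tried element by element until it returns a truthy value
position = find_index([{"id": 1}, {"id": 7}], lambda q: q["id"] == 7)  # 1
```

So the original line simply asks: at what position in this.quotes does an element have the same id as quote?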
| stackoverflow | {
"language": "en",
"length": 130,
"provenance": "stackexchange_0000F.jsonl.gz:903090",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44661490"
} |
6ddd1ea1326d19813defc4dff4b75a4e3dbb309d | Stackoverflow Stackexchange
Q: React native animation scrollview onScroll event not working with external method I am making a collapsing toolbar in React Native and I need to stop the animation when the Animated.ScrollView contentOffset.y equals 240. If I put any condition on it or call Animated.event in an external function, it doesn't work.
The Animated.Value.stopAnimation() doesn't work either.
This works:
<Animated.ScrollView
scrollEventThrottle={1}
onScroll={
Animated.event(
[{nativeEvent: {contentOffset: {y: this.state.scrollY}}}],
{useNativeDriver: true}
)
}
>
...
This doesn't work:
handlerScroll() {
Animated.event(
[{nativeEvent: {contentOffset: {y: this.state.scrollY}}}]
{useNativeDriver: true}
)
}
...
render() {
return(
<Animated.ScrollView
scrollEventThrottle={1}
onScroll={this.handlerScroll.bind(this)}
>
)
}
...
and this doesn't work either
<Animated.ScrollView
scrollEventThrottle={1}
onScroll={
this.state.canScroll &&
Animated.event(
[{nativeEvent: {contentOffset: {y: this.state.scrollY}}}],
{useNativeDriver: true}
)
}
>
...
I don't know what more I can use to stop my animation.
I need to make this effect:
A: onScroll= {Animated.event(
[{ nativeEvent: { contentOffset: { y: this.state.scrollY } } }],
{
useNativeDriver: true,
listener: event => {
handlerScroll(event);
},
},
)}
see https://reactnative.dev/docs/animated#event
| Q: React native animation scrollview onScroll event not working with external method I am making a collapsing toolbar in React Native and I need to stop the animation when the Animated.ScrollView contentOffset.y equals 240. If I put any condition on it or call Animated.event in an external function, it doesn't work.
The Animated.Value.stopAnimation() doesn't work either.
This works:
<Animated.ScrollView
scrollEventThrottle={1}
onScroll={
Animated.event(
[{nativeEvent: {contentOffset: {y: this.state.scrollY}}}],
{useNativeDriver: true}
)
}
>
...
This doesn't work:
handlerScroll() {
Animated.event(
[{nativeEvent: {contentOffset: {y: this.state.scrollY}}}]
{useNativeDriver: true}
)
}
...
render() {
return(
<Animated.ScrollView
scrollEventThrottle={1}
onScroll={this.handlerScroll.bind(this)}
>
)
}
...
and this doesn't work either
<Animated.ScrollView
scrollEventThrottle={1}
onScroll={
this.state.canScroll &&
Animated.event(
[{nativeEvent: {contentOffset: {y: this.state.scrollY}}}],
{useNativeDriver: true}
)
}
>
...
I don't know what more I can use to stop my animation.
I need to make this effect:
A: onScroll= {Animated.event(
[{ nativeEvent: { contentOffset: { y: this.state.scrollY } } }],
{
useNativeDriver: true,
listener: event => {
handlerScroll(event);
},
},
)}
see https://reactnative.dev/docs/animated#event
A: Instead of stopping scroll event mapping, why not use interpolate for your animation with extrapolate set to 'clamp'? This will stop your animation from going beyond the bounds of input and output values.
Not sure what styles you’re trying to animate but for the sake of showing an example let’s say it was a translateY transform:
// onScroll map data to Animated value
onScroll={Animated.event(
[{ nativeEvent: { contentOffset: { y: this.state.scrollY } } }],
{ useNativeDriver: true }
)}
<Animated.View
style={{
transform: [{
translateY: this.state.scrollY.interpolate({
inputRange: [0, 240],
outputRange: [0, -160],
extrapolate: 'clamp' // clamp so translateY can’t go beyond -160
})
}]
}}
>
...
</Animated.View>
| stackoverflow | {
"language": "en",
"length": 266,
"provenance": "stackexchange_0000F.jsonl.gz:903113",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44661557"
} |
8c34bc17ed1aed30d11529dd953b5a34bc37d128 | Stackoverflow Stackexchange
Q: Binding redirect not redirecting? I've run into an issue where I'm getting an error about something trying to load an old version of a dll that is no longer even on the machine.
Could not load file or assembly 'Newtonsoft.Json, Version=6.0.0.0,
Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed' or one of its
dependencies. The located assembly's manifest definition does not
match the assembly reference. (Exception from HRESULT: 0x80131040)
I already had a redirect in the webconfig to deal with this:
<dependentAssembly>
<assemblyIdentity name="NewtonSoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
<bindingRedirect oldVersion="0.0.0.0-10.0.0.0" newVersion="10.0.0.0" />
</dependentAssembly>
There are no references to the 6.0.0.0 build in the solution. A dependency perhaps? If so I have no idea how to get the run time to tell me who the guilty part is.
Why is this still faulting?
A: Turns out the answer was right in front of me. The assemblyBinding tag has an appliesTo attribute that specifies which versions should be redirected per .Net framework version.
assemblyBinding appliesTo="v2.0.50727"
For some reason it was set to v2.0 while the application is running v4.0, so the redirects were not applying. Removing the attribute corrects the issue.
<runtime>
<assemblyBinding>
<dependentAssembly>
<assemblyIdentity name="NewtonSoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
<bindingRedirect oldVersion="0.0.0.0-10.0.0.0" newVersion="10.0.0.0" />
</dependentAssembly>
</assemblyBinding>
</runtime>
| Q: Binding redirect not redirecting? I've run into an issue where I'm getting an error about something trying to load an old version of a dll that is no longer even on the machine.
Could not load file or assembly 'Newtonsoft.Json, Version=6.0.0.0,
Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed' or one of its
dependencies. The located assembly's manifest definition does not
match the assembly reference. (Exception from HRESULT: 0x80131040)
I already had a redirect in the webconfig to deal with this:
<dependentAssembly>
<assemblyIdentity name="NewtonSoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
<bindingRedirect oldVersion="0.0.0.0-10.0.0.0" newVersion="10.0.0.0" />
</dependentAssembly>
There are no references to the 6.0.0.0 build in the solution. A dependency perhaps? If so I have no idea how to get the run time to tell me who the guilty part is.
Why is this still faulting?
A: Turns out the answer was right in front of me. The assemblyBinding tag has an appliesTo attribute that specifies which versions should be redirected per .Net framework version.
assemblyBinding appliesTo="v2.0.50727"
For some reason it was set to v2.0 while the application is running v4.0, so the redirects were not applying. Removing the attribute corrects the issue.
<runtime>
<assemblyBinding>
<dependentAssembly>
<assemblyIdentity name="NewtonSoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
<bindingRedirect oldVersion="0.0.0.0-10.0.0.0" newVersion="10.0.0.0" />
</dependentAssembly>
</assemblyBinding>
</runtime>
| stackoverflow | {
"language": "en",
"length": 198,
"provenance": "stackexchange_0000F.jsonl.gz:903115",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44661567"
} |
5ef065cc0ec5068dcf861d88bbb1aff8d58008e1 | Stackoverflow Stackexchange
Q: How to grant project permissions to new users/groups? I have SonarQube 6.4 installed and I created a few projects. How do I grant a user/group access to a private project? When I go to Project Administration Permissions (where I can switch a project between public and private), it only shows the sonar-administrators group. Where can I add a group for this project?
A: Assuming your users and groups already exist: search for them. By default, this interface shows entities that already have permissions. To add more, just search for the missing entities by name. They'll show up in the interface and you can toggle the boxes to grant them permissions.
| Q: How to grant project permissions to new users/groups? I have SonarQube 6.4 installed and I created a few projects. How do I grant a user/group access to a private project? When I go to Project Administration Permissions (where I can switch a project between public and private), it only shows the sonar-administrators group. Where can I add a group for this project?
A: Assuming your users and groups already exist: search for them. By default, this interface shows entities that already have permissions. To add more, just search for the missing entities by name. They'll show up in the interface and you can toggle the boxes to grant them permissions.
| stackoverflow | {
"language": "en",
"length": 107,
"provenance": "stackexchange_0000F.jsonl.gz:903135",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44661644"
} |
fc3a72223d280c12bf06be1f45797a1ccea6b6e8 | Stackoverflow Stackexchange
Q: Facebook login in react-native WebView I am developing a simple web browser (WebView) using react-native.
Everything works well except logging into Facebook.
The website has a Facebook login and when I tap it, it takes me to the in-app mobile Facebook login page. Entering the correct user/pass redirects me to /dialog/oauth?redirect_uri=https://staticxx.facebook.com/connect....... and I get stuck there.
Without using react-native-fbsdk how do I solve this?
Using the same site on desktop and mobile safari works well. (Although it opens a new tab).
Trying https://meetup.com fails as well but https://vimeo.com works well.
Is there anything I should be aware of or is it a problem with the websites?
| Q: Facebook login in react-native WebView I am developing a simple web browser (WebView) using react-native.
Everything works well except logging into Facebook.
The website has a Facebook login and when I tap it, it takes me to the in-app mobile Facebook login page. Entering the correct user/pass redirects me to /dialog/oauth?redirect_uri=https://staticxx.facebook.com/connect....... and I get stuck there.
Without using react-native-fbsdk how do I solve this?
Using the same site on desktop and mobile safari works well. (Although it opens a new tab).
Trying https://meetup.com fails as well but https://vimeo.com works well.
Is there anything I should be aware of or is it a problem with the websites?
| stackoverflow | {
"language": "en",
"length": 107,
"provenance": "stackexchange_0000F.jsonl.gz:903149",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44661677"
} |
a43a2998aa9d71d0d3b17779c86e0dabbc6d1d78 | Stackoverflow Stackexchange
Q: Redirect requests to CloudFront based on header (for crawlers) I'm serving a React app from CloudFront, and I need to be able to redirect requests coming from crawlers (identified via a user-agent header) to a static version of the site.
It looks like lambda@edge would fit my needs (I could inspect the headers in a Lambda function, then redirect if necessary) but it's in a limited preview and I've been unable to get access.
How else can I achieve this?
| Q: Redirect requests to CloudFront based on header (for crawlers) I'm serving a React app from CloudFront, and I need to be able to redirect requests coming from crawlers (identified via a user-agent header) to a static version of the site.
It looks like lambda@edge would fit my needs (I could inspect the headers in a Lambda function, then redirect if necessary) but it's in a limited preview and I've been unable to get access.
How else can I achieve this?
| stackoverflow | {
"language": "en",
"length": 81,
"provenance": "stackexchange_0000F.jsonl.gz:903174",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44661749"
} |
0b269a3d8100cf047029a611bc9a38eecdeea2d9 | Stackoverflow Stackexchange
Q: Add an entry point at runtime in Python I want to use setuptools entry points to subscribe groups of methods to messages coming from a communication channel.
That is very easy to do by declaring all entry points in your setup.py. But I'm not sure if I'll be able to add more entry points while the program is running.
Is it possible to add new entry points to an application that is already running?
I tried to add a new "fake" setuptools command in an interactive session:
std = pkg_resources.get_distribution('setuptools')
pkg_resources.EntryPoint.parse_group(
'distutils.commands', 'antialias = setuptools.command.alias:alias', std)
pprint(std.get_entry_map())
But my new entry point is not present in the printed object. Am I doing something wrong?
| Q: Add an entry point at runtime in Python I want to use setuptools entry points to subscribe groups of methods to messages coming from a communication channel.
That is very easy to do by declaring all entry points in your setup.py. But I'm not sure if I'll be able to add more entry points while the program is running.
Is it possible to add new entry points to an application that is already running?
I tried to add a new "fake" setuptools command in an interactive session:
std = pkg_resources.get_distribution('setuptools')
pkg_resources.EntryPoint.parse_group(
'distutils.commands', 'antialias = setuptools.command.alias:alias', std)
pprint(std.get_entry_map())
But my new entry point is not present in the printed object. Am I doing something wrong?
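For what it's worth, one workaround is to register a synthetic distribution on the working set instead of mutating setuptools' own distribution. The sketch below leans on a pkg_resources internal (the lazily cached _ep_map attribute behind get_entry_map()), and the project name/location are made up, so treat it as an illustration of the idea rather than a supported API:

```python
import pkg_resources

# Synthetic distribution to carry the runtime-declared entry point;
# the project name and location are invented for this example.
dist = pkg_resources.Distribution(
    location="runtime-plugins",
    project_name="runtime-plugins",
    version="0.0",
)

ep = pkg_resources.EntryPoint.parse(
    "antialias = setuptools.command.alias:alias", dist=dist)

# Pre-populate the private cache that get_entry_map() reads from,
# then add the distribution to the active working set.
dist._ep_map = {"distutils.commands": {"antialias": ep}}
pkg_resources.working_set.add(dist)

names = [e.name for e in
         pkg_resources.iter_entry_points("distutils.commands")]
```

After this, iter_entry_points('distutils.commands') yields the new antialias entry point alongside the ones declared at install time.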
| stackoverflow | {
"language": "en",
"length": 115,
"provenance": "stackexchange_0000F.jsonl.gz:903194",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44661815"
} |
1f79ef66b5b5798559f2b215a51d9e925113b738 | Stackoverflow Stackexchange
Q: GIT - how to test a forked change / pull request? I need to test the following pull request: https://github.com/grobian/carbon-c-relay/pull/274. I have cloned the master repo to my local drive: git clone https://github.com/grobian/carbon-c-relay.git carbon-c-relay. How do I incorporate the changes from the pull request to my local copy so that I can compile and test?
A: I found that you can pull a pull request as follows: git pull origin pull/274/head
| Q: GIT - how to test a forked change / pull request? I need to test the following pull request: https://github.com/grobian/carbon-c-relay/pull/274. I have cloned the master repo to my local drive: git clone https://github.com/grobian/carbon-c-relay.git carbon-c-relay. How do I incorporate the changes from the pull request to my local copy so that I can compile and test?
A: I found that you can pull a pull request as follows: git pull origin pull/274/head
| stackoverflow | {
"language": "en",
"length": 72,
"provenance": "stackexchange_0000F.jsonl.gz:903206",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44661842"
} |
750af93c0c1633a5c4a5f62ea0221aa60da9fd3f | Stackoverflow Stackexchange
Q: FFMPEG output frame name issue I'm going to convert DPX sequence of files to JPG sequence.
ffmpeg -i F:\test\my_files.%07d.dpx F:\test2\my_files.%07d.jpg -report
DPX file names start from frame zero (example: my_files.0000000.dpx), but after the ffmpeg conversion the JPG file names start from frame one (example: my_files.0000001.jpg).
How do I get the JPG file names to start from frame zero?
A: Use
ffmpeg -i F:\test\my_files.%07d.dpx -start_number 0 F:\test2\my_files.%07d.jpg
The image2 muxer's default value for start_number is 1.
| Q: FFMPEG output frame name issue I'm going to convert DPX sequence of files to JPG sequence.
ffmpeg -i F:\test\my_files.%07d.dpx F:\test2\my_files.%07d.jpg -report
DPX file names start from frame zero (example: my_files.0000000.dpx), but after the ffmpeg conversion the JPG file names start from frame one (example: my_files.0000001.jpg).
How do I get the JPG file names to start from frame zero?
A: Use
ffmpeg -i F:\test\my_files.%07d.dpx -start_number 0 F:\test2\my_files.%07d.jpg
The image2 muxer's default value for start_number is 1.
| stackoverflow | {
"language": "en",
"length": 73,
"provenance": "stackexchange_0000F.jsonl.gz:903216",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44661866"
} |
7188c0028a200bba83b32de322304ea0e9473e37 | Stackoverflow Stackexchange
Q: Cannot pip install sklearn for Python 3.6 First, I downloaded the numpy+mkl whl file here and did
python3 -m pip install numpy‑1.11.3+mkl‑cp36‑none-any.whl
I renamed thanks to this tip: filename.whl is not supported wheel on this platform
But when I do
python3 -m pip install sklearn
I get Original error was: cannot import name 'multiarray'
I've tried uninstalling and reinstalling numpy, but I have no idea how to fix this.
A: I just uploaded the windows wheels for scikit-learn 0.18.2 and Python 3.6 to PyPi: https://pypi.python.org/pypi/scikit-learn/0.18.2 Can you try again and give the full traceback if that still does not work?
| Q: Cannot pip install sklearn for Python 3.6 First, I downloaded the numpy+mkl whl file here and did
python3 -m pip install numpy‑1.11.3+mkl‑cp36‑none-any.whl
I renamed thanks to this tip: filename.whl is not supported wheel on this platform
But when I do
python3 -m pip install sklearn
I get Original error was: cannot import name 'multiarray'
I've tried uninstalling and reinstalling numpy, but I have no idea how to fix this.
A: I just uploaded the windows wheels for scikit-learn 0.18.2 and Python 3.6 to PyPi: https://pypi.python.org/pypi/scikit-learn/0.18.2 Can you try again and give the full traceback if that still does not work?
A: If you have installed pip then follow the given steps:
*
*upgrade pip to the latest version
*install
pip install numpy scipy scikit-learn
pip install sklearn
| stackoverflow | {
"language": "en",
"length": 128,
"provenance": "stackexchange_0000F.jsonl.gz:903272",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44662054"
} |
9a982e3e53fcb46c2122039ad5af2aa3111d9de9 | Stackoverflow Stackexchange
Q: Mapper could not assemble any primary key columns
I have created a tmp table from a SQLite table, which is a subset of the original table based on various selection criteria. A sample is in the screenshot.
I'm trying to loop through the table records one at a time in order to update a field in each. I have:
source_table= self.source
engine = create_engine(db_path)
Base = declarative_base()
# metadata = Base.metadata
# Look up the existing tables from database
Base.metadata.reflect(engine)
# Create class that maps via ORM to the database table
table = type(source_table, (Base,), {'__tablename__': source_table})
Session = sessionmaker(bind=engine)
session = Session()
i = 0
for row in session.query(table).limit(500):
i += 1
print object_as_dict(row)
But this gives:
ArgumentError: Mapper Mapper|tmp|tmp could not assemble any primary key columns for mapped table 'tmp'
How can I perform this loop?
| Q: Mapper could not assemble any primary key columns
I have created a tmp table from a SQLite table, which is a subset of the original table based on various selection criteria. A sample is in the screenshot.
I'm trying to loop through the table records one at a time in order to update a field in each. I have:
source_table= self.source
engine = create_engine(db_path)
Base = declarative_base()
# metadata = Base.metadata
# Look up the existing tables from database
Base.metadata.reflect(engine)
# Create class that maps via ORM to the database table
table = type(source_table, (Base,), {'__tablename__': source_table})
Session = sessionmaker(bind=engine)
session = Session()
i = 0
for row in session.query(table).limit(500):
i += 1
print object_as_dict(row)
But this gives:
ArgumentError: Mapper Mapper|tmp|tmp could not assemble any primary key columns for mapped table 'tmp'
How can I perform this loop?
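A common way around this error is to nominate the key column(s) yourself via __mapper_args__, since the ORM refuses to map a table with no declared PRIMARY KEY. The sketch below stands in for the reflected tmp table with hypothetical column names (row_id, val), so adapt it to the real schema:

```python
from sqlalchemy import create_engine, Table, Column, Integer, Text
from sqlalchemy.orm import declarative_base, sessionmaker

engine = create_engine("sqlite://")
Base = declarative_base()

# Stand-in for the reflected tmp table: note there is no PRIMARY KEY,
# which is exactly what makes the mapper fail.
tmp = Table(
    "tmp", Base.metadata,
    Column("row_id", Integer),   # hypothetical column names
    Column("val", Text),
)
Base.metadata.create_all(engine)

class Tmp(Base):
    __table__ = tmp
    # Nominate a column (or combination) that is unique in practice
    __mapper_args__ = {"primary_key": [tmp.c.row_id]}

session = sessionmaker(bind=engine)()
for row in session.query(Tmp).limit(500):
    print(row.row_id, row.val)
```

Any column (or combination of columns) that is unique in practice will do; the mapper only needs it to tell rows apart.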
| stackoverflow | {
"language": "en",
"length": 140,
"provenance": "stackexchange_0000F.jsonl.gz:903313",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44662184"
} |
ddd46c7b4fd033dfff1c4a4813159f5c9f1f4aac | Stackoverflow Stackexchange
Q: Pip and/or installing the .pyd of a library to site-packages leads "import" of the library to a DLL load failure I attempted to install OpenCV for Python in two ways,
A) Downloading the opencv zip, then copying cv2.pyd to /Python36/lib/site-packages.
B) undoing that, and using "pip install opencv-python"
/lib/site-packages is definitely the place where Python loads my modules, as tensorflow and numpy are there, but any attempt to "import cv2" leads to "ImportError: DLL Load Failed: The specified module could not be found"
I am at a loss, any help appreciated. And yes i have tried reinstalling VC redist 2015
A: Use the zip, extract it, and run sudo python3 setup.py install if you are on Mac or Linux. If on Windows, open cmd or Powershell in Admin mode and then run py -3.6 setup.py install, after cding to the path of the zip. If on Linux, you also have to run sudo apt-get install python-opencv. Maybe on Mac you have to use Homebrew, but I am not sure.
| Q: Pip and/or installing the .pyd of a library to site-packages leads "import" of the library to a DLL load failure I attempted to install OpenCV for Python in two ways,
A) Downloading the opencv zip, then copying cv2.pyd to /Python36/lib/site-packages.
B) undoing that, and using "pip install opencv-python"
/lib/site-packages is definitely the place where Python loads my modules, as tensorflow and numpy are there, but any attempt to "import cv2" leads to "ImportError: DLL Load Failed: The specified module could not be found"
I am at a loss, any help appreciated. And yes i have tried reinstalling VC redist 2015
A: Use the zip, extract it, and run sudo python3 setup.py install if you are on Mac or Linux. If on Windows, open cmd or Powershell in Admin mode and then run py -3.6 setup.py install, after cding to the path of the zip. If on Linux, you also have to run sudo apt-get install python-opencv. Maybe on Mac you have to use Homebrew, but I am not sure.
| stackoverflow | {
"language": "en",
"length": 167,
"provenance": "stackexchange_0000F.jsonl.gz:903347",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44662278"
} |
93e2540341270fc632cadc719f0d123bf89b9858 | Stackoverflow Stackexchange
Q: How to build srcset attribute with Thymeleaf What would be the correct way to build a srcset attribute with Thymeleaf using the standard URL syntax @{/...} ?
Example:
<img th:src="@{/i/1000.jpg}" srcset="/i/1500.jpg 1500w, /i/2000.jpg 2000w" />
A: Nevermind, it was easier than expected and logical at the same time:
<img
th:src="@{/i/1000.jpg}"
th:attr="srcset=@{/i/1500.jpg} + ' 1500w, ' + @{/i/2000.jpg} + ' 2000w'"
/>
| Q: How to build srcset attribute with Thymeleaf What would be the correct way to build a srcset attribute with Thymeleaf using the standard URL syntax @{/...} ?
Example:
<img th:src="@{/i/1000.jpg}" srcset="/i/1500.jpg 1500w, /i/2000.jpg 2000w" />
A: Nevermind, it was easier than expected and logical at the same time:
<img
th:src="@{/i/1000.jpg}"
th:attr="srcset=@{/i/1500.jpg} + ' 1500w, ' + @{/i/2000.jpg} + ' 2000w'"
/>
A: The correct way for building of srcset attribute in thymeleaf is as follows:
<img th:attr="srcset=|@{/img/image1x.png} 1x, @{/img/image2x.png} 2x, @{/img/image3x.png} 3x|" th:src="@{/img/image1x.png}" alt="My image description text" />
| stackoverflow | {
"language": "en",
"length": 88,
"provenance": "stackexchange_0000F.jsonl.gz:903362",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44662338"
} |
ad78422018899b43e4e4741493452ebb40de97ad | Stackoverflow Stackexchange
Q: Change font of tick labels in matplotlib How can I change the font of matplotlib tick labels? I'd like to change it to Computer Modern 10 (called "cm" in matplotlib I believe), however, I don't want to render it in TeX. I've tried numerous ways of doing this, but none of them seem to work.
A: func = lambda x, pos: "" if np.isclose(x,0) else x
plt.gca().xaxis.set_major_formatter(matplotlib.ticker.FuncFormatter(func))
plt.gca().yaxis.set_major_formatter(matplotlib.ticker.FuncFormatter(func))
Works in my case!
| Q: Change font of tick labels in matplotlib How can I change the font of matplotlib tick labels? I'd like to change it to Computer Modern 10 (called "cm" in matplotlib I believe), however, I don't want to render it in TeX. I've tried numerous ways of doing this, but none of them seem to work.
A: func = lambda x, pos: "" if np.isclose(x,0) else x
plt.gca().xaxis.set_major_formatter(matplotlib.ticker.FuncFormatter(func))
plt.gca().yaxis.set_major_formatter(matplotlib.ticker.FuncFormatter(func))
Works in my case!
| stackoverflow | {
"language": "en",
"length": 73,
"provenance": "stackexchange_0000F.jsonl.gz:903364",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44662342"
} |
c88b3180e052ac857d7a2fcbe0fc909a61a86139 | Stackoverflow Stackexchange
Q: Android O NotificationChannels default category I recently added a category to my notification categories and I was able to set what I wanted into that category. However, another category is showing in lists of these categories under "uncategorized" which I believe it's the default category, image below:
Any idea how I can delete that category?
If it's bad practice to delete it, why is that?
A: I had that same issue but only while my targetSdkVersion was 25. It went away after I updated it to 26.
| stackoverflow | {
"language": "en",
"length": 88,
"provenance": "stackexchange_0000F.jsonl.gz:903365",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44662357"
} |
1d3d571f44837eff8d2e95ba475c61adeb8a4c05 | Stackoverflow Stackexchange
Q: Failing to find script-runner.jar Here's the code to install and run hive over EMR
args = ['s3://' + zone_name + '.elasticmapreduce/libs/hive/hive-script',
'--base-path', 's3://' + zone_name + '.elasticmapreduce/libs/hive/',
'--install-hive', '--hive-versions', '0.13.1']
args2 = ['s3://' + zone_name + '.elasticmapreduce/libs/hive/hive-script',
'--base-path', 's3://' + zone_name + '.elasticmapreduce/libs/hive/',
'--hive-versions', '0.13.1',
'--run-hive-script', '--args',
'-f', s3_url]
steps = []
for name, args in zip(('Setup Hive', 'Run Hive Script'), (args, args2)):
step = JarStep(name,
's3://us-east-1.elasticmapreduce/libs/script-runner/script-runner.jar',
step_args=args,
# action_on_failure="CANCEL_AND_WAIT"
)
# should be inside loop
steps.append(step)
Now when I feed this to run_jobflow, for some reason I get the error:
Error fetching jar file. java.lang.RuntimeException: Error whilst fetching 's3://us-east-1.elasticmapreduce/libs/script-runner/script-runner.jar
I can access the bucket elasticmapreduce/libs/script-runner/ with my credentials. How can I resolve this? Or is there any other way script-runner can be provided?
A: This is caused by your cluster being in a different region than the bucket you are fetching the jar from. Make sure that the EMR cluster is in the same region that you are passing as "zone_name".
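As an aside on that point: the region dependence is baked into the bucket name itself, so deriving the jar path from the cluster's region avoids the mismatch. A small illustrative sketch (the `script_runner_jar` helper is hypothetical, not part of boto):

```python
def script_runner_jar(region):
    """Build the script-runner path for a given region; the jar must live in
    the bucket for the same region the EMR cluster runs in."""
    return "s3://%s.elasticmapreduce/libs/script-runner/script-runner.jar" % region

# Derive the path from the cluster's region instead of hard-coding us-east-1:
cluster_region = "eu-west-1"
print(script_runner_jar(cluster_region))
# s3://eu-west-1.elasticmapreduce/libs/script-runner/script-runner.jar
```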
| stackoverflow | {
"language": "en",
"length": 160,
"provenance": "stackexchange_0000F.jsonl.gz:903381",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44662408"
} |
87085cc06ffd504fe1438355a607abf242cb6182 | Stackoverflow Stackexchange
Q: Horizontally center first item of RecyclerView I want to use a RecyclerView to emulate the behavior of a MultiViewPager, in particular I'd like to have the selected item at the center of the screen, including the first and the last element.
As you can see in this image, the first item is centered and this would be my expected result.
What I did was to set up a RecyclerView with a horizontal LinearLayoutManager and a LinearSnapHelper. The problem with this solution is that the first and the last item will never be horizontally centered when selected. Should I switch my code so that it uses a MultiViewPager or is it possible to achieve a similar result taking advantage of a RecyclerView?
A: Just add padding on RecyclerView and add clipToPadding=false and it'll only affect the items on the ends.
A:
The problem with this solution is that the first and the last item
will never be horizontally centered as selection.
This is probably because your RecyclerView is responsible for showing, within its layout bounds, exactly the number of items that are inside your data set.
In the example image you provided, you can achieve that effect by adding a "placeholder" item in the first and last position of your dataset. This way, you can have an invisible item taking up the first slot, thus offsetting the item you want to be centered.
This placeholder item should not respond to touch events and should not interfere with handling of click events on other items (specifically, the position handling).
You will have to modify your adapters getItemCount and perhaps getItemType.
A: I ended up with this implementation in my project. You can pass a different dimension in the constructor to set the spacing between the items. As I wrote in the class' KDoc, it will add (total parent space - child width) / 2 to the left of the first and to the right of the last item in order to center the first and last items.
import android.graphics.Rect
import android.view.View
import androidx.annotation.DimenRes
import androidx.recyclerview.widget.OrientationHelper
import androidx.recyclerview.widget.RecyclerView
/**
* Adds (total parent space - child width) / 2 to the left of first and to the right of last item (in order to center first and last items),
* and [spacing] between items.
*/
internal class OffsetItemDecoration constructor(
@DimenRes private val spacing: Int,
) : RecyclerView.ItemDecoration() {
override fun getItemOffsets(
outRect: Rect,
view: View,
parent: RecyclerView,
state: RecyclerView.State,
) {
val itemPosition: Int = parent.getChildAdapterPosition(view)
if (itemPosition == RecyclerView.NO_POSITION) return
val spacingPixelSize: Int = parent.context.resources.getDimensionPixelSize(spacing)
when (itemPosition) {
0 ->
outRect.set(getOffsetPixelSize(parent, view), 0, spacingPixelSize / 2, 0)
parent.adapter!!.itemCount - 1 ->
outRect.set(spacingPixelSize / 2, 0, getOffsetPixelSize(parent, view), 0)
else ->
outRect.set(spacingPixelSize / 2, 0, spacingPixelSize / 2, 0)
}
}
private fun getOffsetPixelSize(parent: RecyclerView, view: View): Int {
val orientationHelper = OrientationHelper.createHorizontalHelper(parent.layoutManager)
return (orientationHelper.totalSpace - view.layoutParams.width) / 2
}
}
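The centering arithmetic in getOffsetPixelSize above is just half the leftover space; for example (plain Python, pixel values hypothetical):

```python
def center_offset(total_space, child_width):
    # Inset added before the first (and after the last) item so it sits centered.
    return (total_space - child_width) // 2

# 390px on each side of a 300px-wide child fills a 1080px-wide parent:
offset = center_offset(1080, 300)
print(offset)  # 390
```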
A: You can implement this with a RecyclerView.ItemDecoration in getItemOffsets(), to offset the first and last item appropriately.
Retrieve any offsets for the given item. Each field of outRect specifies the number of pixels that the item view should be inset by, similar to padding or margin. The default implementation sets the bounds of outRect to 0 and returns.
If you need to access Adapter for additional data, you can call getChildAdapterPosition(View) to get the adapter position of the View.
You might need to use the measured size of the item and of the RecyclerView as well, but this information is available to be used anyhow.
A: An improvement on @I.S's answer which works 100% of the time and is very easy to implement without any glitchy animation. First we have to use PagerSnapHelper() to get ViewPager-like scrolling. To center the items you need to add a large padding on the RecyclerView and then clip to the padding. Then use a customized LinearSmoothScroller to smoothly center your item. To center the item on load, just smooth scroll to position 0. Below is the code
<android.support.v7.widget.RecyclerView
android:id="@+id/recycler_selection"
android:layout_width="0dp"
android:layout_height="0dp"
android:layout_marginLeft="30dp"
android:layout_marginRight="30dp"
android:paddingTop="5dp"
android:paddingBottom="5dp"
android:paddingLeft="150dp"
android:paddingRight="150dp"
app:layout_constraintLeft_toLeftOf="parent"
app:layout_constraintRight_toRightOf="parent"
app:layout_constraintTop_toBottomOf="@+id/text_selection_alert"
app:layout_constraintBottom_toBottomOf="@+id/guideline_1"
android:clipToPadding="false"
android:background="@drawable/bg_stat"/>
And in Code (in C#)
RecyclerView.LayoutManager lm = new LinearLayoutManager(Context, LinearLayoutManager.Horizontal, false);
recycler_selection = view.FindViewById<RecyclerView>(Resource.Id.recycler_selection);
recycler_selection.SetLayoutManager(new LinearLayoutManager(Context, LinearLayoutManager.Horizontal, false));
// <Set Adapter to the Recycler>
RecyclerView.SmoothScroller smoothScroller = new CenterScroller(recycler_selection.Context);
SnapHelper helper = new PagerSnapHelper();
helper.AttachToRecyclerView(recycler_selection);
smoothScroller.TargetPosition = 0;
lm.StartSmoothScroll(smoothScroller);
public class CenterScroller : LinearSmoothScroller
{
float MILLISECONDS_PER_INCH = 350f;
public CenterScroller(Context context) : base(context)
{
}
public override int CalculateDtToFit(int viewStart, int viewEnd, int boxStart, int boxEnd, int snapPreference)
{
return (boxStart + (boxEnd - boxStart) / 2) - (viewStart + (viewEnd - viewStart) / 2);
}
protected override float CalculateSpeedPerPixel(DisplayMetrics displayMetrics)
{
return MILLISECONDS_PER_INCH / displayMetrics.Xdpi;
}
}
| stackoverflow | {
"language": "en",
"length": 776,
"provenance": "stackexchange_0000F.jsonl.gz:903386",
"question_score": "14",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44662420"
} |
b4bf7c006dc32592009dbb26cf8db8233e594a5f | Stackoverflow Stackexchange
Q: Node-Inspector not starting First, I was having an issue installing node-inspector; I had to revert to installing version @0.7.5. That installed globally on my machine, but now when I try to run node-inspector I get the error below. I find it odd that I haven't been able to find much in regards to these two errors.
module.js:487
throw err;
^
Error: Cannot find module '_debugger'
at Function.Module._resolveFilename (module.js:485:15)
at Function.Module._load (module.js:437:25)
at Module.require (module.js:513:17)
at require (internal/module.js:11:18)
at Object.<anonymous> (/usr/local/lib/node_modules/node-inspector/lib/debugger.js:2:16)
at Module._compile (module.js:569:30)
at Object.Module._extensions..js (module.js:580:10)
at Module.load (module.js:503:32)
at tryModuleLoad (module.js:466:12)
at Function.Module._load (module.js:458:3)
A: This ecosystem moves fast. Now the command is...
node inspect yourScript.js
https://nodejs.org/dist/latest-v8.x/docs/api/debugger.html
A: I was facing the same issue today. After some googling I found that it is deprecated now. Use this instead:
node --inspect-brk yourScript.js
Head over to Official Docs for complete reference.
After running the above command, do either of the following two options:
Option 1: Open chrome://inspect in a Chromium-based browser. Click the Configure button and ensure your target host and port are listed. Then select your Node.js app from the list.
Option 2: Install the Chrome Extension NIM (Node Inspector Manager):
https://chrome.google.com/webstore/detail/nim-node-inspector-manage/gnhhdgbaldcilmgcpfddgdbkhjohddkj
Hope that helps!
| stackoverflow | {
"language": "en",
"length": 195,
"provenance": "stackexchange_0000F.jsonl.gz:903397",
"question_score": "17",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44662442"
} |
290ac7cc9f915f4d6f1d5c750d54264cfd440fcc | Stackoverflow Stackexchange
Q: Toggle - Hide and show I copied w3schools hide and show toggle, but I want it to be reversed, so that the extra information isn't there from the beginning, but the button shows it.
This is the code:
html:
<button onclick="myFunction()">Click Me</button>
<div id="myDIV">
This is my DIV element.
</div>
js:
function myFunction() {
var x = document.getElementById('myDIV');
if (x.style.display === 'none') {
x.style.display = 'block';
} else {
x.style.display = 'none';
}
}
Any help would be much appreciated!
A: Solution is simple: Just hide the div.
<div id="myDIV" style="display:none">
This is my DIV element.
</div>
Even cooler if you hide it in css instead:
<div id="myDIV">
This is my DIV element.
</div>
And this in your css:
#myDIV {
display: none;
}
A: I'd use a utility CSS class for this:
.is--hidden {
display: none;
}
Then you can apply it to the element by default:
<button class="mybutton">Click Me</button>
<div class="example is--hidden">Some Text</div>
and toggle it via jQuery:
$('.mybutton').on('click', function () {
$('.example').toggleClass('is--hidden');
})
Fiddle: https://jsfiddle.net/tL5mj54n/
A: You just need to add display : none in your code.
function myFunction() {
var x = document.getElementById('myDIV');
if (x.style.display === 'none') {
x.style.display = 'block';
} else {
x.style.display = 'none';
}
}
<button onclick="myFunction()">Click Me</button>
<div id="myDIV" style="display:none;">
This is my DIV element.
</div>
A: No changes to styles or HTML required. Your javascript should be the following:
(function () {
var x = document.getElementById('myDIV');
if (x.style.display != 'none') {
x.style.display = 'none';
} else {
x.style.display = 'block';
}
} )();
function myFunction() {
var x = document.getElementById('myDIV');
if (x.style.display != 'none') {
x.style.display = 'none';
} else {
x.style.display = 'block';
}
};
The first function runs and hides your div and the second reacts to clicks and toggles the div.
A: Here's a snippet example
Set the style to hide the element (display:none) from the start. Toggle it on click.
document.getElementById('myButton').onclick = function() {
var x = document.getElementById('myDIV');
x.style.display = x.style.display === 'none' ? 'block' : 'none';
};
<button id='myButton' >Click Me</button>
<div id="myDIV" style="display:none">
This is my DIV element.
</div>
| stackoverflow | {
"language": "en",
"length": 346,
"provenance": "stackexchange_0000F.jsonl.gz:903417",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44662502"
} |
9fe481f13f43896cafc4578f9fb1a37aca1d255c | Stackoverflow Stackexchange
Q: Why is session store initializer removed in Rails 5.1.1 I went to this website to see the differences between Rails 5.0.0 and Rails 5.1.1
Why does 5.1.1 not anymore include: config/initializers/session_store.rb?
Thanks
A: Here's the commit where it was removed: Setup default session store internally, no longer through an application initializer
In summary, new apps don't have that initializer, instead the session store is set to cookie store by default. i.e. the same as the value that used to be specified in the generated version of that initializer.
| stackoverflow | {
"language": "en",
"length": 89,
"provenance": "stackexchange_0000F.jsonl.gz:903427",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44662545"
} |