Dataset Viewer (First 5GB)

Search is not available for this dataset.

text (string, lengths 1.27k–99.6M) | id (string, lengths 23–24) | file_path (string, 46 distinct values) |
---|---|---|
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Cloning into 'ALAE'...\r\n",
"remote: Enumerating objects: 7, done.\r\n",
"remote: Counting objects: 100% (7/7), done.\r\n",
"remote: Compressing objects: 100% (7/7), done.\r\n",
"remote: Total 2318 (delta 2), reused 2 (delta 0), pack-reused 2311\r\n",
"Receiving objects: 100% (2318/2318), 208.04 MiB | 13.63 MiB/s, done.\r\n",
"Resolving deltas: 100% (898/898), done.\r\n",
"Collecting package metadata (current_repodata.json): done\r\n",
"Solving environment: failed with initial frozen solve. Retrying with flexible solve.\r\n",
"Collecting package metadata (repodata.json): done\r\n",
"Solving environment: failed with initial frozen solve. Retrying with flexible solve.\r\n",
"\r\n",
"PackagesNotFoundError: The following packages are not available from current channels:\r\n",
"\r\n",
" - torch\r\n",
"\r\n",
"Current channels:\r\n",
"\r\n",
" - https://conda.anaconda.org/conda-forge/linux-64\r\n",
" - https://conda.anaconda.org/conda-forge/noarch\r\n",
" - https://repo.anaconda.com/pkgs/main/linux-64\r\n",
" - https://repo.anaconda.com/pkgs/main/noarch\r\n",
" - https://repo.anaconda.com/pkgs/r/linux-64\r\n",
" - https://repo.anaconda.com/pkgs/r/noarch\r\n",
"\r\n",
"To search for alternate channels that may provide the conda package you're\r\n",
"looking for, navigate to\r\n",
"\r\n",
" https://anaconda.org\r\n",
"\r\n",
"and use the search bar at the top of the page.\r\n",
"\r\n",
"\r\n",
"Collecting package metadata (current_repodata.json): done\r\n",
"Solving environment: failed with initial frozen solve. Retrying with flexible solve.\r\n",
"Collecting package metadata (repodata.json): done\r\n",
"Solving environment: failed with initial frozen solve. Retrying with flexible solve.\r\n",
"\r\n",
"PackagesNotFoundError: The following packages are not available from current channels:\r\n",
"\r\n",
" - requirements\r\n",
"\r\n",
"Current channels:\r\n",
"\r\n",
" - https://conda.anaconda.org/conda-forge/linux-64\r\n",
" - https://conda.anaconda.org/conda-forge/noarch\r\n",
" - https://repo.anaconda.com/pkgs/main/linux-64\r\n",
" - https://repo.anaconda.com/pkgs/main/noarch\r\n",
" - https://repo.anaconda.com/pkgs/r/linux-64\r\n",
" - https://repo.anaconda.com/pkgs/r/noarch\r\n",
"\r\n",
"To search for alternate channels that may provide the conda package you're\r\n",
"looking for, navigate to\r\n",
"\r\n",
" https://anaconda.org\r\n",
"\r\n",
"and use the search bar at the top of the page.\r\n",
"\r\n",
"\r\n"
]
},
{
"ename": "ModuleNotFoundError",
"evalue": "No module named 'dlutils'",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mModuleNotFoundError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-1-d31b55aef8b6>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m 14\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0mlauncher\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mrun\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 15\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0mcheckpointer\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mCheckpointer\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 16\u001b[0;31m \u001b[0;32mfrom\u001b[0m \u001b[0mdlutils\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mpytorch\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mcount_parameters\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 17\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0mdefaults\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mget_cfg_defaults\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 18\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mlreq\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mModuleNotFoundError\u001b[0m: No module named 'dlutils'"
]
}
],
"source": [
"%load_ext autoreload\n",
"%autoreload 2\n",
"!git clone https://github.com/podgorskiy/ALAE.git\n",
"# note: this install fails (see the output below) -- the conda package for PyTorch is named 'pytorch', not 'torch'\n",
"!conda install torch\n",
"\n",
"import os\n",
"os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0\"\n",
"\n",
"import torch.utils.data\n",
"os.chdir('/kaggle/working/ALAE')\n",
"# note: 'requirements' is not a conda package; the dependencies are installed via 'pip install -r requirements.txt' in the next cell\n",
"!conda install requirements\n",
"from net import *\n",
"from model import Model\n",
"from launcher import run\n",
"from checkpointer import Checkpointer\n",
"from dlutils.pytorch import count_parameters\n",
"from defaults import get_cfg_defaults\n",
"import lreq\n",
"import logging\n",
"from PIL import Image\n",
"import bimpy\n",
"import cv2\n",
"\n",
"import matplotlib.pyplot as plt\n",
"%matplotlib inline\n",
"\n",
"lreq.use_implicit_lreq.set(True)\n",
"\n",
"\n",
"indices = [0, 1, 2, 3, 4, 10, 11, 17, 19]\n",
"\n",
"labels = [\"gender\",\n",
"          \"smile\",\n",
"          \"attractive\",\n",
"          \"wavy-hair\",\n",
"          \"young\",\n",
"          \"big lips\",\n",
"          \"big nose\",\n",
"          \"chubby\",\n",
"          \"glasses\",\n",
"          ]"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"%%capture\n",
"!pip install -r requirements.txt"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"def loadNext(index=0):\n",
"    # Refill from the backup list before indexing, so an exhausted list cannot raise IndexError.\n",
"    if len(paths) == 0:\n",
"        paths.extend(paths_backup)\n",
"\n",
"    img = np.asarray(Image.open(path + '/' + paths[index]))\n",
"    current_file.value = paths[index]\n",
"\n",
"    if img.shape[2] == 4:\n",
"        img = img[:, :, :3]\n",
"    im = img.transpose((2, 0, 1))\n",
"    x = torch.tensor(np.asarray(im, dtype=np.float32), device='cpu', requires_grad=True).cuda() / 127.5 - 1.\n",
"    if x.shape[0] == 4:\n",
"        x = x[:3]\n",
"\n",
"    needed_resolution = model.decoder.layer_to_resolution[-1]\n",
"    while x.shape[2] > needed_resolution:\n",
"        x = F.avg_pool2d(x, 2, 2)\n",
"    if x.shape[2] != needed_resolution:\n",
"        x = F.adaptive_avg_pool2d(x, (needed_resolution, needed_resolution))\n",
"\n",
"    img_src = ((x * 0.5 + 0.5) * 255).type(torch.long).clamp(0, 255).cpu().type(torch.uint8).transpose(0, 2).transpose(0, 1).numpy()\n",
"\n",
"    latents_original = encode(x[None, ...].cuda())\n",
"    latents = latents_original[0, 0].clone()\n",
"    latents -= model.dlatent_avg.buff.data[0]\n",
"\n",
"    # Project the latent onto each attribute direction to read off the current attribute strength.\n",
"    for v, w in zip(attribute_values, W):\n",
"        v.value = (latents * w).sum()\n",
"\n",
"    # Subtract the attribute components to obtain an attribute-neutral latent.\n",
"    for v, w in zip(attribute_values, W):\n",
"        latents = latents - v.value * w\n",
"\n",
"    return latents, latents_original, img_src\n",
"\n",
"\n",
"def loadRandom():\n",
"    latents = rnd.randn(1, cfg.MODEL.LATENT_SPACE_SIZE)\n",
"    lat = torch.tensor(latents).float().cuda()\n",
"    dlat = mapping_fl(lat)\n",
"    layer_idx = torch.arange(2 * layer_count)[np.newaxis, :, np.newaxis]\n",
"    ones = torch.ones(layer_idx.shape, dtype=torch.float32)\n",
"    # Both branches are `ones`, so `coefs` is all ones and the lerp below is effectively a no-op (truncation disabled).\n",
"    coefs = torch.where(layer_idx < model.truncation_cutoff, ones, ones)\n",
"    dlat = torch.lerp(model.dlatent_avg.buff.data, dlat, coefs)\n",
"    x = decode(dlat)[0]\n",
"    img_src = ((x * 0.5 + 0.5) * 255).type(torch.long).clamp(0, 255).cpu().type(torch.uint8).transpose(0, 2).transpose(0, 1).numpy()\n",
"    latents_original = dlat\n",
"    latents = latents_original[0, 0].clone()\n",
"    latents -= model.dlatent_avg.buff.data[0]\n",
"\n",
"    for v, w in zip(attribute_values, W):\n",
"        v.value = (latents * w).sum()\n",
"\n",
"    for v, w in zip(attribute_values, W):\n",
"        latents = latents - v.value * w\n",
"\n",
"    return latents, latents_original, img_src\n",
"\n",
"\n",
"def update_image(w, latents_original):\n",
"    with torch.no_grad():\n",
"        w = w + model.dlatent_avg.buff.data[0]\n",
"        w = w[None, None, ...].repeat(1, model.mapping_fl.num_layers, 1)\n",
"\n",
"        layer_idx = torch.arange(model.mapping_fl.num_layers)[np.newaxis, :, np.newaxis]\n",
"        cur_layers = (7 + 1) * 2\n",
"        mixing_cutoff = cur_layers\n",
"        styles = torch.where(layer_idx < mixing_cutoff, w, latents_original)\n",
"\n",
"        x_rec = decode(styles)\n",
"        resultsample = ((x_rec * 0.5 + 0.5) * 255).type(torch.long).clamp(0, 255)\n",
"        resultsample = resultsample.cpu()[0, :, :, :]\n",
"        return resultsample.type(torch.uint8).transpose(0, 2).transpose(0, 1)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"ename": "NameError",
"evalue": "name 'get_cfg_defaults' is not defined",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-4-04425c7f0851>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m 2\u001b[0m \u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mset_default_tensor_type\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m'torch.cuda.FloatTensor'\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 3\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 4\u001b[0;31m \u001b[0mcfg\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mget_cfg_defaults\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 5\u001b[0m \u001b[0mcfg\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmerge_from_file\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"./configs/ffhq.yaml\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 6\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mNameError\u001b[0m: name 'get_cfg_defaults' is not defined"
]
}
],
"source": [
"torch.cuda.set_device(0)\n",
"torch.set_default_tensor_type('torch.cuda.FloatTensor')\n",
"\n",
"cfg = get_cfg_defaults()\n",
"cfg.merge_from_file(\"./configs/ffhq.yaml\")\n",
"\n",
"logger = logging.getLogger(\"logger\")\n",
"logger.setLevel(logging.DEBUG)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"collapsed": true
},
"outputs": [
{
"ename": "NameError",
"evalue": "name 'cfg' is not defined",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-5-5dcb680d04f6>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m 1\u001b[0m model = Model(\n\u001b[0;32m----> 2\u001b[0;31m \u001b[0mstartf\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mcfg\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mMODEL\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mSTART_CHANNEL_COUNT\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 3\u001b[0m \u001b[0mlayer_count\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mcfg\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mMODEL\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mLAYER_COUNT\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[0mmaxf\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mcfg\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mMODEL\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mMAX_CHANNEL_COUNT\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 5\u001b[0m \u001b[0mlatent_size\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mcfg\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mMODEL\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mLATENT_SPACE_SIZE\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mNameError\u001b[0m: name 'cfg' is not defined"
]
}
],
"source": [
"model = Model(\n",
"    startf=cfg.MODEL.START_CHANNEL_COUNT,\n",
"    layer_count=cfg.MODEL.LAYER_COUNT,\n",
"    maxf=cfg.MODEL.MAX_CHANNEL_COUNT,\n",
"    latent_size=cfg.MODEL.LATENT_SPACE_SIZE,\n",
"    # 'TRUNCATIOM' matches the (misspelled) key names in the upstream ALAE config\n",
"    truncation_psi=cfg.MODEL.TRUNCATIOM_PSI,\n",
"    truncation_cutoff=cfg.MODEL.TRUNCATIOM_CUTOFF,\n",
"    mapping_layers=cfg.MODEL.MAPPING_LAYERS,\n",
"    channels=cfg.MODEL.CHANNELS,\n",
"    generator=cfg.MODEL.GENERATOR,\n",
"    encoder=cfg.MODEL.ENCODER)\n",
"\n",
"model.cuda()\n",
"model.eval()\n",
"model.requires_grad_(False)\n",
"\n",
"decoder = model.decoder\n",
"encoder = model.encoder\n",
"mapping_tl = model.mapping_tl\n",
"mapping_fl = model.mapping_fl\n",
"dlatent_avg = model.dlatent_avg\n",
"\n",
"logger.info(\"Trainable parameters generator:\")\n",
"count_parameters(decoder)\n",
"\n",
"logger.info(\"Trainable parameters discriminator:\")\n",
"count_parameters(encoder)\n",
"\n",
"arguments = dict()\n",
"arguments[\"iteration\"] = 0\n",
"\n",
"model_dict = {\n",
"    'discriminator_s': encoder,\n",
"    'generator_s': decoder,\n",
"    'mapping_tl_s': mapping_tl,\n",
"    'mapping_fl_s': mapping_fl,\n",
"    'dlatent_avg': dlatent_avg\n",
"}\n",
"\n",
"checkpointer = Checkpointer(cfg,\n",
"                            model_dict,\n",
"                            {},\n",
"                            logger=logger,\n",
"                            save=False)\n",
"\n",
"extra_checkpoint_data = checkpointer.load()\n",
"\n",
"model.eval()\n",
"\n",
"layer_count = cfg.MODEL.LAYER_COUNT\n",
"\n",
"\n",
"def encode(x):\n",
"    Z, _ = model.encode(x, layer_count - 1, 1)\n",
"    Z = Z.repeat(1, model.mapping_fl.num_layers, 1)\n",
"    # print(Z.shape)\n",
"    return Z\n",
"\n",
"\n",
"def decode(x):\n",
"    layer_idx = torch.arange(2 * layer_count)[np.newaxis, :, np.newaxis]\n",
"    ones = torch.ones(layer_idx.shape, dtype=torch.float32)\n",
"    # both branches are `ones`, so `coefs` is all ones; it is unused because the truncation lerp below is commented out\n",
"    coefs = torch.where(layer_idx < model.truncation_cutoff, ones, ones)\n",
"    # x = torch.lerp(model.dlatent_avg.buff.data, x, coefs)\n",
"    return model.decoder(x, layer_count - 1, 1, noise=True)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"ename": "NameError",
"evalue": "name 'bimpy' is not defined",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-6-bb05460e67bb>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m 6\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 7\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 8\u001b[0;31m \u001b[0mrandomize\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mbimpy\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mBool\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 9\u001b[0m \u001b[0mcurrent_file\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mbimpy\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mString\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 10\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mNameError\u001b[0m: name 'bimpy' is not defined"
]
}
],
"source": [
"path = 'dataset_samples/faces/realign1024x1024'\n",
"\n",
"paths = list(os.listdir(path))\n",
"paths.sort()\n",
"paths_backup = paths[:]\n",
"\n",
"\n",
"randomize = bimpy.Bool(True)\n",
"current_file = bimpy.String(\"\")\n",
"\n",
"ctx = bimpy.Context()\n",
"\n",
"attribute_values = [bimpy.Float(0) for i in indices]\n",
"\n",
"# W: 9x512\n",
"W = [torch.tensor(np.load(\"principal_directions/direction_%d.npy\" % i), dtype=torch.float32) for i in indices]\n",
"\n",
"rnd = np.random.RandomState(5)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"scrolled": false
},
"outputs": [
{
"ename": "NameError",
"evalue": "name 'cfg' is not defined",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-7-e5b7171cb8ba>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0mim_size\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;36m2\u001b[0m \u001b[0;34m**\u001b[0m \u001b[0;34m(\u001b[0m\u001b[0mcfg\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mMODEL\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mLAYER_COUNT\u001b[0m \u001b[0;34m+\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 2\u001b[0m \u001b[0mseed\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 3\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[0;31m#image_index = 6 # image index\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 5\u001b[0m \u001b[0mslider_vals\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mnp\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mlinspace\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m-\u001b[0m\u001b[0;36m20\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m20\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;36m10\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;31m# simulate the slider form interactive demo\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mNameError\u001b[0m: name 'cfg' is not defined"
]
}
],
"source": [
"im_size = 2 ** (cfg.MODEL.LAYER_COUNT + 1)\n",
"seed = 0\n",
"\n",
"# image_index = 6  # image index\n",
"slider_vals = np.linspace(-20, 20, 10)  # simulate the slider from the interactive demo\n",
"\n",
"for image_index in range(10):\n",
"    for target_attr in range(len(labels)):\n",
"        latents, latents_original, img_src = loadNext(image_index)\n",
"\n",
"        fig, ax = plt.subplots(1, len(slider_vals) + 1, figsize=(25, 6))\n",
"        fig.suptitle(f\"Variation across: {labels[target_attr]}\", y=0.7)\n",
"        ax[0].imshow(img_src)\n",
"        ax[0].set_title(\"Original image\")\n",
"        ax[0].axis('off')\n",
"\n",
"        for i, val in enumerate(slider_vals):\n",
"            attribute_values[target_attr].value = val\n",
"            new_latents = latents + sum(v.value * w for v, w in zip(attribute_values, W))\n",
"            new_im = update_image(new_latents, latents_original)\n",
"\n",
"            ax[i + 1].imshow(new_im)\n",
"            ax[i + 1].set_title(round(val, 1))\n",
"            ax[i + 1].axis('off')"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
| 0034/558/34558168.ipynb | s3://data-agents/kaggle-outputs/sharded/016_00034.jsonl.gz |
"{\"cells\":[{\"metadata\":{\"_uuid\":\"8f2839f25d086af736a60e9eeb907d3b93b6e0e5\",\"_cell_guid\":\"(...TRUNCATED) | 0034/558/34558237.ipynb | s3://data-agents/kaggle-outputs/sharded/016_00034.jsonl.gz |
"{\"cells\":[{\"metadata\":{},\"cell_type\":\"markdown\",\"source\":\"# *H2O AutoML for predicting H(...TRUNCATED) | 0034/558/34558719.ipynb | s3://data-agents/kaggle-outputs/sharded/016_00034.jsonl.gz |
"{\n \"cells\": [\n {\n \"cell_type\": \"code\",\n \"execution_count\": 1,\n \"metadata\": {\(...TRUNCATED) | 0034/559/34559002.ipynb | s3://data-agents/kaggle-outputs/sharded/016_00034.jsonl.gz |
"{\n \"cells\": [\n {\n \"cell_type\": \"markdown\",\n \"execution_count\": null,\n \"metadat(...TRUNCATED) | 0034/559/34559571.ipynb | s3://data-agents/kaggle-outputs/sharded/016_00034.jsonl.gz |
"{\"cells\":[{\"metadata\":{},\"cell_type\":\"markdown\",\"source\":\"# Introduction\"},{\"metadata\(...TRUNCATED) | 0034/559/34559978.ipynb | s3://data-agents/kaggle-outputs/sharded/016_00034.jsonl.gz |
"{\n \"cells\": [\n {\n \"cell_type\": \"code\",\n \"execution_count\": 1,\n \"metadata\": {\(...TRUNCATED) | 0034/560/34560048.ipynb | s3://data-agents/kaggle-outputs/sharded/016_00034.jsonl.gz |
"{\"cells\":[{\"metadata\":{},\"cell_type\":\"markdown\",\"source\":\"# MEI Introduction to Data Sci(...TRUNCATED) | 0034/560/34560123.ipynb | s3://data-agents/kaggle-outputs/sharded/016_00034.jsonl.gz |
"{\"cells\":[{\"metadata\":{\"_uuid\":\"8f2839f25d086af736a60e9eeb907d3b93b6e0e5\",\"_cell_guid\":\"(...TRUNCATED) | 0034/560/34560382.ipynb | s3://data-agents/kaggle-outputs/sharded/016_00034.jsonl.gz |
"{\"cells\":[{\"metadata\":{\"_uuid\":\"8f2839f25d086af736a60e9eeb907d3b93b6e0e5\",\"_cell_guid\":\"(...TRUNCATED) | 0034/560/34560556.ipynb | s3://data-agents/kaggle-outputs/sharded/016_00034.jsonl.gz |
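Each row above is one JSON-lines record with `text` (the raw notebook JSON), `id`, and `file_path`, and the table's `file_path` column shows the shards are gzip-compressed JSONL files under `s3://data-agents/kaggle-outputs/sharded/`. As a minimal sketch of how such a shard could be parsed, assuming a hypothetical one-record shard built in memory (a real workflow would first download the `.jsonl.gz` object):

```python
import gzip
import io
import json

# Hypothetical one-record shard mirroring the (text, id, file_path) row schema above.
sample_record = {
    "text": "{\"cells\": [], \"nbformat\": 4, \"nbformat_minor\": 4}",
    "id": "0034/558/34558168.ipynb",
    "file_path": "s3://data-agents/kaggle-outputs/sharded/016_00034.jsonl.gz",
}

# Write a gzip-compressed JSONL shard into an in-memory buffer (stands in for a downloaded file).
buf = io.BytesIO()
with gzip.open(buf, "wt", encoding="utf-8") as f:
    f.write(json.dumps(sample_record) + "\n")

# Read it back the way a real shard would be streamed: one JSON object per line.
buf.seek(0)
rows = []
with gzip.open(buf, "rt", encoding="utf-8") as f:
    for line in f:
        rows.append(json.loads(line))

print(rows[0]["id"])                    # the notebook id column
notebook = json.loads(rows[0]["text"])  # the `text` column is itself notebook JSON
print(notebook["nbformat"])
```

The same loop works unchanged on a downloaded shard by passing its path to `gzip.open` instead of the buffer.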
Downloads last month: 27
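Since each `text` value is a complete `.ipynb` document (like the ALAE notebook shown in full above), extracting the code from a record is a matter of walking the notebook's `cells` list. A small sketch, using a toy two-cell notebook in place of a real `text` value:

```python
import json

# Toy notebook JSON standing in for a real `text` value from the dataset.
notebook_json = json.dumps({
    "cells": [
        {"cell_type": "markdown", "source": ["# Title\n"]},
        {"cell_type": "code", "execution_count": 1, "outputs": [],
         "source": ["import os\n", "print(os.name)\n"]},
    ],
    "nbformat": 4,
    "nbformat_minor": 4,
})

def extract_code(text: str) -> list[str]:
    """Return the source of each code cell as a single string."""
    nb = json.loads(text)
    return ["".join(cell["source"]) for cell in nb.get("cells", [])
            if cell.get("cell_type") == "code"]

code_cells = extract_code(notebook_json)
print(len(code_cells))  # 1
print(code_cells[0])
```

Note that `source` may be either a list of line strings or a single string depending on how the notebook was saved; the join above handles the list form, which is what the records in this dataset appear to use.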