%%%%%%%%%%%%%%%%%%%%%%%% % Hao's 5 questions % % 1. What is the research Problem, Why is it important % 2. What are the specific theoretical challenges that existing work cannot well address % 3. What is the Approach, and how does it address the specific challenges % 4. What are the Novelties, why is it novel, what impact does that have? % 5. What experiments did they perform to support the novelty? What are the metrics % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \documentclass[11pt, conference]{IEEEtran} \IEEEoverridecommandlockouts % The preceding line is only needed to identify funding in the first footnote. If that is unneeded, please comment it out. \usepackage{cite} \usepackage{amsmath,amssymb,amsfonts} \usepackage{algorithmic} \usepackage{subcaption} \usepackage{graphicx, svg} \usepackage{textcomp} \usepackage{xcolor} \def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}} \begin{document} \title{Evaluation of Distributed Multi-Robot SLAM in a Unity Simulation Environment\\ % {\footnotesize \textsuperscript{*}Note: Sub-titles are not captured in Xplore and % should not be used} % \thanks{Identify applicable funding agency here. If none, delete this.} } \author{\IEEEauthorblockN{Luke Drong} \IEEEauthorblockA{ [email protected]} \and \IEEEauthorblockN{ Robinson Merillat} \IEEEauthorblockA{ [email protected]} } \maketitle %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%% Abstract %%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{abstract} In this project, we present and evaluate an on-line algorithm for robot simultaneous localization and mapping (SLAM). We simulate two robots within a simple maze environment in the Unity engine. Each robot is responsible for maintaining its local map, and the robots are not aware of their initial pose in their scene. When robots come into visual line of sight with each other, they may exchange relative pose data and their local maps. From this information, the robots can merge their local map with their peer's map and more effectively map their environment. \end{abstract} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%% Introduction %%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Introduction} %Intro to this field, relevant info When exploring and mapping unknown environments, a robot must create some computational understanding of the environment around it. The Simultaneous Localization and Mapping (SLAM) problem addresses not only mapping and updating a map of the environment, but also simultaneously localizing the agent within that environment. %What is lacking in the field While there are effective state of the art methods for performing SLAM with a single robot, if one distributes this task among multiple robots, this could potentially improve localization \cite{1067998} and in some cases improve computation time \cite{Bonin}. Members of the group would be able to rely on features found by other robots in the same task as well as locally-derived information. % What do we think we can do to make it better The goal of this project is to implement a distributed, multi-robot SLAM approach as previously researched, in which a team of robots cooperatively map a large-scale environment with high efficiency and accuracy. 
In particular, our team will implement the work proposed by Chen, Lu, and Xiao in Distributed Monocular Multi-Robot SLAM \cite{monocular}. This will require relative pose estimation, per robot, as well as map merging to integrate the local findings with the global map. Further, the solution will need to be resilient to perception outliers while ensuring it is not so conservative as to reject potential loop-closure candidates during environment mapping. This SLAM approach will also have to respect bandwidth limitations and restrict data exchange to processed data as opposed to raw sensor output.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%   Related Work  %%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Related Work}
One approach to multi-robot SLAM is presented by S. I. Roumeliotis and G. A. Bekey in their 2002 paper on Distributed Multirobot Localization \cite{1067998}. This approach demonstrates how the single Kalman filter for pose estimation can be decomposed into components that relate only to the individual robot, and that this computation can be run on all members of the group. Three robots were used, beginning separated from one another with unknown starting poses. After a few updates of the system, cross-correlation was used to further refine the estimates and begin mapping.

Lajoie, Ramtoula, Chang, Carlone, and Beltrame present ``DOOR-SLAM'', a distributed, online, and outlier-resilient SLAM system for robotic teams \cite{Lajoie2020DOORSLAM}. Their approach is rooted in peer-to-peer connections between robots and has three key modules: a loop closure detector, an outlier rejector, and a distributed pose graph optimizer. The loop closure detector is a distributed algorithm which communicates with other in-range robots and outputs inter-robot loop closure measurements. This process has two subparts: place recognition, which uses a CNN to create compact image descriptors, followed by geometric verification, which uses relative pose estimates between two robots observing the same scene. The outlier rejector module collects odometry and inter-robot measurements and uses these to compute the maximal set of pairwise consistent measurements, thus filtering out outliers. The distributed pose graph module performs the distributed SLAM itself. The system as a whole produces high-confidence inter-robot loop closures while rejecting outliers, resulting in accurate trajectory estimates with low bandwidth requirements.

Chen, Lu, et al. present another approach to distributed SLAM in their 2018 paper ``Distributed monocular multi-robot SLAM'' \cite{monocular}. This work proposes a novel vision-based pose estimation with an extremely low data rate between peer robots. The authors developed a map merging method which uses place recognition to determine the poses of the robots within the distributed group and builds a global map by merging each robot's local map. Their approach uses the same loop-detection method as standard single-robot SLAM, but adds three modules to enable collaborative SLAM: map processing, relative pose estimation, and map merging. Each robot conducts the same monocular SLAM while sharing its incremental maps via radio. Once per step of the algorithm, a robot traverses these maps and determines, using a place-recognition CNN, whether it is in a location another robot has already visited. The robot then estimates its pose relative to this candidate location and merges the maps together.
Unlike many other solutions, this approach allows for the use of distributed monocular SLAM in large-scale outdoor settings. The authors also state the specific assumptions and limitations of this research. Our adaptation of this method uses 2D laser scans rather than monocular place recognition, and focuses more on the novel map merging methods.

In order to test or debug robotic applications, simulation software is used; Gazebo has been the most widely used 3D robotics simulation software \cite{Tools}. However, in the ever-expanding world of robotics, statistics from 2014 could be considered old news, and new 3D robot simulators have begun to emerge to challenge Gazebo's title as the most usable simulation software. It is no surprise, then, that the industry is seeking to move to professional software packages that are free to use as long as the result is not resold. A more recent review from 2019 \cite{CompareSim} compared the usability of the simulators most likely to be used: Gazebo, V-Rep, and Unity. While the authors ranked Unity fairly low in terms of the SUS (System Usability Scale), Unity does have some distinct advantages over the competition. It is commercially supported software, free to use for non-profit purposes. It supports easy loading of community-shared assets, and allows user-written plugins and scripts to interact dynamically with the scene. Despite all these customization options, it requires no additional work to create a basic simulation scene when compared to the other options.

Additionally, Santos et al. claim that the communication layer requires several dependencies, such as a virtual machine running Ubuntu. First-hand use of the ROS-to-Unity communication framework ROS\# has shown this claim to be false. With this in mind, \cite{Konrad2019SimulationOM} presents a comprehensive performance comparison of robotics simulations in Gazebo and Unity, and shows that Unity is a viable, if not excellent, environment in which to perform simulations. The authors even suggest that testing an application in different simulators could highlight potential benefits or drawbacks of the targeted research and is valuable to do when possible. Our approach builds on this idea and implements a simulation environment for the Triton robot, which currently only has a simulation built in Gazebo. Unlike \cite{Konrad2019SimulationOM}, which performs SLAM with a single robot, we explore the possibilities that multiple robots provide.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%  Methods  %%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Approach}
Our approach to this problem utilizes the map merging method of \cite{monocular} with the pose adaptation of \cite{1067998}. Each robot attempts SLAM and creates a local map. This local map has a much lower resolution than the raw sensor data, making it suitable for periodically updating a global map. Each robot localizes itself relative to other robots using the methods of \cite{1067998}, creating a transform relating its current pose to the other robot's pose after making visual contact. These pose methods are used for the line-of-sight map merging between two robots.

\subsection{Simulation}
The robot(s) will be simulated within the Unity Game Engine in a simple maze environment. See Figure \ref{fig:maze} for the overview shape.
Each robot publishes topics to its respective ROS node, and does not have access to the other robots' data. This models each robot working independently and not communicating with the others unless communication is expressly initiated.

\begin{figure}[ht]
\includegraphics[width=\linewidth]{../unityScreenshots/maze_1.png}
\caption{Simple Maze Scene in Unity}
\label{fig:maze}
\end{figure}

By default, a Unity simulation cannot communicate with a ROS server. Recent developments in the ROS-to-Unity communication package ROS\# have allowed for a simpler simulation integration without requiring a total re-implementation of the ROS framework. ROS\# is a set of open-source software libraries and tools in C\# for communicating with ROS from .NET applications, and has explicit support for Unity. The version used for this project is ROS\# version 1.6, released in December 2019 \cite{bischoffm}.

\subsection{Triton Robot}
\begin{figure}[ht]
\includegraphics[width=\linewidth]{Final Reaport/unityScreenshots/triton.PNG}
\caption{The Triton Robot and its 3D model counterpart.}
\label{fig:triton}
\end{figure}

For the purposes of furthering research, the robot for this project is a simulated version of the Triton Robot, developed by the Mines Human-Centered Robotics Lab. The Triton is a small, cylindrical robot designed with indoor SLAM and robot teaming in mind. An LED ring on top of the robot can be used to provide visual signals to observers. It has omnidirectional movement provided by its three omnidirectional wheels, and each wheel has encoders for accurate odometry measurements. For exteroceptive sensing, the robot's default configuration uses a monocular RGB-D camera.

The compute module of the Triton is an NVIDIA Jetson Nano. The Nano is a 4-core ARM64 single-board computer with a 128-core GPU. This GPU aboard such a small platform enables edge computation of multiple neural networks for image classification or speech recognition with headroom to spare. Further, the Nano has available IO to support other add-on sensors through Ethernet, USB, HDMI, and GPIO. This allows the robot to carry various sensors and communication modules for the task at hand, such as the 2D LIDAR scanner and WiFi capabilities used in this simulation.

\subsection{Algorithm}
Jiří Hörner, in his 2016 paper ``Map-Merging for a Multi-Robot System'', provides two very useful software packages for multi-robot SLAM: explore\_lite and map\_merge \cite{Horner2016}. Hörner's explore\_lite package allows a robot to pick exploration goals based on a greedy frontier exploration algorithm. This is useful for autonomously driving a robot with the goal of exploring an unknown scene, discovering obstacles and frontiers to close out the mapping region. The map\_merge package allows fast merging of local occupancy maps that share a common reference frame, or an iterative approach for local maps whose relative alignment is unknown.

Our algorithm builds upon these technologies. When robots A and B are within range and robot A has direct line of sight to robot B, the following exchange takes place. First, B's pose relative to A is computed as $P_{BA}$. Next, A sends this relative pose $P_{BA}$, its local map, and its current local pose $P_{A}$ to B. Using this data, the receiving robot B can convert A's occupancy grid into its own local frame and use map\_merge to efficiently combine the two.
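A minimal sketch of the receiving side of this exchange, written against the ROS Python API (\texttt{rospy}), is shown below. It is illustrative only: the topic names, the message composition, and the translation-only origin shift are our own assumptions rather than the actual interfaces of ROS\#, map\_merge, or our Unity scripts; map\_merge then consumes the republished grid as one of its input maps.

\begin{verbatim}
#!/usr/bin/env python
# Hypothetical sketch: robot B receives robot A's local map plus
# the relative pose P_BA and republishes the map in B's frame so
# that map_merge can combine it with B's own map.
import rospy
from nav_msgs.msg import OccupancyGrid
from geometry_msgs.msg import Pose

relative_pose = None  # latest P_BA, set after a line-of-sight meeting

def pose_callback(msg):
    global relative_pose
    relative_pose = msg

def map_callback(grid):
    if relative_pose is None:
        return  # the robots have not met yet
    # Shift the grid origin into B's frame (translation only in this
    # sketch; a full implementation would also rotate the origin).
    grid.info.origin.position.x += relative_pose.position.x
    grid.info.origin.position.y += relative_pose.position.y
    grid.header.frame_id = "robot_b/map"
    merged_pub.publish(grid)

rospy.init_node("peer_map_receiver")
merged_pub = rospy.Publisher("/robot_b/peer_map", OccupancyGrid,
                             queue_size=1)
rospy.Subscriber("/robot_a/relative_pose", Pose, pose_callback)
rospy.Subscriber("/robot_a/map", OccupancyGrid, map_callback)
rospy.spin()
\end{verbatim}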
The relative pose of the detected robot can be found by:
\begin{equation}
P = \frac{1}{2}R_{r2} + \min(RaycastHit).distance
\label{eq}
\end{equation}
where $RaycastHit$ is the array of raycasts that hit another robot, $r2$ is the robot being detected, $R$ is the radius of the Triton robot, and $P$ is the relative pose of $r2$ with respect to $r1$. As the Triton robot is cylindrical, adding the radius of the robot to the distance between the center of one robot and the closest hit point on the detected robot's exterior results in the pose difference between one robot and the other.

This newly updated local map results in explore\_lite providing additional exploration goals for the robot. With these map updates, robots are able to work together without ``repeating'' work by re-exploring an already-known area.

\subsection{Simulation Details}
Our simulation was set up to run in the simple maze scene with both one and two robots. As a baseline, we recorded the time to map the entire scene with the first robot and obtained an average completion time. We then repeated the experiment with multiple robots starting at opposite ends of the map and following the map-merging algorithm described above. To run the experiment, we started all services in the following order:
\begin{enumerate}
\item ROS service and Message Bridge
\item Unity simulation
\item SLAM service (for each Robot node)
\item Rviz Map Visualizer
\item Move\_Base Local Planner
\item explore\_lite service (for each node)
\item Robot Detector service (for each node; sketched at the end of this subsection)
\end{enumerate}

The front half of this SLAM algorithm is shown in Figure \ref{fig:rosgraph_part}. Note how each Robot node works independently during the local mapping phase. Not shown is the exploration phase, which consumes the `map' and `pose' topics for each robot, and outputs a `twist' command to move the robot to a new position and continue mapping.
\begin{figure}[hb]
\def\svgwidth{\columnwidth}
\includesvg{../unityScreenshots/rosgraph}
\caption{ROS Graph - Mapping Components}
\label{fig:rosgraph_part}
\end{figure}

After a few seconds of exploration, we recorded both local maps from the pair of robots. These are shown in Figure \ref{fig:local_maps}.
\begin{figure}[ht]
\centering
\begin{subfigure}{.49\linewidth}
\centering
\includegraphics[width=.9\linewidth]{../unityScreenshots/robot2map.png}
\caption{Robot 1}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{.49\linewidth}
\centering
\includegraphics[width=.9\linewidth]{../unityScreenshots/robot1map.png}
\caption{Robot 2}
\label{fig:sub2}
\end{subfigure}
\caption{Local Robot Maps}
\label{fig:local_maps}
\end{figure}

After the robots detected each other, we recorded the merged map. Figure \ref{fig:local_map_3} shows a later iteration of the map display, which highlights the most recent updates.
\begin{figure}[ht]
\includegraphics[width=\linewidth]{../unityScreenshots/local_maps_3.png}
\caption{Local Maps after Alignment and Merging}
\label{fig:local_map_3}
\end{figure}

Note how, after the robots met one another, the nearest unexplored region in Figure \ref{fig:local_map_3} was the same location for both robots. This was an unexpected finding which will be revisited later.
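For concreteness, the role of the Robot Detector service listed in the startup order above can be sketched as follows. This is a hypothetical ROS-side helper rather than code from our Unity scripts: the raycasting itself happens inside Unity, and we assume the hit distances have already been republished to ROS on an illustrative topic. The half-width offset mirrors the intent of Eq.~\eqref{eq}.

\begin{verbatim}
#!/usr/bin/env python
# Hypothetical sketch: estimate the distance to a detected peer
# robot from Unity raycast hit distances (cf. Eq. (1)).
import rospy
from std_msgs.msg import Float32, Float32MultiArray

TRITON_WIDTH = 0.25  # assumed robot diameter in metres (illustrative)

def hits_callback(msg):
    hits = [d for d in msg.data if d > 0.0]  # rays that hit robot r2
    if not hits:
        return
    # The closest hit lies on r2's exterior; add half of r2's width
    # so the estimate refers to its centre.
    distance = min(hits) + 0.5 * TRITON_WIDTH
    dist_pub.publish(Float32(data=distance))

rospy.init_node("robot_detector")
dist_pub = rospy.Publisher("/robot_a/peer_distance", Float32,
                           queue_size=1)
rospy.Subscriber("/robot_a/raycast_hits", Float32MultiArray,
                 hits_callback)
rospy.spin()
\end{verbatim}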
\subsection{Challenges}
This application posed several small challenges as well as its share of larger technical ones. When working on robotic applications, having a broad understanding of robotic development and techniques is pertinent. Simple things like ROS namespaces, preexisting packages, and the types of ROS topics to subscribe or publish to can cost several hours of digging through documentation, examples, and tutorials. There is a lot of preexisting work that we can pull from, and discovering it is half the battle; there is no need to reinvent the wheel for many of the things we wished to do. For example, we attempted to develop an internal robot navigation system that would publish twist messages, but after spending a lot of time with little to show for it, we switched to using pre-existing navigation methods.

Further, we encountered multiple errors involving network time synchronization. Each message in a ROS system is timestamped, and when devices have out-of-sync or faulty clocks, message handling quickly breaks down. We encountered many errors between the desktop running Unity and the laptops running ROS, where messages were rejected due to clock jitter being too high. This was most prominent in our SLAM module, as the ROS service would become flooded with invalid messages. One of our laptops was found to have a faulty clock; minutes after a forced NTP synchronization, it had already fallen out of sync by 20 ms.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%  Evaluation  %%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Performance Evaluation}
To establish the baseline performance of the SLAM system, we ran the maze scenario with a single robot, starting in a corner, and timed how long it took to map the closed area. We then added an additional robot and performed two tests:
\begin{enumerate}
\item The robots started in opposite corners of the map near the thinner section, so that they would greedily head in opposite directions and thus explore the bounded region faster (ideal case).
\item The robots started in the other two corners of the map, opposite of where they were in the first experiment, so that they would explore similar areas and have to backtrack to pick up missed regions (non-ideal case).
\end{enumerate}
The results of these experiments can be seen in Table~\ref{tab1}.

\begin{figure}[ht]
\includegraphics[width=\linewidth]{Final Reaport/unityScreenshots/ideal_map.PNG}
\caption{Ideal starting positions and trajectory of two robots in the simulation map}
\label{fig:ideal_map}
\end{figure}

The total map construction time with more than a single robot is highly dependent on the starting position of each robot. Nothing keeps a robot from searching the same location as another, so both robots may receive the same goal once they discover each other. This can cause them both to first explore that area and then the other regions that have not yet been fully explored. In our simulation this became evident as the robots reached the center of the room. In an ideal case, each robot would start in such a way that the largest remaining area for it to explore never overlaps with the other robot's. When this holds, having an additional robot almost halves the exploration time (the time to reach the center, plus the extra time to explore the corners of the center room), as seen in Table~\ref{tab1}. We can assume that the ideal case is not likely to happen often, and thus that the exploration time will lie between $0.5T + t$ and $T$, where $T$ is the time a single robot takes to explore and $t$ is the time to explore the last corners of the map.
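As a rough, back-of-the-envelope check of this bound against the measurements reported in Table~\ref{tab1} (an illustrative reading of the same data, not an additional experiment): with $T \approx 280$~s for a single robot, the ideal two-robot run of roughly $155$~s corresponds to $0.5T + t$ with $t \approx 15$~s, and the non-ideal run of roughly $190$~s indeed falls inside the predicted interval,
\begin{equation*}
0.5T + t \approx 155~\mathrm{s} \;\leq\; 190~\mathrm{s} \;\leq\; T \approx 280~\mathrm{s}.
\end{equation*}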
\begin{table}[htbp]
\caption{Exploration Time}
\begin{center}
\begin{tabular}{|c|c|}
\hline
\textbf{Number of Robots}&\textbf{Approximate Time to Explore (s)}\\
\hline
1& 280\\
\hline
2 (ideal)& 155\\
\hline
2 (non-ideal)& 190\\
\hline
\end{tabular}
\label{tab1}
\end{center}
\end{table}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%  Conclusion  %%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Conclusions}
% What was the Goal / Novelty
As the limitations surrounding the connections to and from Unity diminish, the number of robotics applications making their way to Unity will greatly increase. It is therefore important to understand what effect, if any, moving a simulation or application to Unity has. Our goal was to implement a well-known mapping and navigation task using multiple Triton robots and to gain knowledge about the performance of such a task in this new environment. Using ROS\# and Unity, we created a simulation communicating ROS topics over a local websocket and used slight modifications of preexisting SLAM methods to localize and map the simulated environment. A global map was constructed by merging the local maps generated by each individual robot once another robot came within line of sight and scan range.

% Results
Exploration time behaved as expected. In general, with more robots, the time to navigate and map an environment decreases, and will likely continue to do so until the overhead cost overtakes the savings in map generation time. What is more interesting is what occurs when two robots ``see'' each other. Given that they have no knowledge of the other robot's goal, they may opt to explore the same region after meeting up and merging their maps. This can add excess time to the exploration and produce worse results than a single robot exploring the region. By checking whether the goals of the two robots lie within the same region when the local maps are swapped, we may be able to further improve the speed of mapping the room.

% Limitations and future work
The main downside of Unity is that it is designed for developing games, not robotics applications. As such, it is up to the robotics community to develop and maintain the ability to create robotic applications in Unity. There is no official long-term support for robotic applications on Unity. Additionally, packages like ROS\# are open-source packages that are not able to support every device or use case.
Most preexisting ROS functionality relies on the ros\_bridge server and thus on the websocket connection. This creates a bottleneck with a single point of failure. There are smaller, more efficient means of transferring certain types of ROS data across the ros\_bridge. ROS laser scan and point cloud messages can be quite large and take considerable time to communicate via websocket. One method of compacting the data would be to communicate a compressed image instead, where the pixels of the image correspond to the scanned points within the scene (a sketch of this idea follows below). This would improve the message-passing performance to and from Unity. Furthermore, packing all information into a single structure and communicating only that structure would reduce the strain on the bottleneck; whether this helps or hurts can only be determined by measuring the overhead of packaging the data together.

ROS\# may not be the only viable method of communicating ROS messages on a ROS network in Unity \cite{ros2unity}, or ROS\# may revise its data communication to allow for methods other than a websocket connection. Either of these options may increase the complexity of setting up the system, but may lead to significantly improved performance depending on the types of messages being passed.
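As an illustration of the scan-compression idea mentioned above, the following sketch projects a 2D laser scan into a small grayscale image and publishes it as a compressed image. The topic names, image size, and scale factor are illustrative assumptions, and a real implementation would also need to transmit the scan metadata required to invert the projection; we have not measured the actual bandwidth savings.

\begin{verbatim}
#!/usr/bin/env python
# Hypothetical sketch: pack a LaserScan into a PNG-compressed image
# so that it can be sent over the ros_bridge websocket more cheaply.
import numpy as np
import cv2
import rospy
from sensor_msgs.msg import LaserScan, CompressedImage

SIZE = 128    # image side length in pixels (assumed)
SCALE = 10.0  # pixels per metre (assumed)

def scan_callback(scan):
    img = np.zeros((SIZE, SIZE), dtype=np.uint8)
    angle = scan.angle_min
    for r in scan.ranges:
        if np.isfinite(r):
            # Project each hit point into image coordinates
            # around the image centre.
            x = int(SIZE / 2 + SCALE * r * np.cos(angle))
            y = int(SIZE / 2 + SCALE * r * np.sin(angle))
            if 0 <= x < SIZE and 0 <= y < SIZE:
                img[y, x] = 255
        angle += scan.angle_increment
    ok, buf = cv2.imencode(".png", img)
    if ok:
        pub.publish(CompressedImage(format="png", data=buf.tobytes()))

rospy.init_node("scan_compressor")
pub = rospy.Publisher("/robot_a/scan_compressed", CompressedImage,
                      queue_size=1)
rospy.Subscriber("/robot_a/scan", LaserScan, scan_callback)
rospy.spin()
\end{verbatim}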
% \section*{Acknowledgment}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%   FIN    %%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\bibliographystyle{plain}
\bibliography{report.bib}

\end{document}
{ "alphanum_fraction": 0.7547833935, "avg_line_length": 76.766743649, "ext": "tex", "hexsha": "e50db6b7bb31ddf78308b95e308f24dc2037413b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3045f9c9b8b5f012620abbf7213539cf080b0c77", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "BloodRaine/DistroLocale", "max_forks_repo_path": "Report/report/Report.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3045f9c9b8b5f012620abbf7213539cf080b0c77", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "BloodRaine/DistroLocale", "max_issues_repo_path": "Report/report/Report.tex", "max_line_length": 1165, "max_stars_count": null, "max_stars_repo_head_hexsha": "3045f9c9b8b5f012620abbf7213539cf080b0c77", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "BloodRaine/DistroLocale", "max_stars_repo_path": "Report/report/Report.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7455, "size": 33240 }
\section{201604-4} \input{problem/7/201604-4-p.tex}
{ "alphanum_fraction": 0.7307692308, "avg_line_length": 17.3333333333, "ext": "tex", "hexsha": "d228f8d4c492fdd26d9f32b8202560ecdca8c4ed", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "26ef348463c1f948c7c7fb565edf900f7c041560", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "xqy2003/CSP-Project", "max_forks_repo_path": "problem/7/201604-4.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "26ef348463c1f948c7c7fb565edf900f7c041560", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "xqy2003/CSP-Project", "max_issues_repo_path": "problem/7/201604-4.tex", "max_line_length": 32, "max_stars_count": 1, "max_stars_repo_head_hexsha": "26ef348463c1f948c7c7fb565edf900f7c041560", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "xqy2003/CSP-Project", "max_stars_repo_path": "problem/7/201604-4.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-14T01:47:19.000Z", "max_stars_repo_stars_event_min_datetime": "2022-01-14T01:47:19.000Z", "num_tokens": 22, "size": 52 }
\documentclass{beamer} \usepackage{amsmath} \usepackage{url} %\usetheme{Berlin} \title{Ported Parametrized Tutte Functions} \author{Seth Chaiken\\ CS Department\\ Univ. at Albany\\ State Univ. of New York\\ \url{[email protected]}\\ \url{http://www.cs.albany.edu/~sdc} %\verb|[email protected]|\\ %\verb|http://www.cs.albany.edu/~sdc| } %\address{Computer Science Department\\ %The University at Albany (SUNY)\\ %Albany, NY 12222, U.S.A.} %\email{\tt [email protected]} \date{Version of \today} %March 26, 2009\\ %St. Michael's College and Univ. Vermont\\ %Combinatorics Seminar} \begin{document} \newcommand{\Remph}[1]{{\color{red}#1}} % Disjoint Union %\newcommand{\dunion}{\uplus} \newcommand{\dunion} %{\mbox{\hbox{\hskip4pt$\cdot$\hskip-4.62pt$\cup$\hskip2pt}}} {\mbox{\hbox{\hskip6pt$\cdot$\hskip-5.50pt$\cup$\hskip2pt}}} % % Dot inside a cup. % If there is a better, more Latex like way % (more invariant under font size changes) way, % I'd like to know. \newcommand{\en}{\;\raisebox{-0.2\height}{\input{e1b.pdf_t}}} \newcommand{\ez}{\;\;\raisebox{-0.2\height}{\input{e0b.pdf_t}}} \newcommand{\pn}{\;\raisebox{-0.2\height}{\input{p1b.pdf_t}}} \newcommand{\qn}{\;\raisebox{-0.2\height}{\input{q1b.pdf_t}}} \newcommand{\pz}{\;\;\raisebox{-0.2\height}{\input{p0b.pdf_t}}} \newcommand{\qz}{\;\;\raisebox{-0.2\height}{\input{q0b.pdf_t}}} \newcommand{\pzqz}{\;\raisebox{-0.2\height}{\input{p0q0b.pdf_t}}} \newcommand{\pzqn}{\;\raisebox{-0.2\height}{\input{p0q1b.pdf_t}}} \newcommand{\pnqz}{\;\raisebox{-0.2\height}{\input{p1q0b.pdf_t}}} \newcommand{\pnqn}{\;\raisebox{-0.2\height}{\input{p1q1b.pdf_t}}} \newcommand{\pqegg}{\;\raisebox{-0.2\height}{\input{pqeggb.pdf_t}}} \newcommand{\pnsub}{\input{p1.pdf_t}} \newcommand{\qnsub}{\input{q1.pdf_t}} \newcommand{\pzsub}{\input{p0.pdf_t}} \newcommand{\qzsub}{\input{q0.pdf_t}} \newcommand{\pzqzsub}{\input{p0q0.pdf_t}} \newcommand{\pzqnsub}{\input{p0q1.pdf_t}} \newcommand{\pnqzsub}{\input{p1q0.pdf_t}} \newcommand{\pnqnsub}{\input{p1q1.pdf_t}} \newcommand{\pqeggsub}{\input{pqegg.pdf_t}} \begin{frame} \titlepage \end{frame} \section*{Outline} \begin{frame} \tableofcontents \end{frame} \section{Generalizing Tutte Functions} \begin{frame} \frametitle{Our Ported Parametrized {\small separator-strong} Tutte Equations} \begin{itemize} \item $T(G)=x_eT(G/e)+y_eT(G\setminus e)$\\ if $e$ is a non-separator and $e\not\in P$. \item $T(G)=X_eT(G/e) \text{ if } e \text{ is a coloop (isthmus) and } e\not\in P.$ \item $T(G)=Y_eT(G\setminus e) \text{ if } e \text{ is a loop and } e\not\in P.$ \end{itemize} \begin{block}{They express two generalizations of Famous Tutte Polynomial properties} \begin{enumerate} \item We include four generally different parameters for each $e$\\ ($x_e$, $y_e$ in additive equation; $X_e$, $Y_e$ in separator multiplicative equations).\\ Zaslavsky, Bollobas-Riordan, Ellis-Monaghan-Traldi:\\ \Remph{NO SOLUTION} unless conditions hold on the parameters and $T(\emptyset)$. \item Deletion ($G\setminus e$), Contraction ($G/e$) and reduction of separators ($T(G'\oplus\ez)=Y_eT(G')$, $T(G'\oplus\en)=X_eT(G')$) is \Remph{restricted to $e\not\in P$}.\\ Las Vergnas. Diao-Hetyei and sdc combined (1) with (2). \end{enumerate} \end{block} \end{frame} \begin{frame} \frametitle{The Famous Tutte Polynomial} \begin{block}{Recursive charactization} Take $P=\emptyset$, $x_e=y_e=1$, $X_e=X$ and $Y_e=Y$ for all $e$,\\ define $T(\emptyset)=1$: $T(G)(X,Y)$ is then a well-defined polynomial in $X,Y$. 
\end{block}
\begin{block}{Interesting graph or matroid invariants are evaluations of $T(G)$ for values of $X,Y$.}
\begin{itemize}
\item $T(G;1,1)=$ number of spanning trees (in a connected graph) or number of bases (in a matroid).
\item $T(G;\text{???}(-\lambda))=$ number of graph colorings over $\lambda$ colors (\Remph{chromatic polynomial}).
\end{itemize}
\end{block}
\end{frame}

\begin{frame}
\frametitle{Tutte Equations}
\begin{block}{Classical Tutte Equations}
$T(G)$ is a matroid \emph{invariant} that satisfies:
\[
T(G)=T(G/e)+T(G\setminus e)\text{ if $e$ is a non-separator}
\]
\[
T(G_1\oplus G_2)=T(G_1)T(G_2)
\]
and so it is a polynomial in $X$ and $Y$ where
\[
X=T(\text{coloop, i.e. one element isthmus matroid})
\]
\[
Y=T(\text{(one element) loop matroid}),
\]
\Remph{provided we know that} the Tutte equations uniquely determine $T(G)$.
\end{block}
\end{frame}

\begin{frame}
\frametitle{Well-definedness of Tutte Eq. Solutions---Newer, Easy Way}
\begin{theorem}
The following polynomial in $u$, $w$, defined below for all matroids $G$, satisfies the additive and multiplicative Tutte Equations:
\[
R(G)=\sum_{A\subseteq E}u^{\text{rank}(G)-\text{rank}(A)}
w^{|A|-\text{rank}(A)}
\]
\end{theorem}
\begin{corollary}
$T(G;X,Y)$ is well-defined by $R(G)(X-1,Y-1)$.
\end{corollary}
\begin{proof}
Verify that $R(\en)=u+1$, $R(\ez)=w+1$; use $T(\en)=X$, $T(\ez)=Y$; and apply induction on $|E|$.
\end{proof}
\end{frame}

\begin{frame}
\frametitle{Well-definedness of Tutte Eq. Solutions---Orig, Hard Way}
\begin{theorem}[Tutte, Brylawski]
\[
T(X,Y)=\sum_{\text{Bases }B\subseteq E}
X^{\text{Internal Activity}(B)}
Y^{\text{External Activity}(B)}
\]
\Remph{independently} of the order on $E$ used to define the activities.
\end{theorem}
\end{frame}

\begin{frame}{Reminder about Activities}
Given a linear order on $E$,\\
\hspace{0.5in}Given a basis $B$ (spanning tree if $G$ is connected):
\begin{itemize}
\item $e\not\in B$ is \Remph{externally active} if $e$ is the \Remph{smallest} element of the (unique) circuit in $B\cup\{e\}$.
\item $e\in B$ is \Remph{internally active} if $e$ is the \Remph{smallest} element of the (unique) cocircuit in $(E\setminus B)\cup\{e\}$.
\item $\text{Internal (External) Activity}(B)$ is the \Remph{number} of internally (externally) active elements.
\end{itemize}
\Remph{Huh??} We will get intuition for this and extend it to $P\neq\emptyset$ with a Tutte (Computation) Tree (Gordon-MacMahon) view.
\vfill
H. Crapo also proved the well-definedness of the Tutte polynomial from its corank-nullity polynomial expression. But that doesn't fully generalize to parametrized Tutte functions (Zaslavsky).
\end{frame}

\subsection{Tutte (Computation) Trees}
\begin{frame}
\frametitle{Recursive Computations and Trees}
\begin{itemize}
\item Every process of applying a subset of the Tutte equations left-to-right to calculate some $T(G)$ is a \Remph{recursive computation}.
\item Recursive computations (ignoring dataflow-independent orderings) correspond to \Remph{computation trees}.
\end{itemize}
\end{frame}

\begin{frame}
\frametitle{The result from a Tutte Tree}
bla bla
\vfill
It is natural, but not mandatory, to use a somehow-defined \Remph{next} non-separator to determine the two recursions used to compute $T(G')$ for an intermediate minor $G'$.
\vfill
Tutte (computation) trees were defined formally and used by Gordon-MacMahon to study Tutte polynomials of \Remph{greedoids}, where sometimes the same element priority order cannot be used under each branch.
\end{frame} \subsection{Ambiguities among Tutte Equations} \begin{frame} {First Ambiguity among Tutte Equations} \[ x_eY_f+y_eX_f=x_fY_e+y_fX_e \] \begin{center}\input{DyadProblem.pdf_t}\end{center} \begin{block}{A Detail} $T(\text{loop matroid on }e) = Y_eT(\emptyset\text{(empty matroid)})$, etc. so the real ZBR condition is \[ T(\emptyset)(x_eY_f+y_eX_f)=T(\emptyset)(x_fY_e+y_fX_e) \] \end{block} \end{frame} \begin{frame}{Two More: One for separated $P$ and another like it.} \input{TriadProblems.pdf_t} \end{frame} \begin{frame}{Another Two More: With all 5, are we done?} \input{TriangleProblems.pdf_t} \end{frame} \subsection{Solution} \begin{frame} \frametitle{Solution---Setup} \begin{block}{When do recursive equations have a solution?} ``Have a solution'' here means ``\Remph{Every calculation of $T(G)$ using the Tutte equations and initial values on members of $\mathcal{F}$ gives the same answer.} \end{block} \begin{definition}[Sep. Strong Ported Parametrized Tutte Function] Let $P$ be a set and $\mathcal{F}$ be a family of graphs, oriented matroids or matroids that is closed under deletion and contraction of elements \Remph{not in} $P$. Deletion of loops and contraction of coloops is allowed. Let ring $R$ elements $X_e, Y_e, x_e$ and $y_e$ (for each $e\not\in P$) and $R-$module elements $I(Q)$ for every $Q\in \mathcal{F}$ with $Q$ \Remph{over elements of $P$ only} also be given. This structure \Remph{has a Tutte function} if and only if the Ported Parametrized Tutte Equations have (a necessarilly unique) solution over all of $\mathcal{F}$. \end{definition} The $X_e, Y_e, x_e, y_e$ and $I(Q)$ are called parameters and initial values. \end{frame} \begin{frame}{Solution---Theorem} \begin{theorem}[After Zaslavsky, Bollobas-Riordan, Ellis-Monaghan-Traldi] $\mathcal{F}$ and values as above \Remph{has a Tutte function} iff the following equations are satisfied whenever they arise from a member $G\in\mathcal{F}$: \begin{enumerate} \item Suppose $G=Q\oplus G'$ where $S(Q)\subseteq P$. \begin{enumerate} \item With $G'$ a 2-circuit $\{e,f\}$ (and so 2-cocircuit too), $I(Q)(x_eY_f+y_eX_f)=I(Q)(x_fY_e+y_fX_e)$. \item With $G'$ a 3-circuit $\{e,f,g\}$, $I(Q)X_g(x_ey_f+y_eX_f)= I(Q)X_g(x_fy_e+y_fX_e)$. \item With $G'$ a 3-cocircuit $\{e,f,g\}$, $I(Q)Y_g(x_eY_f+y_ex_f)= I(Q)Y_g(x_fY_e+y_fx_e)$. \end{enumerate} These generalize the 3 ZBR equations merely by replacing $I(\emptyset)$ with $I(Q)$. \item With $\{e,f\}=E$ in series and not isolated (from $P$), $I(G/e\setminus f)(x_ey_f+y_eX_f)= I(G/e\setminus f)(x_fy_e+y_fX_e)$. \item With $\{e,f\}=E$ in parallel and not isolated, $I(G/e\setminus f)(x_eY_f+y_ex_f)= I(G/e\setminus f)(x_fY_e+y_fx_e)$. \end{enumerate} \end{theorem} \end{frame} \subsection{Proof Ideas} \begin{frame} \frametitle{Proof Outline} \begin{block}{Ported ZBR equations are necessary} Consider the $1+4$ matroid/graph classes with $E(G)=\{e,f\}$ or $E(G)=\{e,f,g\}$, where $E(G)=S(G)\setminus P$, corresponding to the 5 ZBR conditions. For each, show (as I illustrated before) that assuming certain pairs of computations of $T(G)$ give equal results implies the condition. \end{block} \begin{block}{Ported ZBR equations are sufficient} Induction: Assume $G$ is a minimum $|E(G)|$ counter example, where $E(G)=S(G)\setminus P$. So: $T(G/e)$ and $T(G\setminus e)$ are well-defined from the Tutte Equations for every $e\in E(G)$. Lemma (Zaslavsky) shows \Remph{all of} $E(G)$ is a series class or a parallel class. The relevent Tutte equations\\ (Is $E$ isolated? 
Or is $E$ connected to some of $P$?)\\ show there's a smaller $E$ counterexample. \end{block} \end{frame} \subsection{Proof Details} \begin{frame} \frametitle{Some Details} \begin{itemize} \item $|E|\geq 2$. \item No $e\in E$ is a separator in $G$. \item For \Remph{no} $e,f\in E(G)$ is this a Tutte tree: \input{Tree2Ordinary.pdf_t} \raisebox{0.25in}{\hspace{0.2in}\framebox{\begin{minipage}[b]{2in} The Tutte Tree formalism here \Remph{means} $e$ is a non-separator in $G$ and $f$ is a non-separator in both $G/e$ and $G\setminus e$. \end{minipage}}} \item Lemmas: Each $e\in E(G),f\in E(G)$, $e\neq f$, is series pair or a parallel pair.\\ $e,f$ parallel and $f,g$ series is impossible.\\ So \Remph{all} of $E$ is a series class or is a parallel class. \end{itemize} \end{frame} \begin{frame} \frametitle{One of 5 cases} \begin{center}...\end{center} \end{frame} \section{Tutte (Computation) Trees and Internal/External Activities} \begin{frame} \frametitle{Root-Leaf Paths in Tutte (Computation) Trees} A {$P$-subbasis $T\subseteq E(G)$ (``contracting set'' [Diao-Hetyei])} is an independent set (forest) for which $T\cup P$ is spanning. %So, in $G/T$, $P$ is spanning. \input{TutteTree.pdf_t} \raisebox{0.4in}{\begin{minipage}[b]{2in} Path $\pi$ contributes\\ $[G'|P]x^{IP(T)}y^{EP(T)}X^{IA(T)}Y^{EA(T)}$ to our Tutte Poly. \end{minipage}} \vfill \Remph{All is determined by the Tutte tree, NOT an element order!} \end{frame} \begin{frame} \frametitle{Internal/External Activities and Tutte Trees} $E$ is partitioned: $T=IP(T)\cup IA(T)$, $E\setminus T=EP(T)\cup EA(T)$. $IP(T)=$ $\{$elements contracted along$\;\pi\}$. \vfill $EP(T)=$ $\{$elements deleted along$\;\pi$\}. \vfill In $G'$, $IA(T)$ is all coloops, $EA(T)$ is all loops. \vfill $2^E$ is partitioned into intervals \\ $\{[X_T,Y_T]|P-\text{subbasis} T\}$,\\ $X_T=IP(T)\subseteq (T=IP(T)\cup IA(T))\subseteq (T\cup EA(T))=Y_T$. \end{frame} \begin{frame} \frametitle{Tutte Polynomials and Activities} \begin{enumerate} \item When the conditions in our $P$-ported ZBR theorem are satisfied, \Remph{all} Tutte trees yield the same \Remph{value in the $R$-module}, called \Remph{THE} Tutte polynomial (because trees$\leftrightarrow$computations.) This value has multiple \Remph{polynomial expressions}. \item The $P$-quotient $[G/IP(T)|P]$ in the term contributed by $P$-subbasis $T$ is \Remph{determined by} the \Remph{internally passive} elements of $T$. \end{enumerate} \end{frame} \section{Normal Ported Param. Tutte Functions} \begin{frame} \frametitle{Ported Rank-Nullity Polynomial} \[ R(G)=\sum_{A\subseteq E(G)}[G/A|P]x_A y_{E\setminus A} u^{\text{rank}(G)-\text{rank}[G/A|P]-\text{rank(A)}} w^{|A|-\text{rank}(A)} \] \begin{block}{$R(G)$ is a Tutte function.} \begin{itemize} \item \Remph{$R(G)$ is well-defined} as the above generating function. \item Exercise: $R(G)$ satisfies the $P$-ported parametrized Tutte Equations. \item The Tutte equations plus \[ X_e=x_e+y_eu;\;\;\;Y_e=x_ew+y_e;\;\;\;I(Q)=[Q] \] have the unique solution $R(G)$, a polynomial in the (many) $x_e, y_e and [Q]$ and (two) $u, w$. \item Any Tutte function writable this way is called \Remph{normal}, because Zaslavsky coined this name for them when $P=\emptyset$. \item The most popular Tutte functions are normal; only the ``Bla Bla the matroid structure'' (Zaslavsky). 
\end{itemize} \end{block} \end{frame} \subsection{Tree and Forest Enumerators} \begin{frame} \frametitle{Ported Tree Enumerator} \end{frame} \begin{frame} \frametitle{Ported Forest Enumerator} \end{frame} \subsection{Determinantal and Extensor Tutte Functions} \begin{frame} \frametitle{Determinantal and Extensor Tutte Functions} \end{frame} \section{Tutte Functions of Ported and Labelled Graphs} \begin{frame} \begin{itemize} \item The matroid structure determines the Tutte Trees. \item Other structures, on which matroid deletion and contraction operations act, determine the initial values. \item ZBR analyzed parametrized Tutte functions of non-ported graphs with unlabelled vertices. The indecomposibles are then $E_n$, the $n$-vertex graphs with no edges. \end{itemize} \end{frame} \begin{frame} \frametitle{ZBR Conditions for edge-Parametrized unlabelled Graphs} \end{frame} \end{document}
{ "alphanum_fraction": 0.7058508143, "avg_line_length": 28.3669201521, "ext": "tex", "hexsha": "cf38fb46c42f4cecef13674955b5b228e270cd1c", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "6292a8cffe1441a557212b0fd23f3fd7769975a7", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "chaikens/MathOfElec", "max_forks_repo_path": "Presentations/StMicMar09/ParamTutte.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6292a8cffe1441a557212b0fd23f3fd7769975a7", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "chaikens/MathOfElec", "max_issues_repo_path": "Presentations/StMicMar09/ParamTutte.tex", "max_line_length": 85, "max_stars_count": null, "max_stars_repo_head_hexsha": "6292a8cffe1441a557212b0fd23f3fd7769975a7", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "chaikens/MathOfElec", "max_stars_repo_path": "Presentations/StMicMar09/ParamTutte.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5187, "size": 14921 }
\chapter{Conclusion} \label{chap:conclusion} This thesis presented a novel approach to automatically mark programming assignments. Our approach consists of two components: (1) we first prune a program's AST to isolate key features relevant for assignment marking; (2) we then compare a student solution to a set of reference solutions in order to generate a final mark for the student. Assignments that have received automated deductions will require manual review to provide more meaningful and individualized feedback. This tool assumes the majority of the students will write their programs in specific patterns. We are concerned this may limit student creativity when trying to solve their assignments. However, we note that the programming assignments we tested with had rigid designs and the easiest-to-implement solutions often fall under specific patterns. Therefore we do not believe this to be a major concern. We implemented our processes as the ClangAutoMarker tool and tested it with student submissions and marks from previous offerings of the ECE459 course at the University of Waterloo. Our initial results were not as successful as we had originally hoped. Our tool did not perform better than the baseline approach of simply always assigning full marks. However, due to the uncertainty in our ground-truth, we faithfully recollected the ground-truth data for a smaller subset of previous classes. When we reevaluated our tool with the more accurate sample, we were able to achieve a better false positive rate of 21\% compared to always assigning full marks which had a false positive rate of 35\%. Although this was still not very accurate, we have demonstrated that our tool has promising potential for automated marking and further improvements may make it viable for a live classroom.
{ "alphanum_fraction": 0.8220994475, "avg_line_length": 201.1111111111, "ext": "tex", "hexsha": "494aa957e5e404d5efd3006cba6b4c112120fd5f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a7f036a08cda7e508b0c51fefa6ac150555ec2ee", "max_forks_repo_licenses": [ "BSD-Source-Code" ], "max_forks_repo_name": "Trinovantes/Masters", "max_forks_repo_path": "thesis/thesis/body/conclusion.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a7f036a08cda7e508b0c51fefa6ac150555ec2ee", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-Source-Code" ], "max_issues_repo_name": "Trinovantes/Masters", "max_issues_repo_path": "thesis/thesis/body/conclusion.tex", "max_line_length": 885, "max_stars_count": null, "max_stars_repo_head_hexsha": "a7f036a08cda7e508b0c51fefa6ac150555ec2ee", "max_stars_repo_licenses": [ "BSD-Source-Code" ], "max_stars_repo_name": "Trinovantes/Masters", "max_stars_repo_path": "thesis/thesis/body/conclusion.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 338, "size": 1810 }
% Options for packages loaded elsewhere \PassOptionsToPackage{unicode}{hyperref} \PassOptionsToPackage{hyphens}{url} % \documentclass[ ]{book} \usepackage{lmodern} \usepackage{amsmath} \usepackage{ifxetex,ifluatex} \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{textcomp} % provide euro and other symbols \usepackage{amssymb} \else % if luatex or xetex \usepackage{unicode-math} \defaultfontfeatures{Scale=MatchLowercase} \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1} \fi % Use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} \IfFileExists{microtype.sty}{% use microtype if available \usepackage[]{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \makeatletter \@ifundefined{KOMAClassName}{% if non-KOMA class \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt}} }{% if KOMA class \KOMAoptions{parskip=half}} \makeatother \usepackage{xcolor} \IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available \IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}} \hypersetup{ pdftitle={Basic R}, pdfauthor={Kálmán Abari}, hidelinks, pdfcreator={LaTeX via pandoc}} \urlstyle{same} % disable monospaced font for URLs \usepackage{color} \usepackage{fancyvrb} \newcommand{\VerbBar}{|} \newcommand{\VERB}{\Verb[commandchars=\\\{\}]} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \usepackage{framed} \definecolor{shadecolor}{RGB}{248,248,248} \newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}} \newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\BuiltInTok}[1]{#1} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}} \newcommand{\ExtensionTok}[1]{#1} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ImportTok}[1]{#1} \newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\NormalTok}[1]{#1} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}} \newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\RegionMarkerTok}[1]{#1} \newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} 
\newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \usepackage{longtable,booktabs} \usepackage{calc} % for calculating minipage widths % Correct order of tables after \paragraph or \subparagraph \usepackage{etoolbox} \makeatletter \patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{} \makeatother % Allow footnotes in longtable head/foot \IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}} \makesavenoteenv{longtable} \usepackage{graphicx} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} % Set default figure placement to htbp \makeatletter \def\fps@figure{htbp} \makeatother \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{5} \usepackage{booktabs} \usepackage{amsthm} \makeatletter \def\thm@space@setup{% \thm@preskip=8pt plus 2pt minus 4pt \thm@postskip=\thm@preskip } \makeatother \ifluatex \usepackage{selnolig} % disable illegal ligatures \fi \usepackage[]{natbib} \bibliographystyle{apalike} \title{Basic R} \author{Kálmán Abari} \date{Last updated: 2021-03-31} \begin{document} \maketitle { \setcounter{tocdepth}{1} \tableofcontents } \hypertarget{preface}{% \chapter*{Preface}\label{preface}} \addcontentsline{toc}{chapter}{Preface} As a researcher we need to know how to work with data. One of the best ways to do that is with R. R is a free and an open source language that was specifically developed for reading, manipulating, analysing data and publishing results. In this book, we'll take a look at how we can get started with R. This is an introductory book, so you don't need to have experience with R or with computer programming. In order to start work with R, you need to install the \emph{Base R} and \emph{RStudio}. \hypertarget{setup-instructions}{% \section*{Setup Instructions}\label{setup-instructions}} \addcontentsline{toc}{section}{Setup Instructions} The first step to working with R is to actually get \emph{Basic R} on your computer. This is easy and it's free. The most common way by far to work with R is within a desktop application called \emph{RStudio}. Like \emph{Basic R}, this is free and it's open source and available for multiple platforms. \emph{Basic R} is the underlying statistical computing environment, but using R alone is no fun. \emph{RStudio} is a graphical integrated development environment (IDE) that makes using R much easier and more interactive. You need to install \emph{Basic R} before you install \emph{RStudio}. \hypertarget{windows}{% \subsection*{Windows}\label{windows}} \addcontentsline{toc}{subsection}{Windows} \begin{itemize} \tightlist \item Download R from the \href{http://cran.r-project.org/bin/windows/base/release.htm}{CRAN website}. 
\item Run the \texttt{.exe} file that was just downloaded \item Go to the \href{https://www.rstudio.com/products/rstudio/download/\#download}{RStudio download page} \item Under \emph{Installers} select \textbf{RStudio x.yy.zzz - Windows XP/Vista/7/8} (where x, y, and z represent version numbers) \item Double click the file to install it \item Once it's installed, open RStudio to make sure it works and you don't get any error messages. \end{itemize} \hypertarget{macos}{% \subsection*{macOS}\label{macos}} \addcontentsline{toc}{subsection}{macOS} \begin{itemize} \tightlist \item Download R from the \href{http://cran.r-project.org/bin/macosx}{CRAN website}. \item Select the \texttt{.pkg} file for the latest R version \item Double click on the downloaded file to install R \item Go to the \href{https://www.rstudio.com/products/rstudio/download/\#download}{RStudio download page} \item Under \emph{Installers} select \textbf{RStudio x.yy.zzz - Mac OS X 10.6+ (64-bit)} (where x, y, and z represent version numbers) \item Double click the file to install RStudio \item Once it's installed, open RStudio to make sure it works and you don't get any error messages. \end{itemize} \hypertarget{linux}{% \subsection*{Linux}\label{linux}} \addcontentsline{toc}{subsection}{Linux} \begin{itemize} \tightlist \item Follow the instructions for your distribution from \href{https://cloud.r-project.org/bin/linux}{CRAN}, they provide information to get the most recent version of R for common distributions. For most distributions, you could use your package manager (e.g., for Debian/Ubuntu run \texttt{sudo\ apt-get\ install\ r-base}, and for Fedora \texttt{sudo\ yum\ install\ R}), but we don't recommend this approach as the versions provided by this are usually out of date. In any case, make sure you have at least R 4.0.0. \item Go to the \href{https://www.rstudio.com/products/rstudio/download/\#download}{RStudio download page} \item Under \emph{Installers} select the version that matches your distribution, and install it with your preferred method (e.g., with Debian/Ubuntu \texttt{sudo\ dpkg\ -i\ \ \ rstudio-x.yy.zzz-amd64.deb} at the terminal). \item Once it's installed, open RStudio to make sure it works and you don't get any error messages. \end{itemize} \hypertarget{how-to-use-r}{% \chapter{How to use R}\label{how-to-use-r}} There are so many ways to analyse data in R. In my opinion the best way is the RStudio. Most R users use R via \emph{RStudio}. We need both \emph{Base R} and \emph{RStudio}, as we did earlier, but if we only start the RStudio, so we can reach all functions of R. \emph{RStudio} is an~\emph{integrated development environment (IDE)}~that provides an interface by adding many convenient features and tools. \begin{figure} {\centering \includegraphics[width=0.6\linewidth]{img/baser_rstudio} } \caption{How to use R – the best way}\label{fig:unnamed-chunk-2} \end{figure} Of course, we can use the \emph{Base R} directly. Usually, this is the only option that is supported by mainframe environment. But, if you have the opportunity, always use \emph{RStudio} instead of Basic R directly. \begin{figure} {\centering \includegraphics[width=0.6\linewidth]{img/baser_rstudio_min} } \caption{How to use R – the minimum way}\label{fig:unnamed-chunk-3} \end{figure} Actually, there are several ways to use R, not only \emph{Base R} and \emph{RStudio}. The table below summarizes the interfaces in the columns and the tools in the rows. 
There are three different types of interface: \emph{Console}, \emph{Script} and \emph{Point and click}. Interfaces allow the user to interact with R.

\begin{figure}

{\centering \includegraphics[width=0.9\linewidth]{img/baser_rstudio_all} 

}

\caption{How to use R – overview of tools and interfaces}\label{fig:unnamed-chunk-4}
\end{figure}

The \textbf{\emph{Console}} provides a command-line \emph{interface} that allows the user to interact with the computer by typing commands. The computer displays a prompt (\texttt{\textgreater{}}), the user types a command and presses Enter or Return, and gets the result. There are three tools that provide a console: the \emph{Base R} console (the only option in a mainframe environment), \emph{RGui} in \emph{Base R} on Windows, and \emph{RStudio}.

The second interface is the \textbf{script interface}. It gives you an editor window. You can type multiple lines of code into the source editor without having each line evaluated by R. Then, when you're ready, you can send the instructions to R (in other words, source the script) and you get the result. You can reach this functionality in \emph{RGui}, \emph{R Commander} and \emph{RStudio}. \textbf{Remember, the best way to use R is creating, editing and running scripts in \emph{RStudio}.}

For beginners, the easiest option would be to use a \textbf{point-and-click interface}. It has a menu: you can choose menu items, fill in dialog boxes, and click on radio buttons and checkboxes. \textbf{But the knowledge of these systems has limits.} You can only run the methods that you can reach from the menu. The descriptive measures, tables, plots and hypothesis tests that you can point and click at are only a part of what R knows; the whole of R's knowledge can be reached only from the console or the script interface. For example, I only use \emph{jamovi} if I have a simple question and I want a quick answer. So I encourage you to install and try jamovi or JASP; these are free and user-friendly ways to do statistics. By the way, the tools I listed in this table are all free, except BlueSky. It is worth installing and trying them.

\hypertarget{base-r}{%
\section{Base R}\label{base-r}}

What components were installed with \emph{Base R}? \emph{Base R} consists of three elements: the console for typing commands and getting results, the interpreter for evaluating the commands, and packages for extending R's knowledge. The interpreter is the heart of R; all commands are executed by the interpreter. For us, the users, the console is the key. Apart from point-and-click interfaces, we will interact with the console directly or indirectly.

\begin{figure}

{\centering \includegraphics[width=0.7\linewidth]{img/baser} 

}

\caption{Components of Base R}\label{fig:unnamed-chunk-5}
\end{figure}

\hypertarget{console-of-base-r}{%
\subsection{Console of Base R}\label{console-of-base-r}}

As we mentioned, R users meet the console all the time. One main part of \emph{Base R} is the console. To start the console, type R (the capital letter R) on any system, or find and click on the R icon. If you are on Windows, you can launch \emph{R.exe} (e.g. \texttt{c:\textbackslash{}Program\ Files\textbackslash{}R\textbackslash{}R-4.0.4\textbackslash{}bin\textbackslash{}x64\textbackslash{}R.exe}).
\begin{figure}

{\centering \includegraphics[width=0.7\linewidth]{img/console} 

}

\caption{Console in Base R}\label{fig:unnamed-chunk-6}
\end{figure}

On the screen above, you can see some information about the R instance. At the bottom of the console window there is a prompt. It consists of a `greater than' sign and a space, and of course a cursor where you can type any character. Let's type some characters, delete characters with the Delete and Backspace keys, move the cursor with the Left arrow and Right arrow keys, insert a character at the current position, and navigate the cursor to the beginning and the end of the line with the Home and End keys. When we are ready, we can execute this line, the command, by hitting Enter. If the command is valid, R, or more precisely its interpreter, will execute it and return the result in the console. If the command is not valid, the interpreter returns an error message.

Let's start with numbers. Type 45 and hit Enter.

\begin{Shaded}
\begin{Highlighting}[]
\DecValTok{45}
\CommentTok{\#\textgreater{} [1] 45}
\end{Highlighting}
\end{Shaded}

This is a valid command because there is no error message. But the result, the output, is not very interesting. Let's choose a more complicated expression:

\begin{Shaded}
\begin{Highlighting}[]
\DecValTok{45} \SpecialCharTok{+} \DecValTok{5}
\CommentTok{\#\textgreater{} [1] 50}
\end{Highlighting}
\end{Shaded}

Forty-five plus five is fifty, so fifty is displayed in the output. The 1 in brackets at the beginning of the output is the index of the first value displayed in that line.

\hypertarget{console-features}{%
\subsubsection{Console features}\label{console-features}}

Every console has three features that help us to execute commands.

\begin{description}
\item[History of commands]
We can use the Up arrow and Down arrow keys to browse the history of commands that we typed earlier. When you press the Up arrow, you get the commands you typed earlier at the command line. Of course you can modify them as well. You can hit Enter at any time to run the command that is currently displayed.
\item[Autocompletion]
Pressing the TAB key completes the keyword or directory path we are currently typing. Type in \texttt{getw}, hit TAB, and you can see the whole function call; hitting Enter, you get the working directory.
\item[Continuation prompt]
Let's have a look at a small example. Type \texttt{45\ -} (forty-five minus) and hit Enter. This is an incomplete command, but we do not see any error message. Instead, a new prompt appears: a continuation prompt, indicated by a \texttt{+} (plus) followed by a space character. We can continue typing; the console allows us to complete the command. It's easy: type, for example, \texttt{5} (five), hit Enter, and we get the result. I'll show you another example. Type \texttt{getwd(} without the closing parenthesis and hit Enter. We get the continuation prompt, and by typing the closing parenthesis we get the working directory. The continuation prompt seems to be a helpful thing, but it is not. It is a really confusing feature. We can easily find ourselves in a never-ending story: we can type \texttt{45\ -}, Enter, \texttt{11\ *}, Enter, and so on; we keep getting the continuation prompt, and we don't really know how to complete the command in the right way. So it is very important to leave the continuation prompt as soon as possible. The key is the Esc button. Let's try this. Type an opening parenthesis and 6 (\texttt{(6}), hit Enter, and press Esc.
We get back the `greater than' prompt (\texttt{\textgreater{}}), which is the default prompt. When you see the plus prompt, the continuation prompt, you must press the Esc key.
\end{description}

\hypertarget{working-directory}{%
\subsubsection{Working directory}\label{working-directory}}

In R, we answer the questions we face using functions. So the expressions that we type into the command line usually contain \emph{function calls}. Now we can request the working directory: let's type \texttt{getwd()} to get the interpreter to display our working directory.

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{getwd}\NormalTok{()}
\end{Highlighting}
\end{Shaded}

The working directory is the default directory that our command line reaches to access files if we don't specify a path ourselves. We can specify paths in two ways: either absolutely, starting from the root directory, or relatively, starting from our working directory.

Besides reaching our history in the command line, we can also rely on the help of a built-in autocompletion feature: pressing the TAB key completes the keyword or directory path we are currently typing. For example, let's type only \texttt{set} and press TAB and TAB again to list all the commands that start with \texttt{set}. Press \texttt{w} and press TAB again, and as you can see the command line completes our command with a \texttt{d} to get an existing function name.

Every function call requires parentheses after the function name, which contain additional data for the function; these are called arguments. The \texttt{setwd()} function has only one required argument, which is the path to the directory that we want to set as our new working directory. Let's try calling the \texttt{setwd()} function: start with the function name, then the opening parenthesis, and inside quote marks give the directory's path. On Windows, after the first quote mark, type \texttt{c:/}, which refers to the drive you want to use, and press TAB twice to see all subdirectories and files on the drive. From here we can build our path directory by directory until we reach the directory that we want to set as our working directory. As you can see, when jumping from one directory into another we mark the jump with the slash character. Instead of writing out the path ourselves, we can write the first few characters of a directory's name and rely on TAB to autocomplete it for us. Of course, with more common directory names we have to be more specific to get the desired autocompletion. Finally, when we have reached our desired new working directory, we can execute the command by pressing Enter, but make sure you have both your opening and closing quote marks and parentheses. If you did, your command executed successfully, thereby changing your working directory; but if you made a mistake, either in the form of the command (a syntactic error) or by giving a path to a directory which doesn't exist (a semantic error), you will get an error message.

To sum up, to set a working directory in R type:

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{setwd}\NormalTok{(}\StringTok{"Path/To/Your/Workingdirectory"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

If you need to check which working directory R thinks it is in:

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{getwd}\NormalTok{()}
\end{Highlighting}
\end{Shaded}

\hypertarget{quit-the-console}{%
\subsubsection{Quit the console}\label{quit-the-console}}

In the end, let's quit the console by typing and executing the \texttt{q()} command. Don't forget the parentheses.
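The whole command is just the function name with an empty pair of parentheses:

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{q}\NormalTok{()  }\CommentTok{\# quit the R console; R asks whether to save the workspace}
\end{Highlighting}
\end{Shaded}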
We don't need to save the workspace, so choose \texttt{No}.

\hypertarget{rgui-on-windows}{%
\subsection{RGui on Windows}\label{rgui-on-windows}}

On the Windows operating system, \emph{Base R} has another, more advanced console. \emph{RGui} has a graphical user interface. To start it, find and click on the R icon. You should always use the latest version, and the 64-bit version.

\begin{figure}

{\centering \includegraphics[width=0.7\linewidth]{img/rgui} 

}

\caption{Console in RGui}\label{fig:unnamed-chunk-12}
\end{figure}

Let's start up the aforementioned 64-bit version. Above you can see the console, which in functionality is the same as the one we used in \emph{Base R} before. We can type any character and press Enter. If you'd like to change the appearance or the size of the console, you can do so in the \texttt{Edit\ \textgreater{}\ GUI\ preferences} menu in the upper menu bar. Let's choose this menu item, increase the font size to 28 and set the style to bold. Close this dialog box with the \texttt{OK} button, and you can see a more readable console window.

As you can see, we also have a menu and a toolbar. Let's try the same basic arithmetic command here. Type \texttt{45\ +\ 5} and press Enter to execute it. As we can see, we get the same result here. Let's try the history with the Up arrow and Down arrow. Navigate the cursor with the Left arrow and Right arrow keys, use the Home and End buttons, insert or delete any character and press Enter.

\hypertarget{scripting-in-rgui}{%
\subsubsection{Scripting in RGui}\label{scripting-in-rgui}}

\emph{RGui} has all the functions the console of \emph{Base R} had, and also a new one. We can create script files, which we can use to store commands in text files. Script files make it easy to store and organize commands. So let's click \texttt{File\ \textgreater{}\ New\ script}, which will open a new script window where you can edit your script file. We can arrange the Console and Script windows by clicking on \texttt{Windows\ \textgreater{}\ Tile\ horizontally}. You can find the typical face of \emph{RGui} below.

\begin{figure}

{\centering \includegraphics[width=0.7\linewidth]{img/rgui_script} 

}

\caption{Script editor and console in RGui}\label{fig:unnamed-chunk-13}
\end{figure}

Let's write two commands into this script file: \texttt{45\ +\ 5} and \texttt{getwd()}. Here we only type the commands; to actually execute them we will need to transfer them to the console. This window is only a text editor through which we edit our script file. Here we can only use basic notepad-like functionality, so no autocompletion or history. We can move within a line with the Home and End buttons and through lines with the Page Up and Page Down buttons. With Ctrl+Home we can get to the beginning of the script file and with Ctrl+End we can get to the bottom of it. Of course, this comes in handy with much larger script files. We can mark parts of the text either by holding the Shift key and using the Left-Right-Up-Down arrow keys or by using the mouse. And we can use the clipboard as well: Ctrl+C, Ctrl+X and Ctrl+V.

It's important to know how to actually execute the commands we just wrote into our script file. With Ctrl+R we can execute the line that our cursor is currently on. The command is pulled into the console and then executed there. Let's try it. Click anywhere in the first line. Then press Ctrl+R. Three things happened at the same time.
The first line was pulled into the console, the line was executed, and the cursor jumped down a line. We can press Ctrl+R again and repeat the whole process for the second line, and so on. If you have any text selected in your editor before pressing Ctrl+R, then the selected text will be executed. Let's also try this. Select only \texttt{5\ +\ 5} from the first line, press Ctrl+R, and you get 10. Then select the first two lines and execute them with Ctrl+R. The interpreter ran both lines; you can see the result in the console. As you might have noticed, we have a coloured console: the inputs (the commands) are coloured red, and the outputs (the results) are coloured blue.

Script files can also contain comments, which are useful for marking what a command's intention is. This may seem unimportant now, but it is really useful when working with large script files. To mark something as a comment, use the hash mark (\texttt{\#}), which marks everything in the line after it as a comment. It's good practice to start your script file with three comment lines which contain the author of the file, the date, and a name which gives some information about what the script does.

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Kálmán Abari}
\CommentTok{\# 2021{-}03{-}17}
\CommentTok{\# First script file}
\end{Highlighting}
\end{Shaded}

Navigate the cursor to the first position of the file, for example by pressing Ctrl+Home. Then type a hash mark and your name, Enter; a hash mark and today's date, Enter; a hash mark and \texttt{First\ script\ file}, Enter.

When we are ready, we save the script file with the \texttt{File\ \textgreater{}\ Save} menu. It's good practice to save your work every 15 minutes. It's also important that when we save our files we give them names that don't contain any special characters or whitespace; underscores are acceptable, though. Choose a proper directory and type in \texttt{first\_script.R} as the filename. Make sure that the file's extension is \texttt{.R}, which means that it contains an R script.

With that we have covered the basics of \emph{RGui}, so we can close it for now; we shouldn't worry about saving our workspace since we won't be needing it.

\hypertarget{rstudio}{%
\section{RStudio}\label{rstudio}}

The last tool we will get to know, and the one we will be using for the rest of the book, is \emph{RStudio}. We can also launch it from the Start menu. While we had multiple \emph{Base R} interfaces, we only have one \emph{RStudio}, so it should be easy to find. It's the most advanced of the interfaces mentioned so far, and it is the one we will mainly use throughout the book. Even though we will only use \emph{RStudio}, it's important to mention that RStudio relies on \emph{Base R} to work.

\hypertarget{customization}{%
\subsection{Customization}\label{customization}}

We can easily check which instance of \emph{Base R} our \emph{RStudio} is using. We can see this under \texttt{Tools\ \textgreater{}\ Global\ Options}. Let's check if it really is using the 64-bit version. Here we can also make other customizations. While we're here, we should also uncheck the \texttt{Restore\ .RData} option and set the \texttt{Save\ workspace\ to\ .RData\ on\ exit} option to \texttt{Never}. Another important one is the \texttt{Code} menu point in the list on the left.
Here, under the \texttt{Saving} tab, we have to set the \texttt{Default\ text\ encoding} to \texttt{UTF-8}, which is a widely used and accepted character-encoding standard.

You can customize the look of your editor under the \texttt{Appearance} option. Here you can change the theme of your editor, which sets the colour palette it uses. I recommend changing it to \texttt{Tomorrow\ Night\ Bright}. Let's close the settings by pressing \texttt{OK} to save the changes we made.

\hypertarget{using-rstudio}{%
\subsection{Using RStudio}\label{using-rstudio}}

A few words about RStudio. The main area consists of 3 or 4 different panes or windows, each responsible for a different task. You have three panes by default. The fourth pane you can add is a script file editor, which you get by creating a new script file with \texttt{File\ \textgreater{}\ New\ File\ \textgreater{}\ R\ Script}.

\begin{figure}
\includegraphics[width=1\linewidth]{img/rstudio-screenshot}
\caption{The RStudio Interface}\label{fig:RStudio-GUI}
\end{figure}

You can easily resize the panes by clicking and dragging the vertical or horizontal line between them.

RStudio is divided into 4 ``Panes'':

\begin{itemize}
\tightlist
\item
  the \textbf{Source} for your scripts and documents (top-left, in the default layout),
\item
  the R \textbf{Console} (bottom-left),
\item
  your \textbf{Environment/History} (top-right), and
\item
  your \textbf{Files/Plots/Packages/Help/Viewer} (bottom-right).
\end{itemize}

The placement of these panes and their content can be customized (see main Menu \texttt{Tools\ \textgreater{}\ Global\ Options\ \textgreater{}\ Pane\ Layout}). One of the advantages of using \texttt{RStudio} is that all the information you need to write code is available in a single window.

\hypertarget{how-to-start-an-r-project}{%
\subsection{How to start an R project}\label{how-to-start-an-r-project}}

It is good practice to keep a set of related data, analyses, and text self-contained in a single folder. When working with R and RStudio you typically want that single top folder to be the folder you are working in. In order to tell R this, you will want to set that folder as your \textbf{working directory}. Whenever you refer to other scripts or data or directories contained within the working directory you can then use \emph{relative paths} to files that indicate where inside the project a file is located. (That is opposed to absolute paths, which point to where a file is on a specific computer.) Having everything contained in a single directory makes it a lot easier to move your project around on your computer and share it with others without worrying about whether or not the underlying scripts will still work.

Whenever you create a project with \emph{RStudio} it creates a working directory for you and remembers its location (allowing you to quickly navigate to it) and optionally preserves custom settings and open files to make it easier to resume work after a break.
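To make the difference between relative and absolute paths concrete, here is a small sketch; the file name \texttt{data/survey.csv} and the user folder are made up just for illustration:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# absolute path: tied to one particular computer}
\NormalTok{dat }\OtherTok{\textless{}{-}} \FunctionTok{read.csv}\NormalTok{(}\StringTok{"C:/Users/yourname/Desktop/r{-}intro/data/survey.csv"}\NormalTok{)}
\CommentTok{\# relative path: works wherever the project folder is the working directory}
\NormalTok{dat }\OtherTok{\textless{}{-}} \FunctionTok{read.csv}\NormalTok{(}\StringTok{"data/survey.csv"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}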
Below, we will go through the steps for creating an ``R Project'' for this workshop.

\begin{itemize}
\tightlist
\item
  Start RStudio
\item
  Under the \texttt{File} menu, click on \texttt{New\ project}, choose \texttt{New\ directory}, then \texttt{Empty\ project}
\item
  As directory (or folder) name enter \texttt{r-intro} and create the project as a subdirectory of your desktop folder: \texttt{\textasciitilde{}/Desktop}
\item
  Click on \texttt{Create\ project}
\item
  Under the \texttt{Files} tab on the right of the screen, click on \texttt{New\ Folder} and create a folder named \texttt{data} within your newly created working directory (e.g., \texttt{\textasciitilde{}/r-intro/data})
\item
  On the main menu go to \texttt{Files} \textgreater{} \texttt{New\ File} \textgreater{} \texttt{R\ Script} (or use the shortcut Ctrl+Shift+\texttt{N}) to open a new file
\item
  Save the empty script as \texttt{r-intro-script.R} in your working directory.
\end{itemize}

Your working directory should now look like in Figure \ref{fig:working-dir}.

\begin{figure}
\includegraphics[width=0.6\linewidth]{img/Rproject-setup}
\caption{What it should look like at the beginning of this lesson}\label{fig:working-dir}
\end{figure}

\hypertarget{organizing-your-working-directory}{%
\subsection{Organizing your working directory}\label{organizing-your-working-directory}}

Using a consistent folder structure across your projects will help keep things organized, and will also make it easy to find/file things in the future. This can be especially helpful when you have multiple projects. In general, you may create directories (folders) for \textbf{data}, \textbf{documents}, and \textbf{outputs}.

\begin{itemize}
\tightlist
\item
  \textbf{\texttt{data/}} Use this folder to store your raw input data.
\item
  \textbf{\texttt{documents/}} If you are working on a paper this would be a place to keep outlines, drafts, and other text.
\item
  \textbf{\texttt{output/}} Use this folder to store your intermediate or final datasets and the images you may create for a particular analysis. For the sake of transparency, you should \emph{always} keep a copy of your raw data accessible and do as much of your data cleanup and preprocessing programmatically. You could have a subfolder in your \texttt{output} directory named \texttt{output/data} that contains the processed files. I also like to save my images in an \texttt{output/image} directory.
\end{itemize}

You may want additional directories or subdirectories depending on your project needs, but this is a good template to form the backbone of your working directory.

\hypertarget{rstudio-console-and-command-prompt}{%
\subsection{RStudio Console and Command Prompt}\label{rstudio-console-and-command-prompt}}

The console pane in RStudio is the place where commands written in the R language can be typed and executed immediately by the computer. It is also where the results will be shown for commands that have been executed. You can type commands directly into the console and press Enter to execute them, but they will be forgotten when you close the session.

If R is ready to accept commands, the R console by default shows a \texttt{\textgreater{}} prompt. If it receives a command (by typing, copy-pasting, or sending it from the script editor using Ctrl+Enter), R will try to execute it, and when ready, will show the results and come back with a new \texttt{\textgreater{}} prompt to wait for new commands.

If R is still waiting for you to enter more input because the command isn't complete yet, the console will show a \texttt{+} prompt.
It means that you haven't finished entering a complete command. This is because you have not `closed' a parenthesis or quotation, i.e.~you don't have the same number of left parentheses as right parentheses, or the same number of opening and closing quotation marks. When this happens, and you thought you had finished typing your command, click inside the console window and press Esc; this will cancel the incomplete command and return you to the \texttt{\textgreater{}} prompt.

\hypertarget{rstudio-script-editor}{%
\subsection{RStudio Script Editor}\label{rstudio-script-editor}}

Because we want to keep a record of our code and workflow, it is better to type the commands we want in the script editor and save the script. This way, there is a complete record of what we did, and anyone (including our future selves!) can easily replicate the results on their computer.

Perhaps one of the most important aspects of making your code comprehensible for others and your future self is adding comments about why you did something. You can write comments directly in your script, and tell R not to execute those words simply by putting a hash mark (\texttt{\#}) before you start typing the comment.

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# this is a comment on its own line}
\FunctionTok{getwd}\NormalTok{() }\CommentTok{\# comments can also go here}
\end{Highlighting}
\end{Shaded}

One of the first things you will notice in the R script editor is that your code is coloured (syntax colouring), which enhances readability.

Secondly, RStudio allows you to execute commands directly from the script editor by using the Ctrl + Enter shortcut (on Macs, Cmd + Enter will work, too). The command on the current line in the script (indicated by the cursor) or all of the commands in the currently selected text will be sent to the console and executed when you press Ctrl + Enter.

You can find other keyboard shortcuts under \texttt{Tools} \textgreater{} \texttt{Keyboard\ Shortcuts\ Help} (or \texttt{Alt} + \texttt{Shift} + \texttt{K}).

At some point in your analysis you may want to check the content of a variable or the structure of an object without necessarily keeping a record of it in your script. You can type these commands and execute them directly in the console. RStudio provides the Ctrl + 1 and Ctrl + 2 shortcuts, which allow you to jump between the script and the console panes.

All in all, RStudio is designed to make your coding easier and less error-prone.

\hypertarget{rmarkdown}{%
\section{RMarkdown}\label{rmarkdown}}

\hypertarget{introduction}{%
\subsection{Introduction}\label{introduction}}

RMarkdown allows you to write reports that include both R code and the output it generates. Moreover, these reports are dynamic in the sense that changing the data and reprocessing the file will result in a new report with updated output. RMarkdown also lets you include LaTeX math, hyperlinks and images.

These dynamic reports can be saved as

\begin{itemize}
\tightlist
\item
  PDF or PostScript documents
\item
  Web pages
\item
  Microsoft Word documents
\item
  Open Document files
\item
  and more, like Beamer slides, etc.
\end{itemize}

When you render an RMarkdown file, it will appear, by default, as an HTML document in the Viewer window of RStudio. If you want to create PDF documents, install a LaTeX compiler. Install MacTeX for Macs (\url{http://tug.org/mactex}), MiKTeX (www.miktex.org) for Windows, and TeX Live for Linux (www.tug.org/texlive). Alternatively, you can install TinyTeX from \url{https://yihui.name/tinytex/}.
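For example, one way to get TinyTeX is to install it from within R; the first line installs the \textbf{tinytex} helper package from CRAN, the second downloads the TinyTeX distribution itself:

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{install.packages}\NormalTok{(}\StringTok{"tinytex"}\NormalTok{)  }\CommentTok{\# install the tinytex R package from CRAN}
\NormalTok{tinytex}\SpecialCharTok{::}\FunctionTok{install\_tinytex}\NormalTok{()    }\CommentTok{\# download and install the TinyTeX distribution}
\end{Highlighting}
\end{Shaded}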
\hypertarget{basic-structure-of-r-markdown}{%
\subsection{Basic Structure of R Markdown}\label{basic-structure-of-r-markdown}}

Let's start with a simple RMarkdown file and see what it looks like and what output it produces when executed.

Click \texttt{File\ \textgreater{}\ New\ File\ \textgreater{}\ R\ Markdown}, type in the title \texttt{Homework\ problems} and the author, and click \texttt{OK}. Save the file with Ctrl+S and choose a name, for example \texttt{homework\_1.Rmd}. RMarkdown files end with the \texttt{.Rmd} extension.

An \texttt{.Rmd} file contains three types of content:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  A YAML header:
\end{enumerate}

\begin{Shaded}
\begin{Highlighting}[]
\PreprocessorTok{{-}{-}{-}}
\FunctionTok{title}\KeywordTok{:}\AttributeTok{ }\StringTok{"Homework problems"}
\FunctionTok{author}\KeywordTok{:}\AttributeTok{ }\StringTok{"Abari Kálmán"}
\FunctionTok{date}\KeywordTok{:}\AttributeTok{ }\StringTok{\textquotesingle{}2021 03 31 \textquotesingle{}}
\FunctionTok{output}\KeywordTok{:}\AttributeTok{ html\_document}
\PreprocessorTok{{-}{-}{-}}
\end{Highlighting}
\end{Shaded}

YAML stands for ``yet another markup language'' (\url{https://en.wikipedia.org/wiki/YAML}).

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
  R code chunks. For example:
\end{enumerate}

\begin{Shaded}
\begin{Highlighting}[]
\InformationTok{\textasciigrave{}\textasciigrave{}\textasciigrave{}\{r\}}
\InformationTok{myDataFrame \textless{}{-} data.frame(names = LETTERS[1:3], variable\_1 = runif(3))}
\InformationTok{myDataFrame}
\InformationTok{\textasciigrave{}\textasciigrave{}\textasciigrave{}}
\end{Highlighting}
\end{Shaded}

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\tightlist
\item
  Text with formatting, like bold text, mathematical expressions, or headings (\texttt{\#\ Heading}), etc.
\end{enumerate}

First, let's see how we can execute an \texttt{.Rmd} file to produce the output as PDF, HTML, etc.

Now click \texttt{Knit} to produce a complete report containing all text, code, and results. Alternatively, pressing Ctrl+Shift+K renders the whole document, but in that case all output formats specified in the YAML header will be produced. Knit, on the other hand, allows you to specify the output format you want to produce. For example, \texttt{Knit\ \textgreater{}\ Knit\ to\ HTML} produces only HTML output, which is usually faster than producing PDF output.

You can also render the file programmatically with the following command:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{rmarkdown}\SpecialCharTok{::}\FunctionTok{render}\NormalTok{(}\StringTok{"homework\_1.Rmd"}\NormalTok{) }
\end{Highlighting}
\end{Shaded}

This will display the report in the viewer pane and create a self-contained HTML file.

Instead of running the whole document, you can run each individual code chunk by clicking the Run icon at the top right of the chunk or by pressing Ctrl+Shift+Enter. RStudio executes the code and displays the results inline with the code.

\hypertarget{text-formatting-with-r-markdown}{%
\subsection{Text Formatting with R Markdown}\label{text-formatting-with-r-markdown}}

This section demonstrates the syntax of common components of a document written in R Markdown.

Inline text will be \emph{italic} if surrounded by underscores or asterisks, e.g., \texttt{\_text\_} or \texttt{*text*}. \textbf{Bold} text is produced using a pair of double asterisks (\texttt{**text**}).
A pair of tildes (\texttt{\textasciitilde{}}) turn text to a subscript (e.g., \texttt{H\textasciitilde{}3\textasciitilde{}PO\textasciitilde{}4\textasciitilde{}} renders H\textsubscript{3}PO\textsubscript{4}). A pair of carets (\texttt{\^{}}) produce a superscript (e.g., \texttt{Cu\^{}2+\^{}} renders Cu\textsuperscript{2+}). Hyperlinks are created using the syntax \texttt{{[}text{]}(link)}, e.g., \texttt{{[}RStudio{]}(https://www.rstudio.com)}. The syntax for images is similar: just add an exclamation mark, e.g., \texttt{!{[}alt\ text\ or\ image\ title{]}(path/to/image)}. Footnotes are put inside the square brackets after a caret \texttt{\^{}{[}{]}}, e.g., \texttt{\^{}{[}This\ is\ a\ footnote.{]}}. Section headers can be written after a number of pound signs, e.g., \begin{Shaded} \begin{Highlighting}[] \FunctionTok{\# First{-}level header} \FunctionTok{\#\# Second{-}level header} \FunctionTok{\#\#\# Third{-}level header} \end{Highlighting} \end{Shaded} If you do not want a certain heading to be numbered, you can add \texttt{\{-\}} or \texttt{\{.unnumbered\}} after the heading, e.g., \begin{Shaded} \begin{Highlighting}[] \FunctionTok{\# Preface \{{-}\}} \end{Highlighting} \end{Shaded} Unordered list items start with \texttt{*}, \texttt{-}, or \texttt{+}, and you can nest one list within another list by indenting the sub-list, e.g., \begin{Shaded} \begin{Highlighting}[] \SpecialStringTok{{-} }\NormalTok{one item} \SpecialStringTok{{-} }\NormalTok{one item} \SpecialStringTok{{-} }\NormalTok{one item} \SpecialStringTok{ {-} }\NormalTok{one more item} \SpecialStringTok{ {-} }\NormalTok{one more item} \SpecialStringTok{ {-} }\NormalTok{one more item} \end{Highlighting} \end{Shaded} The output is: \begin{itemize} \item one item \item one item \item one item \begin{itemize} \tightlist \item one more item \item one more item \item one more item \end{itemize} \end{itemize} Ordered list items start with numbers (you can also nest lists within lists), e.g., \begin{Shaded} \begin{Highlighting}[] \SpecialStringTok{1. }\NormalTok{the first item} \SpecialStringTok{2. }\NormalTok{the second item} \SpecialStringTok{3. }\NormalTok{the third item} \SpecialStringTok{ {-} }\NormalTok{one unordered item} \SpecialStringTok{ {-} }\NormalTok{one unordered item} \end{Highlighting} \end{Shaded} The output does not look too much different with the Markdown source: \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item the first item \item the second item \item the third item \begin{itemize} \tightlist \item one unordered item \item one unordered item \end{itemize} \end{enumerate} Blockquotes are written after \texttt{\textgreater{}}, e.g., \begin{Shaded} \begin{Highlighting}[] \AttributeTok{\textgreater{} "I thoroughly disapprove of duels. If a man should challenge me,} \AttributeTok{ I would take him kindly and forgivingly by the hand and lead him} \AttributeTok{ to a quiet place and kill him."} \AttributeTok{\textgreater{}} \AttributeTok{\textgreater{} {-}{-}{-} Mark Twain} \end{Highlighting} \end{Shaded} The actual output (we customized the style for blockquotes in this book): \begin{quote} ``I thoroughly disapprove of duels. 
If a man should challenge me, I would take him kindly and forgivingly by the hand and lead him to a quiet place and kill him.'' --- Mark Twain \end{quote} Plain code blocks can be written after three or more backticks, and you can also indent the blocks by four spaces, e.g., \begin{Shaded} \begin{Highlighting}[] \InformationTok{\textasciigrave{}\textasciigrave{}\textasciigrave{}} \InformationTok{This text is displayed verbatim / preformatted} \InformationTok{\textasciigrave{}\textasciigrave{}\textasciigrave{}} \NormalTok{Or indent by four spaces:} \InformationTok{ This text is displayed verbatim / preformatted} \end{Highlighting} \end{Shaded} In general, you'd better leave at least one empty line between adjacent but different elements, e.g., a header and a paragraph. This is to avoid ambiguity to the Markdown renderer. For example, does ``\texttt{\#}'' indicate a header below? \begin{Shaded} \begin{Highlighting}[] \NormalTok{In R, the character} \FunctionTok{\# indicates a comment.} \end{Highlighting} \end{Shaded} And does ``\texttt{-}'' mean a bullet point below? \begin{Shaded} \begin{Highlighting}[] \NormalTok{The result of 5} \SpecialStringTok{{-} }\NormalTok{3 is 2.} \end{Highlighting} \end{Shaded} Different flavors of Markdown may produce different results if there are no blank lines. \hypertarget{math-expressions}{% \subsection{Math expressions}\label{math-expressions}} Inline LaTeX equations\index{LaTeX math} can be written in a pair of dollar signs using the LaTeX syntax, e.g., \texttt{\$f(k)\ =\ \{n\ \textbackslash{}choose\ k\}\ p\^{}\{k\}\ (1-p)\^{}\{n-k\}\$} (actual output: \(f(k)={n \choose k}p^{k}(1-p)^{n-k}\)); math expressions of the display style can be written in a pair of double dollar signs, e.g., \texttt{\$\$f(k)\ =\ \{n\ \textbackslash{}choose\ k\}\ p\^{}\{k\}\ (1-p)\^{}\{n-k\}\$\$}, and the output looks like this: \[f\left(k\right)=\binom{n}{k}p^k\left(1-p\right)^{n-k}\] You can also use math environments inside \texttt{\$\ \$} or \texttt{\$\$\ \$\$}, e.g., \begin{Shaded} \begin{Highlighting}[] \SpecialStringTok{$$}\KeywordTok{\textbackslash{}begin}\NormalTok{\{}\ExtensionTok{array}\NormalTok{\}}\SpecialStringTok{\{ccc\}} \SpecialStringTok{x\_\{11\} \& x\_\{12\} \& x\_\{13\}}\SpecialCharTok{\textbackslash{}\textbackslash{}} \SpecialStringTok{x\_\{21\} \& x\_\{22\} \& x\_\{23\}} \KeywordTok{\textbackslash{}end}\NormalTok{\{}\ExtensionTok{array}\NormalTok{\}}\SpecialStringTok{$$} \end{Highlighting} \end{Shaded} \[\begin{array}{ccc} x_{11} & x_{12} & x_{13}\\ x_{21} & x_{22} & x_{23} \end{array}\] \begin{Shaded} \begin{Highlighting}[] \SpecialStringTok{$$X = }\KeywordTok{\textbackslash{}begin}\NormalTok{\{}\ExtensionTok{bmatrix}\NormalTok{\}}\SpecialStringTok{1 \& x\_\{1\}}\SpecialCharTok{\textbackslash{}\textbackslash{}} \SpecialStringTok{1 \& x\_\{2\}}\SpecialCharTok{\textbackslash{}\textbackslash{}} \SpecialStringTok{1 \& x\_\{3\}} \KeywordTok{\textbackslash{}end}\NormalTok{\{}\ExtensionTok{bmatrix}\NormalTok{\}}\SpecialStringTok{$$} \end{Highlighting} \end{Shaded} \[X = \begin{bmatrix}1 & x_{1}\\ 1 & x_{2}\\ 1 & x_{3} \end{bmatrix}\] \begin{Shaded} \begin{Highlighting}[] \SpecialStringTok{$$}\SpecialCharTok{\textbackslash{}Theta}\SpecialStringTok{ = }\KeywordTok{\textbackslash{}begin}\NormalTok{\{}\ExtensionTok{pmatrix}\NormalTok{\}}\SpecialCharTok{\textbackslash{}alpha}\SpecialStringTok{ \& }\SpecialCharTok{\textbackslash{}beta\textbackslash{}\textbackslash{}} \SpecialCharTok{\textbackslash{}gamma}\SpecialStringTok{ \& }\SpecialCharTok{\textbackslash{}delta} 
\KeywordTok{\textbackslash{}end}\NormalTok{\{}\ExtensionTok{pmatrix}\NormalTok{\}}\SpecialStringTok{$$}
\end{Highlighting}
\end{Shaded}

\[\Theta = \begin{pmatrix}\alpha & \beta\\
\gamma & \delta
\end{pmatrix}\]

\begin{Shaded}
\begin{Highlighting}[]
\SpecialStringTok{$$}\KeywordTok{\textbackslash{}begin}\NormalTok{\{}\ExtensionTok{vmatrix}\NormalTok{\}}\SpecialStringTok{a \& b}\SpecialCharTok{\textbackslash{}\textbackslash{}}
\SpecialStringTok{c \& d}
\KeywordTok{\textbackslash{}end}\NormalTok{\{}\ExtensionTok{vmatrix}\NormalTok{\}}\SpecialStringTok{=ad{-}bc$$}
\end{Highlighting}
\end{Shaded}

\[\begin{vmatrix}a & b\\
c & d
\end{vmatrix}=ad-bc\]

\hypertarget{the-r-language}{%
\chapter{The R language}\label{the-r-language}}

\hypertarget{basic-data-type}{%
\section{Basic data type}\label{basic-data-type}}

In this chapter, we'll focus on the R language. First, we need to learn about data types. The R programming language has something called types, and there are four of them:

\begin{itemize}
\tightlist
\item
  character
\item
  integer
\item
  double
\item
  logical.
\end{itemize}

Let's take a look at each one of these. Let's start with double.

\hypertarget{double}{%
\subsection{Double}\label{double}}

We can easily create numbers in R. For example:

\begin{Shaded}
\begin{Highlighting}[]
\DecValTok{45}
\CommentTok{\#\textgreater{} [1] 45}
\DecValTok{5}
\CommentTok{\#\textgreater{} [1] 5}
\FloatTok{0.5}
\CommentTok{\#\textgreater{} [1] 0.5}
\SpecialCharTok{{-}}\FloatTok{0.33}
\CommentTok{\#\textgreater{} [1] {-}0.33}
\end{Highlighting}
\end{Shaded}

We can execute these lines; they are simple commands, more precisely numerical \emph{constants}. These elements of the R language have a fixed value; we can't change the value of a constant. The \texttt{0.5} means 0.5, so executing \texttt{0.5} we get 0.5 in the R console. Decimals omitting the leading zero are acceptable: we can write \texttt{.5}, which also means 0.5. So we can get a tricky form of a number, for example \texttt{-.5}, which means -0.5. You can check this by executing these lines.

\begin{Shaded}
\begin{Highlighting}[]
\SpecialCharTok{{-}}\NormalTok{.}\DecValTok{5}
\CommentTok{\#\textgreater{} [1] {-}0.5}
\end{Highlighting}
\end{Shaded}

We are going to move on to discuss the exponential format of numbers. This is scientific notation, where the number after `e' gives the power of ten. For example, \texttt{4e2} means 400, because 4 multiplied by 10 squared is 400 (4 multiplied by 10 to the power of 2).

\begin{Shaded}
\begin{Highlighting}[]
\FloatTok{4e2}
\CommentTok{\#\textgreater{} [1] 400}
\end{Highlighting}
\end{Shaded}

Generally, we use a plus or minus sign before the power: for example, \texttt{4e+3} has the value 4000, \texttt{4.2e+3} means 4200, and \texttt{4.2e-3} means 0.0042. In the last case, we have to divide by 10 cubed (10 to the power of 3), or equivalently multiply by ten to the power of -3.

\begin{Shaded}
\begin{Highlighting}[]
\FloatTok{4e+3}
\CommentTok{\#\textgreater{} [1] 4000}
\FloatTok{4.2e+3}
\CommentTok{\#\textgreater{} [1] 4200}
\FloatTok{4.2e{-}3}
\CommentTok{\#\textgreater{} [1] 0.0042}
\end{Highlighting}
\end{Shaded}

The last format of numbers is the hexadecimal. After the `0x' prefix, we can type for example \texttt{0xfe3}, which means 4067.

\begin{Shaded}
\begin{Highlighting}[]
\DecValTok{0xfe3}
\CommentTok{\#\textgreater{} [1] 4067}
\end{Highlighting}
\end{Shaded}

The hexadecimal numbering system uses 16 as the base (as opposed to ten), so in this system we have 16 digits to represent numbers.
The symbols ``0''--``9'' represent the values 0 to 9, and ``A''--``F'' (or alternatively ``a''--``f'') represent the values 10 to 15. We mostly use hexadecimal numbers when we specify a colour. For example:

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{plot}\NormalTok{(}\DecValTok{1}\NormalTok{, }\AttributeTok{col=}\StringTok{"\#ee0000"}\NormalTok{, }\AttributeTok{pch=}\DecValTok{16}\NormalTok{, }\AttributeTok{cex=}\DecValTok{8}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\includegraphics{02_The_R_language_files/figure-latex/unnamed-chunk-7-1.pdf}

This command creates a plot (graph) with only one point, coloured red. A hexadecimal colour is specified with a \texttt{\#} and two digits for red, two digits for green and two digits for blue (\#RRGGBB). RR (red), GG (green) and BB (blue) are hexadecimal integers between 00 and FF specifying the intensity of the colour. For example, \texttt{\#0000FF} is displayed as blue, because the blue component is set to its highest value (FF) and the others are set to 00.

\hypertarget{integer}{%
\subsection{Integer}\label{integer}}

The next number type is the integer. Integer means the whole numbers, for example 4, 42 or -12. But in R we have to use the capital \texttt{L} suffix. The \texttt{L} just indicates that this is a long, an internal storage type. It is a way to represent natural numbers like 1 and 2. Integers arise from counting, in most cases.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{4L}
\CommentTok{\#\textgreater{} [1] 4}
\NormalTok{42L}
\CommentTok{\#\textgreater{} [1] 42}
\SpecialCharTok{{-}}\NormalTok{12L}
\CommentTok{\#\textgreater{} [1] {-}12}
\end{Highlighting}
\end{Shaded}

To sum it up, decimal values like \texttt{4.5} and whole numbers without the \texttt{L} suffix are doubles in R. Whole numbers with the \texttt{L} suffix are integers in R. Both doubles and integers are numerics.

Let's try something. Type in 2 and 2L, and execute them. You can't see the difference between the double 2 and the integer 2 from the output.

\begin{Shaded}
\begin{Highlighting}[]
\DecValTok{2}
\CommentTok{\#\textgreater{} [1] 2}
\NormalTok{2L}
\CommentTok{\#\textgreater{} [1] 2}
\end{Highlighting}
\end{Shaded}

However, there are two functions that reveal the difference. The \texttt{typeof()} and \texttt{class()} functions return almost the same values, the types of the data. Notice that the \texttt{class()} function with a double argument returns \texttt{"numeric"}.

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{typeof}\NormalTok{(}\DecValTok{2}\NormalTok{)}
\CommentTok{\#\textgreater{} [1] "double"}
\FunctionTok{typeof}\NormalTok{(2L)}
\CommentTok{\#\textgreater{} [1] "integer"}
\FunctionTok{class}\NormalTok{(}\DecValTok{2}\NormalTok{)}
\CommentTok{\#\textgreater{} [1] "numeric"}
\FunctionTok{class}\NormalTok{(2L)}
\CommentTok{\#\textgreater{} [1] "integer"}
\end{Highlighting}
\end{Shaded}

Of course, we can try these functions with decimal values.

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{typeof}\NormalTok{(}\FloatTok{2.4}\NormalTok{)}
\CommentTok{\#\textgreater{} [1] "double"}
\FunctionTok{class}\NormalTok{(}\FloatTok{2.4}\NormalTok{)}
\CommentTok{\#\textgreater{} [1] "numeric"}
\end{Highlighting}
\end{Shaded}

\hypertarget{characters}{%
\subsection{Characters}\label{characters}}

Text (or string) values are called characters in R. For example, type in some text inside quote marks.
\begin{Shaded}
\begin{Highlighting}[]
\StringTok{"some text"}
\CommentTok{\#\textgreater{} [1] "some text"}
\StringTok{\textquotesingle{}Dobó, István\textquotesingle{}}
\CommentTok{\#\textgreater{} [1] "Dobó, István"}
\StringTok{" sldjf odiuoiuoiu657676876987876875 32 23sdcsd)(/=(/\%"}
\CommentTok{\#\textgreater{} [1] " sldjf odiuoiuoiu657676876987876875 32 23sdcsd)(/=(/\%"}
\end{Highlighting}
\end{Shaded}

Note how the quotation marks in the editor indicate that \texttt{"some\ text"} is a string. Syntax highlighting also helps you to identify string values. It may also be noted that autocompletion is at work here as well: we typed in only one quote mark, and the second one appeared automatically.

We can use double quote marks (\texttt{"}) or single quote marks (\texttt{\textquotesingle{}}), but the opening and the closing quote marks need to match. If we start with a single quotation mark, we have to finish with a single one; if we start with a double quotation mark, we have to finish with a double one. We can use any characters inside the quotation marks, except the surrounding quotation mark itself.

Let's check out the \texttt{typeof()} and \texttt{class()} functions with character data. They return \texttt{"character"}.

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{typeof}\NormalTok{(}\StringTok{"Friday"}\NormalTok{)}
\CommentTok{\#\textgreater{} [1] "character"}
\FunctionTok{class}\NormalTok{(}\StringTok{"Friday"}\NormalTok{)}
\CommentTok{\#\textgreater{} [1] "character"}
\end{Highlighting}
\end{Shaded}

\hypertarget{logical}{%
\subsection{Logical}\label{logical}}

The last data type is the logical. Boolean values (TRUE or FALSE) are called logical in R. Let's head over to the script window and start with \texttt{TRUE}, in capital letters. \texttt{TRUE} is a logical. Logical constants can be either \texttt{TRUE} or \texttt{FALSE}.

\begin{Shaded}
\begin{Highlighting}[]
\ConstantTok{TRUE}
\CommentTok{\#\textgreater{} [1] TRUE}
\ConstantTok{FALSE}
\CommentTok{\#\textgreater{} [1] FALSE}
\end{Highlighting}
\end{Shaded}

\texttt{TRUE} and \texttt{FALSE} can be abbreviated to \texttt{T} and \texttt{F} respectively. However, I want to strongly encourage you to use the full versions, \texttt{TRUE} and \texttt{FALSE}.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{T}
\CommentTok{\#\textgreater{} [1] TRUE}
\NormalTok{F}
\CommentTok{\#\textgreater{} [1] FALSE}
\end{Highlighting}
\end{Shaded}

Finally, we can check out the type of these logical values.

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{typeof}\NormalTok{(}\ConstantTok{TRUE}\NormalTok{)}
\CommentTok{\#\textgreater{} [1] "logical"}
\FunctionTok{class}\NormalTok{(F)}
\CommentTok{\#\textgreater{} [1] "logical"}
\end{Highlighting}
\end{Shaded}

Note that we did not use quotation marks around the logical values. If we use them, we will get character values. For example:

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{typeof}\NormalTok{(}\StringTok{"TRUE"}\NormalTok{)}
\CommentTok{\#\textgreater{} [1] "character"}
\end{Highlighting}
\end{Shaded}

To sum it up, R works with numerous data types. Some of the most basic types are double, integer, character and logical. We learned how to write \texttt{constants} in R. There are two functions, \texttt{typeof()} and \texttt{class()}, with which we can check out a constant's type.

\hypertarget{operators}{%
\section{Operators}\label{operators}}

\hypertarget{arithmetic-operators}{%
\subsection{Arithmetic operators}\label{arithmetic-operators}}

In its most basic form, R can be used as a simple calculator.
We can use the following arithmetic operators:

\begin{itemize}
\tightlist
\item
  Addition
\item
  Subtraction
\item
  Multiplication
\item
  Division
\item
  Exponentiation.
\end{itemize}

Let's put a basic addition, subtraction, multiplication, division and, as an extra expression, an exponentiation into our editor window. We use plus (\texttt{+}), minus (\texttt{-}), asterisk (\texttt{*}), slash (\texttt{/}) and double asterisks (\texttt{**}) or the hat symbol (\texttt{\^{}}). The double asterisk \texttt{**} behaves exactly like \texttt{\^{}} (hat, caret); these are the to-the-power-of (exponent) operators.

\begin{Shaded}
\begin{Highlighting}[]
\FloatTok{34.1} \SpecialCharTok{+} \FloatTok{2e4}  \CommentTok{\# Addition}
\CommentTok{\#\textgreater{} [1] 20034.1}
\DecValTok{0xe4} \SpecialCharTok{{-}} \DecValTok{23}  \CommentTok{\# Subtraction}
\CommentTok{\#\textgreater{} [1] 205}
\DecValTok{23} \SpecialCharTok{*} \DecValTok{45000}  \CommentTok{\# Multiplication}
\CommentTok{\#\textgreater{} [1] 1035000}
\DecValTok{23}\SpecialCharTok{/}\DecValTok{12}  \CommentTok{\# Division}
\CommentTok{\#\textgreater{} [1] 1.916667}
\DecValTok{23} \SpecialCharTok{**} \DecValTok{12}  \CommentTok{\# Exponentiation}
\CommentTok{\#\textgreater{} [1] 2.191462e+16}
\DecValTok{23} \SpecialCharTok{\^{}} \DecValTok{12}  \CommentTok{\# Exponentiation}
\CommentTok{\#\textgreater{} [1] 2.191462e+16}
\end{Highlighting}
\end{Shaded}

Additionally, the modulo operator (\texttt{\%\%}) returns the remainder of the division of the number on the left by the number on its right; for example, 5 modulo 3, or \texttt{5\ \%\%\ 3}, is 2.

\begin{Shaded}
\begin{Highlighting}[]
\DecValTok{5} \SpecialCharTok{\%\%} \DecValTok{3}  \CommentTok{\# modulo: remainder of 5 divided by 3 }
\CommentTok{\#\textgreater{} [1] 2}
\end{Highlighting}
\end{Shaded}

The integer division \texttt{x\ \%/\%\ y} gives x divided by y, rounded down.

\begin{Shaded}
\begin{Highlighting}[]
\DecValTok{7} \SpecialCharTok{\%/\%} \DecValTok{3}  \CommentTok{\# integer division}
\CommentTok{\#\textgreater{} [1] 2}
\end{Highlighting}
\end{Shaded}

\hypertarget{logical-operators}{%
\subsection{Logical operators}\label{logical-operators}}

R uses standard logical notation for AND, OR, and NOT.

First of all, the exclamation mark (\texttt{!}) stands for NOT. So if I type in \texttt{!TRUE}, I'll get false, and, perhaps surprisingly, if I type in \texttt{!FALSE} I'll get true. It just inverts the value.

The ampersand (\texttt{\&}) is for AND. So I can type in \texttt{TRUE\ \&\ TRUE}, which means: if true and true, then I get true. If I type in \texttt{TRUE\ \&\ FALSE}, then I'll receive false, because both arguments have to be true in order for the result to be true.

Let's talk about the vertical bar, or pipe, symbol (\texttt{\textbar{}}). The pipe symbol means OR. So in this case I can type in \texttt{TRUE\ \textbar{}\ TRUE} and I'm going to get back true. If I type in \texttt{TRUE\ \textbar{}\ FALSE}, I'll also get back true, because for OR it is enough for only one value to be true for the whole expression to evaluate as true.
\begin{Shaded} \begin{Highlighting}[] \SpecialCharTok{!}\ConstantTok{TRUE} \CommentTok{\#\textgreater{} [1] FALSE} \SpecialCharTok{!}\ConstantTok{FALSE} \CommentTok{\#\textgreater{} [1] TRUE} \ConstantTok{TRUE} \SpecialCharTok{\&} \ConstantTok{TRUE} \CommentTok{\#\textgreater{} [1] TRUE} \ConstantTok{TRUE} \SpecialCharTok{\&} \ConstantTok{FALSE} \CommentTok{\#\textgreater{} [1] FALSE} \ConstantTok{TRUE} \SpecialCharTok{|} \ConstantTok{TRUE} \CommentTok{\#\textgreater{} [1] TRUE} \ConstantTok{TRUE} \SpecialCharTok{|} \ConstantTok{FALSE} \CommentTok{\#\textgreater{} [1] TRUE} \end{Highlighting} \end{Shaded} \hypertarget{relational-operators}{% \subsection{Relational operators}\label{relational-operators}} Relational operators are used to compare between values. Here is a list of relational operators available in R. \begin{longtable}[]{@{}ll@{}} \toprule \begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.17}}\raggedright Operator\strut \end{minipage} & \begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.36}}\raggedright Description\strut \end{minipage}\tabularnewline \midrule \endhead \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.17}}\raggedright \texttt{\textless{}}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.36}}\raggedright Less than\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.17}}\raggedright \texttt{\textgreater{}}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.36}}\raggedright Greater than\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.17}}\raggedright \texttt{\textless{}=}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.36}}\raggedright Less than or equal to\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.17}}\raggedright \texttt{\textgreater{}=}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.36}}\raggedright Greater than or equal to\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.17}}\raggedright \texttt{==}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.36}}\raggedright Equal to\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.17}}\raggedright \texttt{!=}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.36}}\raggedright Not equal to\strut \end{minipage}\tabularnewline \bottomrule \end{longtable} \begin{Shaded} \begin{Highlighting}[] \DecValTok{2} \SpecialCharTok{\textless{}} \FloatTok{2.3} \CommentTok{\#\textgreater{} [1] TRUE} \DecValTok{2} \SpecialCharTok{\textless{}=} \FloatTok{2.3} \CommentTok{\#\textgreater{} [1] TRUE} \DecValTok{2} \SpecialCharTok{\textgreater{}} \FloatTok{2.3} \CommentTok{\#\textgreater{} [1] FALSE} \DecValTok{2} \SpecialCharTok{\textgreater{}=} \FloatTok{2.3} \CommentTok{\#\textgreater{} [1] FALSE} \DecValTok{2} \SpecialCharTok{==} \FloatTok{2.3} \CommentTok{\#\textgreater{} [1] FALSE} \DecValTok{2} \SpecialCharTok{!=} \FloatTok{2.3} \CommentTok{\#\textgreater{} [1] TRUE} \StringTok{"apple"} \SpecialCharTok{==} \StringTok{"Apple"} \CommentTok{\#\textgreater{} [1] FALSE} \StringTok{"apple"} \SpecialCharTok{!=} \StringTok{"Apple"} \CommentTok{\#\textgreater{} [1] TRUE} \ConstantTok{TRUE} \SpecialCharTok{==} \ConstantTok{FALSE} \CommentTok{\#\textgreater{} [1] FALSE} \ConstantTok{TRUE} 
\SpecialCharTok{!=} \ConstantTok{FALSE} \CommentTok{\#\textgreater{} [1] TRUE} \ConstantTok{TRUE} \SpecialCharTok{==} \DecValTok{1} \CommentTok{\#\textgreater{} [1] TRUE} \ConstantTok{TRUE} \SpecialCharTok{!=} \DecValTok{1} \CommentTok{\#\textgreater{} [1] FALSE} \NormalTok{(}\SpecialCharTok{{-}}\DecValTok{6} \SpecialCharTok{*} \DecValTok{14}\NormalTok{) }\SpecialCharTok{==}\NormalTok{ (}\DecValTok{17} \SpecialCharTok{{-}} \DecValTok{101}\NormalTok{)} \CommentTok{\#\textgreater{} [1] TRUE} \end{Highlighting} \end{Shaded}
The result of a comparison is a Boolean value (\texttt{TRUE} or \texttt{FALSE}).
\hypertarget{assignment-operators}{% \subsection{Assignment operators}\label{assignment-operators}}
R has several assignment operators; we will use one of them, the ``left arrow'' (\texttt{\textless{}-}) operator. The ``left arrow'' assignment operator is actually two symbols, a ``less than'' sign and a ``minus'' sign. Good to know: there is a keyboard shortcut for the assignment operator in \emph{RStudio}, namely Alt+-. What is the assignment operator for? It is for \emph{objects}. Objects allow you to store a value in R. You can then later use this object's name to easily access the value that is stored within this object. Let's create our first object. You can assign the value 4 to an object \texttt{my\_object} with the command:
\begin{Shaded} \begin{Highlighting}[] \NormalTok{my\_object }\OtherTok{\textless{}{-}} \DecValTok{4} \end{Highlighting} \end{Shaded}
Then type in the name of the object, my\_object, and execute it. Notice that when you ask R to print my\_object, the value 4 appears.
\begin{Shaded} \begin{Highlighting}[] \NormalTok{my\_object} \CommentTok{\#\textgreater{} [1] 4} \end{Highlighting} \end{Shaded}
If we use the \texttt{class()} or \texttt{typeof()} functions, we'll get the type of the object.
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{typeof}\NormalTok{(my\_object)} \CommentTok{\#\textgreater{} [1] "double"} \FunctionTok{class}\NormalTok{(my\_object)} \CommentTok{\#\textgreater{} [1] "numeric"} \end{Highlighting} \end{Shaded}
\hypertarget{miscellaneous-operators}{% \subsection{Miscellaneous operators}\label{miscellaneous-operators}}
These operators are used for specific purposes, not for general mathematical or logical computation. The colon operator (\texttt{:}) creates a sequence of numbers as a vector.
\begin{Shaded} \begin{Highlighting}[] \DecValTok{2}\SpecialCharTok{:}\DecValTok{8} \CommentTok{\#\textgreater{} [1] 2 3 4 5 6 7 8} \end{Highlighting} \end{Shaded}
The \texttt{\%in\%} operator is used to identify if an element belongs to a vector. It returns a logical vector indicating, for each element of the left operand, whether there is a match in the right operand.
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{c}\NormalTok{(}\DecValTok{3}\NormalTok{, }\DecValTok{4}\NormalTok{, }\DecValTok{5}\NormalTok{, }\DecValTok{7}\NormalTok{, }\DecValTok{10}\NormalTok{) }\SpecialCharTok{\%in\%} \FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{, }\DecValTok{4}\NormalTok{, }\DecValTok{6}\NormalTok{, }\DecValTok{8}\NormalTok{, }\DecValTok{10}\NormalTok{)} \CommentTok{\#\textgreater{} [1] FALSE TRUE FALSE FALSE TRUE} \end{Highlighting} \end{Shaded}
The double colon operator (\texttt{::}) is a binary operator to access functions or datasets from packages. As we mentioned, packages extend R's knowledge. Every R package has a name and contains functions and/or datasets. For example, we have an installed package called \textbf{MASS}. In the \textbf{MASS} package, there is a dataset called \texttt{survey}.
So, we can type in \texttt{MASS::survey}, to reach the survey dataset from \textbf{MASS} package. \begin{Shaded} \begin{Highlighting}[] \FunctionTok{str}\NormalTok{(MASS}\SpecialCharTok{::}\NormalTok{survey)} \CommentTok{\#\textgreater{} \textquotesingle{}data.frame\textquotesingle{}: 237 obs. of 12 variables:} \CommentTok{\#\textgreater{} $ Sex : Factor w/ 2 levels "Female","Male": 1 2 2 2 2 1 2 1 2 2 ...} \CommentTok{\#\textgreater{} $ Wr.Hnd: num 18.5 19.5 18 18.8 20 18 17.7 17 20 18.5 ...} \CommentTok{\#\textgreater{} $ NW.Hnd: num 18 20.5 13.3 18.9 20 17.7 17.7 17.3 19.5 18.5 ...} \CommentTok{\#\textgreater{} $ W.Hnd : Factor w/ 2 levels "Left","Right": 2 1 2 2 2 2 2 2 2 2 ...} \CommentTok{\#\textgreater{} $ Fold : Factor w/ 3 levels "L on R","Neither",..: 3 3 1 3 2 1 1 3 3 3 ...} \CommentTok{\#\textgreater{} $ Pulse : int 92 104 87 NA 35 64 83 74 72 90 ...} \CommentTok{\#\textgreater{} $ Clap : Factor w/ 3 levels "Left","Neither",..: 1 1 2 2 3 3 3 3 3 3 ...} \CommentTok{\#\textgreater{} $ Exer : Factor w/ 3 levels "Freq","None",..: 3 2 2 2 3 3 1 1 3 3 ...} \CommentTok{\#\textgreater{} $ Smoke : Factor w/ 4 levels "Heavy","Never",..: 2 4 3 2 2 2 2 2 2 2 ...} \CommentTok{\#\textgreater{} $ Height: num 173 178 NA 160 165 ...} \CommentTok{\#\textgreater{} $ M.I : Factor w/ 2 levels "Imperial","Metric": 2 1 NA 2 2 1 1 2 2 2 ...} \CommentTok{\#\textgreater{} $ Age : num 18.2 17.6 16.9 20.3 23.7 ...} \end{Highlighting} \end{Shaded} \hypertarget{operator-precedence-in-r}{% \subsection{Operator Precedence in R}\label{operator-precedence-in-r}} As we mentioned, R can be used as a powerful calculator. Simply type an arithmetic expression and press Ctrl+Enter. \begin{Shaded} \begin{Highlighting}[] \DecValTok{4} \SpecialCharTok{+} \DecValTok{8} \CommentTok{\# will return the result 12} \CommentTok{\#\textgreater{} [1] 12} \DecValTok{4} \SpecialCharTok{+} \DecValTok{5} \SpecialCharTok{+} \DecValTok{3} \CommentTok{\# will return the result 12} \CommentTok{\#\textgreater{} [1] 12} \end{Highlighting} \end{Shaded} But, there could be problems if you are not careful. R normally execute your arithmetic expression by evaluating each item from left to right. 4 plus 8 equals 12. 4 plus 5 plus 3 equals twelve. But good to know, operators have precedence in the order of evaluation. Let's start with more complex expressions that can cause problems if you are not careful. \begin{Shaded} \begin{Highlighting}[] \DecValTok{4} \SpecialCharTok{+} \DecValTok{5} \SpecialCharTok{*} \DecValTok{3} \CommentTok{\# will return the result 19} \CommentTok{\#\textgreater{} [1] 19} \end{Highlighting} \end{Shaded} Notice that the expression was not evaluated strictly left to right. R actually evaluated 5 times 3 and then added that result to 4. The R operator precedence rules caused this result. Multiplication and division have a higher precedence than the addition and subtraction operator so the multiplication is performed before the addition. We can arrange the operators in order from high precedence to low precedence. We can extend the list with exponentiation. Operators with higher precedence (nearer top of the list) are performed before those with lower precedence (nearer to the bottom). 
\begin{longtable}[]{@{}ll@{}} \toprule Operator & Description\tabularnewline \midrule \endhead \texttt{::} & access\tabularnewline \texttt{\$} & component\tabularnewline \texttt{{[}} \texttt{{[}{[}} & indexing\tabularnewline \texttt{\^{}} \texttt{**} & exponentiation\tabularnewline \texttt{-} \texttt{+} & unary minus, unary plus\tabularnewline \texttt{:} & sequence operator\tabularnewline \texttt{\%any\%} e.g.~\texttt{\%\%} \texttt{\%/\%} \texttt{\%in\%} & special operators\tabularnewline \texttt{*} \texttt{/} & multiplication, division\tabularnewline \texttt{+} \texttt{-} & addition, subtraction\tabularnewline \texttt{\textless{}} \texttt{\textgreater{}} \texttt{\textless{}=} \texttt{\textgreater{}=} \texttt{==} \texttt{!=} & comparisons\tabularnewline \texttt{!} & logical NOT\tabularnewline \texttt{\&} & logical AND\tabularnewline \texttt{\textbar{}} & logical OR\tabularnewline \texttt{\textless{}-} & assignment\tabularnewline \bottomrule \end{longtable}
The exponentiation operator has a higher precedence than the multiplication, so the exponentiation is performed first. Three squared, multiplied by two:
\begin{Shaded} \begin{Highlighting}[] \DecValTok{2} \SpecialCharTok{*} \DecValTok{3} \SpecialCharTok{**} \DecValTok{2} \CommentTok{\#\textgreater{} [1] 18} \end{Highlighting} \end{Shaded}
Operator precedence can be overridden with explicit use of parentheses. In the case of these examples, we could enter
\begin{Shaded} \begin{Highlighting}[] \NormalTok{(}\DecValTok{4} \SpecialCharTok{+} \DecValTok{5}\NormalTok{) }\SpecialCharTok{*} \DecValTok{3} \CommentTok{\#\textgreater{} [1] 27} \NormalTok{(}\DecValTok{2} \SpecialCharTok{*} \DecValTok{3}\NormalTok{) }\SpecialCharTok{**} \DecValTok{2} \CommentTok{\#\textgreater{} [1] 36} \end{Highlighting} \end{Shaded}
In practice, if you are at all unsure about the precedence of your operators, the simplest thing to do is to use parentheses to make the evaluation order explicit.
\hypertarget{binary-and-unary-operators}{% \subsection{Binary and unary operators}\label{binary-and-unary-operators}}
One more thing about operators: there are two different types of operators, binary and unary, depending on how many operands they require to work properly. A unary operator operates on only one operand; a binary operator operates on two. We've talked about binary operators so far. Addition, subtraction, multiplication, division and exponentiation are binary operators; they require two operands to work properly. For example, if we select and execute \texttt{5\ -} on its own, it is incomplete and we get a continuation prompt in the console. We could complete the command there, but we never do that: click on the console pane, hit Esc, then complete the line in the script editor as \texttt{5\ -\ 2} and hit Ctrl+Enter; the result is 3. Subtraction is a binary operator: it has two operands, five and two. In R, there are a few unary operators, for example unary minus and unary plus. They are the sign operators; they are used to indicate or change the sign of a value. The plus sign can be used to signal that we have a positive number; it can be omitted, and it mostly is. We could type \texttt{5} or \texttt{+5}. The minus sign changes the sign of a value. To write negative five, we need to type in \texttt{-5}. This is the unary minus operator.
\begin{Shaded} \begin{Highlighting}[] \DecValTok{5} \CommentTok{\#\textgreater{} [1] 5} \SpecialCharTok{+}\DecValTok{5} \CommentTok{\#\textgreater{} [1] 5} \SpecialCharTok{{-}}\DecValTok{5} \CommentTok{\#\textgreater{} [1] {-}5} \end{Highlighting} \end{Shaded}
To sum up, we really need another list in the script editor: R terminology. We talked about constants; a constant is a language element with a fixed value that we cannot change. 5 means five, Friday in quotes means Friday, TRUE means logical TRUE. Operators perform mathematical or logical operations on values, on constants. Operators have precedence, and operators can be unary or binary. Every constant has a basic type (double, integer, character, logical). And we can build expressions with constants, operators and parentheses. We also talked about comments in R: a comment is everything after a \# (a hash sign), and it has no effect when you run the code in R.
\hypertarget{objects}{% \section{Objects}\label{objects}}
We talked about objects; the object is a very important language element in R. We will be discussing everything that needs to be known about objects and data structures. Let's start with objects. An object allows you to store data in R for later use. Suppose the height of a rectangle is 2. Let's assign this value 2 to an object. Let's call it \texttt{height}.
\begin{Shaded} \begin{Highlighting}[] \NormalTok{height }\OtherTok{\textless{}{-}} \DecValTok{2} \end{Highlighting} \end{Shaded}
This time, R does not print anything in the console, but we do not see an error message either. A command executing without error messages, indeed without any messages at all, indicates that everything is OK: the command evaluated successfully. Look at the top right pane. In the environment tab we have a new item in the list: \texttt{height} and its value 2. All objects, with their names and values, will appear in this list. If you now simply type and execute height in the script window, R returns 2.
\begin{Shaded} \begin{Highlighting}[] \NormalTok{height} \CommentTok{\#\textgreater{} [1] 2} \end{Highlighting} \end{Shaded}
We can do a similar thing for the width of our imaginary rectangle. We assign the value 4 to an object called \texttt{width}.
\begin{Shaded} \begin{Highlighting}[] \NormalTok{width }\OtherTok{\textless{}{-}} \DecValTok{4} \end{Highlighting} \end{Shaded}
In the top right pane, \texttt{width} now appears in the list as well. Actually, this list in the environment tab shows the \emph{workspace}. The workspace is a special location in your computer's memory that temporarily stores the data we just created using R. The workspace is the place where R objects `live'. You can list all objects with the \texttt{ls()} function.
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{ls}\NormalTok{()} \CommentTok{\#\textgreater{} [1] "height" "my\_object" "width"} \end{Highlighting} \end{Shaded}
This shows you a list of all the objects you have created up to now. There are three objects in your workspace at the moment: \texttt{height}, \texttt{my\_object} and \texttt{width}. If we try to access an object that's not in the workspace, \texttt{depth} for example, R throws an error.
\begin{Shaded} \begin{Highlighting}[] \NormalTok{depth }\CommentTok{\# error} \end{Highlighting} \end{Shaded}
Suppose you now want to find out the area of our imaginary rectangle, which is height multiplied by width. Height equals 2, and width equals 4, so the result is 8.
We have two ways to calculate the area:
\begin{Shaded} \begin{Highlighting}[] \DecValTok{2} \SpecialCharTok{*} \DecValTok{4} \CommentTok{\#\textgreater{} [1] 8} \NormalTok{height }\SpecialCharTok{*}\NormalTok{ width} \CommentTok{\#\textgreater{} [1] 8} \end{Highlighting} \end{Shaded}
The second line with objects is more advanced than the first line with constants. Let's also assign this result to a new object, called area.
\begin{Shaded} \begin{Highlighting}[] \NormalTok{area }\OtherTok{\textless{}{-}}\NormalTok{ height }\SpecialCharTok{*}\NormalTok{ width} \NormalTok{area } \CommentTok{\#\textgreater{} [1] 8} \end{Highlighting} \end{Shaded}
We can print the value of the object \texttt{area}: type and execute \texttt{area}. It's 8. Inspecting the workspace again with \texttt{ls()} shows that the workspace now contains \texttt{area} as well, next to \texttt{height} and \texttt{width}. Now, this is all great, but what if you want to recalculate the area of your imaginary rectangle when the height is 3 and the width is 6? You'd have to reassign the objects width and height in the script window, and then recalculate the area. The value of area will change; executing area will return 18.
\begin{Shaded} \begin{Highlighting}[] \NormalTok{height }\OtherTok{\textless{}{-}} \DecValTok{3} \NormalTok{width }\OtherTok{\textless{}{-}} \DecValTok{6} \NormalTok{area }\OtherTok{\textless{}{-}}\NormalTok{ height }\SpecialCharTok{*}\NormalTok{ width} \NormalTok{area } \CommentTok{\#\textgreater{} [1] 18} \end{Highlighting} \end{Shaded}
How do we find the perimeter of this rectangle? Let's create a new object called perimeter.
\begin{Shaded} \begin{Highlighting}[] \NormalTok{perimeter }\OtherTok{\textless{}{-}} \DecValTok{2}\SpecialCharTok{*}\NormalTok{(width}\SpecialCharTok{+}\NormalTok{height)} \NormalTok{perimeter} \CommentTok{\#\textgreater{} [1] 18} \end{Highlighting} \end{Shaded}
Let's sum up objects. The general form of creating or modifying an object is an object name, the assignment operator and an expression.
\begin{Shaded} \begin{Highlighting}[] \NormalTok{object\_name }\OtherTok{\textless{}{-}}\NormalTok{ expression} \end{Highlighting} \end{Shaded}
First, we have to choose a valid object name. An object name can contain letters from the English alphabet, digits, underscores and dots, and it needs to start with a letter. The expression can be a simple constant, an object name, or a combination of constants and object names with operators and parentheses. We can create a logical, double, integer or character object:
\begin{Shaded} \begin{Highlighting}[] \NormalTok{x.logical }\OtherTok{\textless{}{-}} \ConstantTok{TRUE} \NormalTok{y.double }\OtherTok{\textless{}{-}} \FloatTok{12.3} \NormalTok{z.integer }\OtherTok{\textless{}{-}}\NormalTok{ 12L} \NormalTok{k.character }\OtherTok{\textless{}{-}} \StringTok{"Hello world!"} \end{Highlighting} \end{Shaded}
and we can print their types with the \texttt{typeof()} and \texttt{class()} functions.
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{typeof}\NormalTok{(x.logical)} \CommentTok{\#\textgreater{} [1] "logical"} \FunctionTok{class}\NormalTok{(x.logical)} \CommentTok{\#\textgreater{} [1] "logical"} \FunctionTok{typeof}\NormalTok{(y.double)} \CommentTok{\#\textgreater{} [1] "double"} \FunctionTok{class}\NormalTok{(y.double)} \CommentTok{\#\textgreater{} [1] "numeric"} \FunctionTok{typeof}\NormalTok{(z.integer)} \CommentTok{\#\textgreater{} [1] "integer"} \FunctionTok{class}\NormalTok{(z.integer)} \CommentTok{\#\textgreater{} [1] "integer"} \FunctionTok{typeof}\NormalTok{(k.character)} \CommentTok{\#\textgreater{} [1] "character"} \FunctionTok{class}\NormalTok{(k.character)} \CommentTok{\#\textgreater{} [1] "character"} \end{Highlighting} \end{Shaded} \hypertarget{testing-the-type}{% \subsection{Testing the type}\label{testing-the-type}} Instead of asking for the type or class of an object, you can also use the is-dot-functions to see whether objects are actually of a certain type. To see if an object is a double, we can use the \texttt{is.double()} function. It returns a logical value. \texttt{TRUE} or \texttt{FALSE}. To see if an object is integer, we can use \texttt{is.integer()}. There is \texttt{is.numeric()} function to see whether objects are numeric. The integer and double are numerics. Let's try the \texttt{is.logical()} and the \texttt{is.character()} functions. \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# is.*() functions, test of types} \FunctionTok{is.double}\NormalTok{(x.logical)} \CommentTok{\#\textgreater{} [1] FALSE} \FunctionTok{is.double}\NormalTok{(y.double)} \CommentTok{\#\textgreater{} [1] TRUE} \FunctionTok{is.double}\NormalTok{(z.integer)} \CommentTok{\#\textgreater{} [1] FALSE} \FunctionTok{is.double}\NormalTok{(k.character)} \CommentTok{\#\textgreater{} [1] FALSE} \FunctionTok{is.integer}\NormalTok{(x.logical)} \CommentTok{\#\textgreater{} [1] FALSE} \FunctionTok{is.integer}\NormalTok{(y.double)} \CommentTok{\#\textgreater{} [1] FALSE} \FunctionTok{is.integer}\NormalTok{(z.integer)} \CommentTok{\#\textgreater{} [1] TRUE} \FunctionTok{is.integer}\NormalTok{(k.character)} \CommentTok{\#\textgreater{} [1] FALSE} \FunctionTok{is.numeric}\NormalTok{(x.logical)} \CommentTok{\#\textgreater{} [1] FALSE} \FunctionTok{is.numeric}\NormalTok{(y.double)} \CommentTok{\#\textgreater{} [1] TRUE} \FunctionTok{is.numeric}\NormalTok{(z.integer)} \CommentTok{\#\textgreater{} [1] TRUE} \FunctionTok{is.numeric}\NormalTok{(k.character)} \CommentTok{\#\textgreater{} [1] FALSE} \FunctionTok{is.logical}\NormalTok{(x.logical)} \CommentTok{\#\textgreater{} [1] TRUE} \FunctionTok{is.logical}\NormalTok{(y.double)} \CommentTok{\#\textgreater{} [1] FALSE} \FunctionTok{is.logical}\NormalTok{(z.integer)} \CommentTok{\#\textgreater{} [1] FALSE} \FunctionTok{is.logical}\NormalTok{(k.character)} \CommentTok{\#\textgreater{} [1] FALSE} \FunctionTok{is.character}\NormalTok{(x.logical)} \CommentTok{\#\textgreater{} [1] FALSE} \FunctionTok{is.character}\NormalTok{(y.double)} \CommentTok{\#\textgreater{} [1] FALSE} \FunctionTok{is.character}\NormalTok{(z.integer)} \CommentTok{\#\textgreater{} [1] FALSE} \FunctionTok{is.character}\NormalTok{(k.character)} \CommentTok{\#\textgreater{} [1] TRUE} \end{Highlighting} \end{Shaded} \hypertarget{coercion}{% \subsection{Coercion}\label{coercion}} There are cases in which you want to change the type of an object to another one. How would that work? This is where coercion comes into play! 
By using the as-dot-functions one can coerce the type of a variable to another type. Many ways of transformation between types are possible. Have a look at these examples. \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# as.*() functions, coercion} \FunctionTok{as.logical}\NormalTok{(y.double)} \CommentTok{\#\textgreater{} [1] TRUE} \FunctionTok{as.logical}\NormalTok{(z.integer)} \CommentTok{\#\textgreater{} [1] TRUE} \FunctionTok{as.logical}\NormalTok{(k.character)} \CommentTok{\#\textgreater{} [1] NA} \end{Highlighting} \end{Shaded} The first three commands here coerce three different objects to a logical. Every number except zero coerced to TRUE, so the first two commands return TRUE. The third command outputs an NA, a missing value. R doesn't understand how to transform ``Hello world'' into a logical, and decides to return a Not Available instead. We can try to convert zero to logical. \begin{Shaded} \begin{Highlighting}[] \FunctionTok{as.logical}\NormalTok{(}\DecValTok{0}\NormalTok{)} \CommentTok{\#\textgreater{} [1] FALSE} \end{Highlighting} \end{Shaded} The result is \texttt{FALSE}. We can easily coerce logical, integer and double to character. \begin{Shaded} \begin{Highlighting}[] \FunctionTok{as.character}\NormalTok{(y.double)} \CommentTok{\#\textgreater{} [1] "12.3"} \FunctionTok{as.character}\NormalTok{(z.integer)} \CommentTok{\#\textgreater{} [1] "12"} \FunctionTok{as.character}\NormalTok{(x.logical)} \CommentTok{\#\textgreater{} [1] "TRUE"} \end{Highlighting} \end{Shaded} Let's try to find out how to convert logical and character to number? What functions we have in R with which we can achieve this? Yes, \texttt{as.double()}, \texttt{as.integer()} and \texttt{as.numeric()}. \texttt{as.numeric()} is identical to \texttt{as.double()}. \begin{Shaded} \begin{Highlighting}[] \FunctionTok{as.double}\NormalTok{(x.logical)} \CommentTok{\#\textgreater{} [1] 1} \FunctionTok{as.double}\NormalTok{(z.integer)} \CommentTok{\#\textgreater{} [1] 12} \FunctionTok{as.double}\NormalTok{(k.character)} \CommentTok{\#\textgreater{} [1] NA} \FunctionTok{as.integer}\NormalTok{(y.double)} \CommentTok{\#\textgreater{} [1] 12} \FunctionTok{as.integer}\NormalTok{(x.logical)} \CommentTok{\#\textgreater{} [1] 1} \FunctionTok{as.integer}\NormalTok{(k.character)} \CommentTok{\#\textgreater{} [1] NA} \FunctionTok{as.numeric}\NormalTok{(x.logical)} \CommentTok{\#\textgreater{} [1] 1} \FunctionTok{as.numeric}\NormalTok{(y.double)} \CommentTok{\#\textgreater{} [1] 12.3} \FunctionTok{as.numeric}\NormalTok{(z.integer)} \CommentTok{\#\textgreater{} [1] 12} \FunctionTok{as.numeric}\NormalTok{(k.character)} \CommentTok{\#\textgreater{} [1] NA} \end{Highlighting} \end{Shaded} Logical TRUE coerces to the numeric one (1). FALSE, however, coerces to the numeric zero (0). Valid number in a string coerces to number, invalid number in a string, for example ``hello'', coerces missing value. R doesn't understand how to transform ``hello'' into a numeric, and decides to return a Not Available (NA) instead. 
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{as.double}\NormalTok{(}\ConstantTok{TRUE}\NormalTok{)} \CommentTok{\#\textgreater{} [1] 1} \FunctionTok{as.numeric}\NormalTok{(}\ConstantTok{TRUE}\NormalTok{)} \CommentTok{\#\textgreater{} [1] 1} \FunctionTok{as.integer}\NormalTok{(}\ConstantTok{TRUE}\NormalTok{)} \CommentTok{\#\textgreater{} [1] 1} \FunctionTok{as.double}\NormalTok{(}\ConstantTok{FALSE}\NormalTok{)} \CommentTok{\#\textgreater{} [1] 0} \FunctionTok{as.numeric}\NormalTok{(}\ConstantTok{FALSE}\NormalTok{)} \CommentTok{\#\textgreater{} [1] 0} \FunctionTok{as.integer}\NormalTok{(}\ConstantTok{FALSE}\NormalTok{)} \CommentTok{\#\textgreater{} [1] 0} \FunctionTok{as.double}\NormalTok{(}\StringTok{"12.3"}\NormalTok{)} \CommentTok{\#\textgreater{} [1] 12.3} \FunctionTok{as.numeric}\NormalTok{(}\StringTok{"12.3"}\NormalTok{)} \CommentTok{\#\textgreater{} [1] 12.3} \FunctionTok{as.integer}\NormalTok{(}\StringTok{"12.3"}\NormalTok{)} \CommentTok{\#\textgreater{} [1] 12} \FunctionTok{as.double}\NormalTok{(}\StringTok{"hello"}\NormalTok{)} \CommentTok{\#\textgreater{} [1] NA} \FunctionTok{as.numeric}\NormalTok{(}\StringTok{"hello"}\NormalTok{)} \CommentTok{\#\textgreater{} [1] NA} \FunctionTok{as.integer}\NormalTok{(}\StringTok{"hello"}\NormalTok{)} \CommentTok{\#\textgreater{} [1] NA} \end{Highlighting} \end{Shaded}
\hypertarget{data-structures}{% \section{Data structures}\label{data-structures}}
\hypertarget{vectors}{% \subsection{Vectors}\label{vectors}}
In R, we use data sets all the time. Data sets are collections of values: double, integer, character or logical values. They are the result of scientific measurements, surveys or other data collection methods. For example, you may record the ages of the members of your family. In R, we have to use the \texttt{c()} function for this, which allows you to combine values into a vector. Consider a four-member family: two children, a mother and a father. We can combine the ages of the family members. Execute this command.
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{c}\NormalTok{(}\DecValTok{18}\NormalTok{, }\DecValTok{20}\NormalTok{, }\DecValTok{47}\NormalTok{, }\DecValTok{49}\NormalTok{)} \CommentTok{\#\textgreater{} [1] 18 20 47 49} \end{Highlighting} \end{Shaded}
As you can see in the output, it is a vector. A vector is nothing more than a sequence of data elements of the same basic data type. This is a double vector. We can check this with the \texttt{typeof()} or \texttt{is.double()} functions.
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{typeof}\NormalTok{(}\FunctionTok{c}\NormalTok{(}\DecValTok{18}\NormalTok{, }\DecValTok{20}\NormalTok{, }\DecValTok{47}\NormalTok{, }\DecValTok{49}\NormalTok{))} \CommentTok{\#\textgreater{} [1] "double"} \FunctionTok{is.double}\NormalTok{(}\FunctionTok{c}\NormalTok{(}\DecValTok{18}\NormalTok{, }\DecValTok{20}\NormalTok{, }\DecValTok{47}\NormalTok{, }\DecValTok{49}\NormalTok{))} \CommentTok{\#\textgreater{} [1] TRUE} \end{Highlighting} \end{Shaded}
Of course, we could also assign this double vector to a new object, \texttt{age} for example.
\begin{Shaded} \begin{Highlighting}[] \NormalTok{age }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\DecValTok{18}\NormalTok{, }\DecValTok{20}\NormalTok{, }\DecValTok{47}\NormalTok{, }\DecValTok{49}\NormalTok{)} \end{Highlighting} \end{Shaded}
We can verify that it is a vector by typing
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{is.vector}\NormalTok{(age)} \CommentTok{\#\textgreater{} [1] TRUE} \end{Highlighting} \end{Shaded}
We can also check the top right pane, the workspace. \texttt{age} is listed, and we can see that it is a double vector with four elements, indicated by ``num''. ``Num'' means double. We can print the value of this vector.
\begin{Shaded} \begin{Highlighting}[] \NormalTok{age} \CommentTok{\#\textgreater{} [1] 18 20 47 49} \end{Highlighting} \end{Shaded}
Please execute the \texttt{age} command. We can see that \texttt{age} contains four elements. The first element is 18, the second is 20, the third is 47, and the last element is 49. Every vector is a sequence of data elements. We can check the length of this vector with the \texttt{length()} function.
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{length}\NormalTok{(age)} \CommentTok{\#\textgreater{} [1] 4} \end{Highlighting} \end{Shaded}
It tells us that the \texttt{age} vector holds four elements: the length of this vector is 4. Good to know: the vector is the simplest data structure in R. The objects we created in the previous topic are also vectors. They're all just vectors of length 1. They contain a single number (for example the object \texttt{height}) or a single character string (the object \texttt{k.character}). We can check this with the \texttt{is.vector()} function.
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{is.vector}\NormalTok{(height)} \CommentTok{\#\textgreater{} [1] TRUE} \FunctionTok{is.vector}\NormalTok{(k.character)} \CommentTok{\#\textgreater{} [1] TRUE} \end{Highlighting} \end{Shaded}
So, to sum up, a vector is a sequence of data elements; it is a one-dimensional data structure. The last important thing is that in R, a vector can only hold elements of the same type. This means that you cannot have a vector that contains both logicals and numerics, for example. If you do try to build such a vector, R automatically performs coercion to make sure that you end up with a vector that contains elements of the same type. Let's see how that works with an example.
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{c}\NormalTok{(}\DecValTok{12}\NormalTok{, }\ConstantTok{TRUE}\NormalTok{)} \CommentTok{\#\textgreater{} [1] 12 1} \FunctionTok{c}\NormalTok{(}\DecValTok{12}\NormalTok{, }\StringTok{"Hello"}\NormalTok{)} \CommentTok{\#\textgreater{} [1] "12" "Hello"} \FunctionTok{c}\NormalTok{(}\StringTok{"Hello"}\NormalTok{, }\ConstantTok{TRUE}\NormalTok{)} \CommentTok{\#\textgreater{} [1] "Hello" "TRUE"} \FunctionTok{c}\NormalTok{(}\StringTok{"Hello"}\NormalTok{, }\ConstantTok{TRUE}\NormalTok{, }\FloatTok{23.1}\NormalTok{)} \CommentTok{\#\textgreater{} [1] "Hello" "TRUE" "23.1"} \end{Highlighting} \end{Shaded}
If you now inspect these vectors, you'll see that the logical value was coerced to numeric in the first command, and the numeric or logical values were coerced to characters in the others. So, to sum up, a vector is a one-dimensional and homogeneous data structure. Let's practise creating a vector: store the gender of the family members.
\begin{Shaded} \begin{Highlighting}[] \NormalTok{gender }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"male"}\NormalTok{, }\StringTok{"male"}\NormalTok{, }\StringTok{"female"}\NormalTok{, }\StringTok{"male"}\NormalTok{) } \NormalTok{gender} \CommentTok{\#\textgreater{} [1] "male" "male" "female" "male"} \end{Highlighting} \end{Shaded} Gender is a character vector that has length 4. You can check with the \texttt{length()} function, and the top right pane. \begin{Shaded} \begin{Highlighting}[] \FunctionTok{length}\NormalTok{(gender)} \CommentTok{\#\textgreater{} [1] 4} \end{Highlighting} \end{Shaded} \hypertarget{factor}{% \subsection{Factor}\label{factor}} Factor is about categorical variables. Unlike numerical variables, categorical variables can only take on a limited number of different values. A categorical variable can only belong to a limited number of categories. If you want to store categorical data in R, you have to use factors. This is the only way that the statistical modelling techniques handle such data correctly. If we meet categorical variables, we need the factor data structure in R. A good example of a categorical variable is a person's gender. It can be male or female. Gender is a categorical variable in statistics. We've created the gender object as a character vector. But, as we mentioned, we are not ready. We need to convert this vector to factor. You can use the \texttt{factor()} function. \begin{Shaded} \begin{Highlighting}[] \NormalTok{gender.fact }\OtherTok{\textless{}{-}} \FunctionTok{factor}\NormalTok{(gender)} \NormalTok{gender.fact} \CommentTok{\#\textgreater{} [1] male male female male } \CommentTok{\#\textgreater{} Levels: female male} \end{Highlighting} \end{Shaded} The printout looks somewhat different than the original one: there are no double quotes anymore and also the factor levels, corresponding to the different categories, are printed. R basically does two things when you call the factor function on a character vector: first of all, it scans through the vector to see the different categories that are in there. In this case, that's ``female'' and ``male''. Notice here that R sorts the levels alphabetically. Next, it converts the character vector, \texttt{gender} in this example, to a vector of integer values. These integers correspond to a set of character values to use when the factor is displayed. These character values are called labels or levels. Inspecting the structure reveals this. We can use the \texttt{unclass()} to uncover the factor. You can see the underlying integer vector and the character vector of levels. We're dealing with a factor with 2 levels. The ``female'''s are encoded as 1, because it's the first level, ``male'' is encoded as 2, because it's the second level. \begin{Shaded} \begin{Highlighting}[] \FunctionTok{unclass}\NormalTok{(gender.fact)} \CommentTok{\#\textgreater{} [1] 2 2 1 2} \CommentTok{\#\textgreater{} attr(,"levels")} \CommentTok{\#\textgreater{} [1] "female" "male"} \end{Highlighting} \end{Shaded} Why this conversion? Well, it can be that your categories are very long character strings. Each time repeating this string per observation can take up a lot of memory. By using this simple encoding, much less space is necessary. Just remember that factors are actually integer vectors, where each integer corresponds to a category, or a level. We can also use the \texttt{str()} function to display the internal structure of an R object, of a factor in this case. 
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{str}\NormalTok{(gender.fact)} \CommentTok{\#\textgreater{} Factor w/ 2 levels "female","male": 2 2 1 2} \end{Highlighting} \end{Shaded}
Finally, we can check the type and the class of this factor with the \texttt{typeof()} and \texttt{class()} functions. We can also ask whether the object is a vector or a factor.
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{typeof}\NormalTok{(gender.fact)} \CommentTok{\#\textgreater{} [1] "integer"} \FunctionTok{class}\NormalTok{(gender.fact)} \CommentTok{\#\textgreater{} [1] "factor"} \FunctionTok{is.vector}\NormalTok{(gender.fact)} \CommentTok{\#\textgreater{} [1] FALSE} \FunctionTok{is.factor}\NormalTok{(gender.fact)} \CommentTok{\#\textgreater{} [1] TRUE} \end{Highlighting} \end{Shaded}
To sum up factors: like a vector, a factor is one-dimensional and homogeneous. In fact, a factor is stored as an integer vector where each integer has a label. Factor elements can take on one of a specific set of values; the factor \texttt{gender.fact} will take on only the values ``male'' or ``female''. The set of values that the elements of a factor can take is called its levels.
\hypertarget{matrix}{% \subsection{Matrix}\label{matrix}}
A matrix is similar to a vector. Where a vector is a sequence of data elements, which is one-dimensional, a matrix is a similar collection of data elements, but this time arranged into a fixed number of rows and columns. Since you are only working with rows and columns, a matrix is called two-dimensional. As with the vector, the matrix can contain only one type. To build a matrix, you use the \texttt{matrix()} function. Most importantly, it needs a vector, containing the values you want to place in the matrix, and at least one matrix dimension: rows and/or columns. Have a look at the following example, which creates a 2-by-3 matrix containing the values 1 to 6, by specifying the vector and setting the \texttt{nrow=} argument to 2:
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{matrix}\NormalTok{(}\DecValTok{1}\SpecialCharTok{:}\DecValTok{6}\NormalTok{, }\AttributeTok{nrow =} \DecValTok{2}\NormalTok{)} \CommentTok{\#\textgreater{} [,1] [,2] [,3]} \CommentTok{\#\textgreater{} [1,] 1 3 5} \CommentTok{\#\textgreater{} [2,] 2 4 6} \end{Highlighting} \end{Shaded}
R sees that the input vector has length 6 and that there have to be two rows. It then infers that you'll probably want 3 columns, such that the number of matrix elements matches the number of input vector elements. You could just as well specify \texttt{ncol=} instead of \texttt{nrow=}; in this case, R infers the number of rows automatically.
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{matrix}\NormalTok{(}\DecValTok{1}\SpecialCharTok{:}\DecValTok{6}\NormalTok{, }\AttributeTok{ncol =} \DecValTok{3}\NormalTok{)} \CommentTok{\#\textgreater{} [,1] [,2] [,3]} \CommentTok{\#\textgreater{} [1,] 1 3 5} \CommentTok{\#\textgreater{} [2,] 2 4 6} \end{Highlighting} \end{Shaded}
In both these examples, R takes the vector containing the values 1 to 6, and fills it up, column by column.
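As mentioned above, a matrix, just like a vector, can hold only one type of element. As a small optional aside (the example values are chosen only for illustration), mixing types in the input vector triggers the same coercion we saw earlier with \texttt{c()}:
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# mixing a double, a string and a logical: everything is coerced to character}
\FunctionTok{matrix}\NormalTok{(}\FunctionTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{, }\StringTok{"a"}\NormalTok{, }\ConstantTok{TRUE}\NormalTok{, }\DecValTok{2}\NormalTok{), }\AttributeTok{nrow =} \DecValTok{2}\NormalTok{)}
\CommentTok{\#\textgreater{}      [,1] [,2]  }
\CommentTok{\#\textgreater{} [1,] "1"  "TRUE"}
\CommentTok{\#\textgreater{} [2,] "a"  "2"   }
\end{Highlighting}
\end{Shaded}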
If you prefer to fill up the matrix in a row-wise fashion, such that the 1, 2 and 3 are in the first row, you can set the \texttt{byrow=} argument of matrix to \texttt{TRUE} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{matrix}\NormalTok{(}\DecValTok{1}\SpecialCharTok{:}\DecValTok{6}\NormalTok{, }\AttributeTok{nrow =} \DecValTok{2}\NormalTok{, }\AttributeTok{byrow =}\NormalTok{ T)} \CommentTok{\#\textgreater{} [,1] [,2] [,3]} \CommentTok{\#\textgreater{} [1,] 1 2 3} \CommentTok{\#\textgreater{} [2,] 4 5 6} \end{Highlighting} \end{Shaded} Suppose you pass a vector containing the values 1 to 3 to the matrix function, and explicitly say you want a matrix with 2 rows and 3 columns: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{matrix}\NormalTok{(}\DecValTok{1}\SpecialCharTok{:}\DecValTok{3}\NormalTok{, }\AttributeTok{nrow =} \DecValTok{2}\NormalTok{, }\AttributeTok{ncol =} \DecValTok{3}\NormalTok{)} \CommentTok{\#\textgreater{} [,1] [,2] [,3]} \CommentTok{\#\textgreater{} [1,] 1 3 2} \CommentTok{\#\textgreater{} [2,] 2 1 3} \end{Highlighting} \end{Shaded} R fills up the matrix column by column and simply repeats the vector. If you try to fill up the matrix with a vector whose multiple does not nicely fit in the matrix, for example when you want to put a 4-element vector in a 6-element matrix, R generates a warning message. \begin{Shaded} \begin{Highlighting}[] \FunctionTok{matrix}\NormalTok{(}\DecValTok{1}\SpecialCharTok{:}\DecValTok{4}\NormalTok{, }\AttributeTok{nrow =} \DecValTok{2}\NormalTok{, }\AttributeTok{ncol =} \DecValTok{3}\NormalTok{)} \CommentTok{\#\textgreater{} [,1] [,2] [,3]} \CommentTok{\#\textgreater{} [1,] 1 3 1} \CommentTok{\#\textgreater{} [2,] 2 4 2} \end{Highlighting} \end{Shaded} Actually, apart from the \texttt{matrix()} function, there's yet another easy way to create matrices that is more intuitive in some cases. You can paste vectors together using the \texttt{cbind()} and \texttt{rbind()} functions. Have a look at these calls: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{cbind}\NormalTok{(}\DecValTok{1}\SpecialCharTok{:}\DecValTok{3}\NormalTok{, }\DecValTok{1}\SpecialCharTok{:}\DecValTok{3}\NormalTok{)} \CommentTok{\#\textgreater{} [,1] [,2]} \CommentTok{\#\textgreater{} [1,] 1 1} \CommentTok{\#\textgreater{} [2,] 2 2} \CommentTok{\#\textgreater{} [3,] 3 3} \FunctionTok{rbind}\NormalTok{(}\DecValTok{1}\SpecialCharTok{:}\DecValTok{3}\NormalTok{, }\DecValTok{1}\SpecialCharTok{:}\DecValTok{3}\NormalTok{)} \CommentTok{\#\textgreater{} [,1] [,2] [,3]} \CommentTok{\#\textgreater{} [1,] 1 2 3} \CommentTok{\#\textgreater{} [2,] 1 2 3} \end{Highlighting} \end{Shaded} \texttt{cbind()}, short for column bind, takes the vectors you pass it, and sticks them together as if they were columns of a matrix. The \texttt{rbind()} function, short for row bind, does the same thing but takes the input as rows and makes a matrix out of them. These functions can come in pretty handy, because they're often more easy to use than the \texttt{matrix()} function. The bind functions I just introduced can also handle matrices actually, so you can easily use them to paste another row or another column to an already existing matrix. 
Suppose you have a matrix \texttt{m}, containing the elements 1 to 6: \begin{Shaded} \begin{Highlighting}[] \NormalTok{m }\OtherTok{\textless{}{-}} \FunctionTok{matrix}\NormalTok{(}\DecValTok{1}\SpecialCharTok{:}\DecValTok{6}\NormalTok{, }\AttributeTok{byrow =}\NormalTok{ T, }\AttributeTok{nrow=}\DecValTok{2}\NormalTok{)} \NormalTok{m} \CommentTok{\#\textgreater{} [,1] [,2] [,3]} \CommentTok{\#\textgreater{} [1,] 1 2 3} \CommentTok{\#\textgreater{} [2,] 4 5 6} \end{Highlighting} \end{Shaded} If you want to add another row to it, containing the values 7, 8, 9, you could simply run this command: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{rbind}\NormalTok{(m, }\FunctionTok{c}\NormalTok{(}\DecValTok{7}\NormalTok{, }\DecValTok{8}\NormalTok{, }\DecValTok{9}\NormalTok{))} \CommentTok{\#\textgreater{} [,1] [,2] [,3]} \CommentTok{\#\textgreater{} [1,] 1 2 3} \CommentTok{\#\textgreater{} [2,] 4 5 6} \CommentTok{\#\textgreater{} [3,] 7 8 9} \end{Highlighting} \end{Shaded} You can do a similar thing with \texttt{cbind()}: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{cbind}\NormalTok{(m, }\FunctionTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{2}\NormalTok{))} \CommentTok{\#\textgreater{} [,1] [,2] [,3] [,4]} \CommentTok{\#\textgreater{} [1,] 1 2 3 1} \CommentTok{\#\textgreater{} [2,] 4 5 6 2} \end{Highlighting} \end{Shaded} Next up is naming the matrix. You could assign names to both columns and rows. That's why R came up with the \texttt{rownames()} and \texttt{colnames()} functions. Their use is pretty straightforward. Retaking the matrix \texttt{m} from before, \begin{Shaded} \begin{Highlighting}[] \FunctionTok{rownames}\NormalTok{(m) }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"row.1"}\NormalTok{, }\StringTok{"row.2"}\NormalTok{)} \NormalTok{m} \CommentTok{\#\textgreater{} [,1] [,2] [,3]} \CommentTok{\#\textgreater{} row.1 1 2 3} \CommentTok{\#\textgreater{} row.2 4 5 6} \FunctionTok{colnames}\NormalTok{(m) }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"col.1"}\NormalTok{, }\StringTok{"col.2"}\NormalTok{, }\StringTok{"col.3"}\NormalTok{)} \NormalTok{m} \CommentTok{\#\textgreater{} col.1 col.2 col.3} \CommentTok{\#\textgreater{} row.1 1 2 3} \CommentTok{\#\textgreater{} row.2 4 5 6} \end{Highlighting} \end{Shaded} Printing m shows that it worked. Just as with vectors, there are also one-liner ways of naming matrices while you're building it. You use the \texttt{dimnames=} argument of the matrix function for this. Check this out. \begin{Shaded} \begin{Highlighting}[] \FunctionTok{matrix}\NormalTok{(}\DecValTok{1}\SpecialCharTok{:}\DecValTok{6}\NormalTok{, }\AttributeTok{byrow =}\NormalTok{ T, }\AttributeTok{nrow=}\DecValTok{2}\NormalTok{, } \AttributeTok{dimnames =} \FunctionTok{list}\NormalTok{(} \AttributeTok{rows=}\FunctionTok{c}\NormalTok{(}\StringTok{"row.1"}\NormalTok{, }\StringTok{"row.2"}\NormalTok{), } \AttributeTok{cols=}\FunctionTok{c}\NormalTok{(}\StringTok{"col.1"}\NormalTok{, }\StringTok{"col.2"}\NormalTok{, }\StringTok{"col.3"}\NormalTok{)))} \CommentTok{\#\textgreater{} cols} \CommentTok{\#\textgreater{} rows col.1 col.2 col.3} \CommentTok{\#\textgreater{} row.1 1 2 3} \CommentTok{\#\textgreater{} row.2 4 5 6} \end{Highlighting} \end{Shaded} You can create logical or character matrices as well. 
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{matrix}\NormalTok{(}\FunctionTok{c}\NormalTok{(T, F), }\AttributeTok{nrow=}\DecValTok{3}\NormalTok{, }\AttributeTok{ncol=}\DecValTok{4}\NormalTok{)} \CommentTok{\#\textgreater{} [,1] [,2] [,3] [,4]} \CommentTok{\#\textgreater{} [1,] TRUE FALSE TRUE FALSE} \CommentTok{\#\textgreater{} [2,] FALSE TRUE FALSE TRUE} \CommentTok{\#\textgreater{} [3,] TRUE FALSE TRUE FALSE} \FunctionTok{matrix}\NormalTok{(}\FunctionTok{c}\NormalTok{(}\StringTok{"Jane"}\NormalTok{, }\StringTok{"Mark"}\NormalTok{), }\AttributeTok{nrow=}\DecValTok{3}\NormalTok{, }\AttributeTok{ncol=}\DecValTok{4}\NormalTok{)} \CommentTok{\#\textgreater{} [,1] [,2] [,3] [,4] } \CommentTok{\#\textgreater{} [1,] "Jane" "Mark" "Jane" "Mark"} \CommentTok{\#\textgreater{} [2,] "Mark" "Jane" "Mark" "Jane"} \CommentTok{\#\textgreater{} [3,] "Jane" "Mark" "Jane" "Mark"} \end{Highlighting} \end{Shaded}
\hypertarget{array}{% \subsection{Array}\label{array}}
In R, an array is a vector with two or more dimensions. A matrix is actually a two-dimensional array; an array is like a stacked matrix. So let's build some data that we can use to demonstrate that. First of all, let's build up a character vector.
\begin{Shaded} \begin{Highlighting}[] \NormalTok{vector.chr }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"twas"}\NormalTok{,}\StringTok{"brillig"}\NormalTok{,}\StringTok{"and"}\NormalTok{,}\StringTok{"the"}\NormalTok{,}\StringTok{"slithey"}\NormalTok{,}\StringTok{"toves"}\NormalTok{,}\StringTok{"did"}\NormalTok{,}\StringTok{"gyre"}\NormalTok{,}\StringTok{"and"}\NormalTok{,}\StringTok{"gimble"}\NormalTok{,}\StringTok{"in"}\NormalTok{,}\StringTok{"wabe"}\NormalTok{)} \end{Highlighting} \end{Shaded}
Now let's create an array out of that with the \texttt{array()} function.
\begin{Shaded} \begin{Highlighting}[] \NormalTok{array.chr }\OtherTok{\textless{}{-}} \FunctionTok{array}\NormalTok{(}\AttributeTok{data =}\NormalTok{ vector.chr, }\AttributeTok{dim =} \FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{, }\DecValTok{3}\NormalTok{, }\DecValTok{2}\NormalTok{))} \NormalTok{array.chr} \CommentTok{\#\textgreater{} , , 1} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} [,1] [,2] [,3] } \CommentTok{\#\textgreater{} [1,] "twas" "and" "slithey"} \CommentTok{\#\textgreater{} [2,] "brillig" "the" "toves" } \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} , , 2} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} [,1] [,2] [,3] } \CommentTok{\#\textgreater{} [1,] "did" "and" "in" } \CommentTok{\#\textgreater{} [2,] "gyre" "gimble" "wabe"} \end{Highlighting} \end{Shaded}
With the \texttt{dim=} argument we give the array its dimensions; in this case we concatenate three values: two rows, three columns, and two layers. The result is an array with three dimensions. In the output you can see two tables, which look like two matrices: the first layer and the second layer, each with two rows and three columns. Of course, we can create logical or numeric arrays as well.
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{array}\NormalTok{(}\DecValTok{1}\SpecialCharTok{:}\DecValTok{30}\NormalTok{, }\AttributeTok{dim =} \FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{, }\DecValTok{3}\NormalTok{, }\DecValTok{5}\NormalTok{))} \CommentTok{\#\textgreater{} , , 1} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} [,1] [,2] [,3]} \CommentTok{\#\textgreater{} [1,] 1 3 5} \CommentTok{\#\textgreater{} [2,] 2 4 6} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} , , 2} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} [,1] [,2] [,3]} \CommentTok{\#\textgreater{} [1,] 7 9 11} \CommentTok{\#\textgreater{} [2,] 8 10 12} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} , , 3} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} [,1] [,2] [,3]} \CommentTok{\#\textgreater{} [1,] 13 15 17} \CommentTok{\#\textgreater{} [2,] 14 16 18} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} , , 4} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} [,1] [,2] [,3]} \CommentTok{\#\textgreater{} [1,] 19 21 23} \CommentTok{\#\textgreater{} [2,] 20 22 24} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} , , 5} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} [,1] [,2] [,3]} \CommentTok{\#\textgreater{} [1,] 25 27 29} \CommentTok{\#\textgreater{} [2,] 26 28 30} \FunctionTok{array}\NormalTok{(}\FunctionTok{c}\NormalTok{(T, F, T), }\AttributeTok{dim =} \FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{, }\DecValTok{3}\NormalTok{, }\DecValTok{5}\NormalTok{))} \CommentTok{\#\textgreater{} , , 1} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} [,1] [,2] [,3]} \CommentTok{\#\textgreater{} [1,] TRUE TRUE FALSE} \CommentTok{\#\textgreater{} [2,] FALSE TRUE TRUE} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} , , 2} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} [,1] [,2] [,3]} \CommentTok{\#\textgreater{} [1,] TRUE TRUE FALSE} \CommentTok{\#\textgreater{} [2,] FALSE TRUE TRUE} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} , , 3} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} [,1] [,2] [,3]} \CommentTok{\#\textgreater{} [1,] TRUE TRUE FALSE} \CommentTok{\#\textgreater{} [2,] FALSE TRUE TRUE} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} , , 4} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} [,1] [,2] [,3]} \CommentTok{\#\textgreater{} [1,] TRUE TRUE FALSE} \CommentTok{\#\textgreater{} [2,] FALSE TRUE TRUE} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} , , 5} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} [,1] [,2] [,3]} \CommentTok{\#\textgreater{} [1,] TRUE TRUE FALSE} \CommentTok{\#\textgreater{} [2,] FALSE TRUE TRUE} \end{Highlighting} \end{Shaded} \hypertarget{list}{% \subsection{List}\label{list}} List is a one-dimensional and heterogeneous data structure. A list can contain all kinds of R objects, such as vectors and matrices, but also other R objects, such as data frames, factors and even an other list. Let's build a lists. We will store information about a family. We can create a list with \texttt{list()} function, we want to store the address, how many cars does the family have, name and age of the family members. 
\begin{Shaded} \begin{Highlighting}[] \NormalTok{my.family }\OtherTok{\textless{}{-}} \FunctionTok{list}\NormalTok{(}\AttributeTok{address=}\StringTok{"10 Downing Street"}\NormalTok{, }\AttributeTok{cars=}\DecValTok{5}\NormalTok{, }\AttributeTok{age=}\FunctionTok{c}\NormalTok{(}\DecValTok{12}\NormalTok{, }\DecValTok{15}\NormalTok{), }\AttributeTok{name=}\FunctionTok{c}\NormalTok{(}\StringTok{"Hermione"}\NormalTok{, }\StringTok{"Harry"}\NormalTok{))} \NormalTok{my.family} \CommentTok{\#\textgreater{} $address} \CommentTok{\#\textgreater{} [1] "10 Downing Street"} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} $cars} \CommentTok{\#\textgreater{} [1] 5} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} $age} \CommentTok{\#\textgreater{} [1] 12 15} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} $name} \CommentTok{\#\textgreater{} [1] "Hermione" "Harry"} \end{Highlighting} \end{Shaded}
\hypertarget{data-frame}{% \subsection{Data frame}\label{data-frame}}
The data frame is the most important data structure in R. R is a statistical programming language, and in statistics we are working with data sets. Vectors and factors are good examples of minimal data sets. Data sets typically consist of observations (cases, instances), and all these observations have some variables associated with them. We can have, for example, a data set of 4 people. Each person is an instance, and each has properties, such as their age and their gender. How could you store such information in R? We have done it in a numeric vector and a factor, called \texttt{age} and \texttt{gender.fact}. One-dimensional structures are not really convenient to work with, because we have to keep the observations together. We need a two-dimensional structure: we need a data frame. Let's create a data frame. We need to use the \texttt{data.frame()} function.
\begin{Shaded} \begin{Highlighting}[] \NormalTok{age }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\DecValTok{18}\NormalTok{, }\DecValTok{20}\NormalTok{, }\DecValTok{47}\NormalTok{, }\DecValTok{49}\NormalTok{)} \NormalTok{gender }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"male"}\NormalTok{, }\StringTok{"male"}\NormalTok{, }\StringTok{"female"}\NormalTok{, }\StringTok{"male"}\NormalTok{) } \NormalTok{gender.fact }\OtherTok{\textless{}{-}} \FunctionTok{factor}\NormalTok{(gender)} \NormalTok{df }\OtherTok{\textless{}{-}} \FunctionTok{data.frame}\NormalTok{(gender.fact, age)} \end{Highlighting} \end{Shaded}
We used the \texttt{age} vector and the \texttt{gender.fact} factor to create the new data frame object. Executing \texttt{df}, we can see the value of the data frame.
\begin{Shaded} \begin{Highlighting}[] \NormalTok{df} \CommentTok{\#\textgreater{} gender.fact age} \CommentTok{\#\textgreater{} 1 male 18} \CommentTok{\#\textgreater{} 2 male 20} \CommentTok{\#\textgreater{} 3 female 47} \CommentTok{\#\textgreater{} 4 male 49} \end{Highlighting} \end{Shaded}
This is a two-dimensional structure. It has rows and columns. The rows correspond to the observations, the people in our example, while the columns correspond to the variables, or the properties of each of these people. We can see that a data frame can contain elements of different types: the first column contains factor labels, and the second one contains numeric values. We can also see the names of the columns, gender.fact and age, which come from the function call, from the names of the vectors.
We can specify the names explicitly, for example
\begin{Shaded} \begin{Highlighting}[] \NormalTok{df }\OtherTok{\textless{}{-}} \FunctionTok{data.frame}\NormalTok{(}\AttributeTok{gender=}\NormalTok{gender.fact, age)} \NormalTok{df} \CommentTok{\#\textgreater{} gender age} \CommentTok{\#\textgreater{} 1 male 18} \CommentTok{\#\textgreater{} 2 male 20} \CommentTok{\#\textgreater{} 3 female 47} \CommentTok{\#\textgreater{} 4 male 49} \end{Highlighting} \end{Shaded}
If we print the value of this data frame, we can see the new name of the first column. We can also see the names of the rows, which are simply the numbers 1 to 4. There is still a restriction on the data types in a data frame: elements in the same column should be of the same type. That's not really a problem, because in one column, the \texttt{age} column for example, you'll always want a numeric, because an \texttt{age} is always a number, regardless of the observation. A data frame is a two-dimensional and heterogeneous data structure for storing small or big data sets. Typically, a data frame contains numeric vectors or factors of the same length. Rows correspond to observations, the four members of the family; columns correspond to variables, the properties of the members of the family. We can check the type and the class of the data frame.
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{typeof}\NormalTok{(df)} \CommentTok{\#\textgreater{} [1] "list"} \FunctionTok{class}\NormalTok{(df)} \CommentTok{\#\textgreater{} [1] "data.frame"} \FunctionTok{is.vector}\NormalTok{(df)} \CommentTok{\#\textgreater{} [1] FALSE} \FunctionTok{is.factor}\NormalTok{(df)} \CommentTok{\#\textgreater{} [1] FALSE} \FunctionTok{is.data.frame}\NormalTok{(df)} \CommentTok{\#\textgreater{} [1] TRUE} \end{Highlighting} \end{Shaded}
Finally, let's practise creating a data frame. We also know the heights of the members of the family. How could we store this data in a data frame?
\begin{Shaded} \begin{Highlighting}[] \NormalTok{height }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\DecValTok{172}\NormalTok{, }\DecValTok{180}\NormalTok{, }\DecValTok{167}\NormalTok{, }\DecValTok{183}\NormalTok{)} \NormalTok{df}\FloatTok{.2} \OtherTok{\textless{}{-}} \FunctionTok{data.frame}\NormalTok{(}\AttributeTok{gender =}\NormalTok{ gender.fact, age, height)} \NormalTok{df}\FloatTok{.2} \CommentTok{\#\textgreater{} gender age height} \CommentTok{\#\textgreater{} 1 male 18 172} \CommentTok{\#\textgreater{} 2 male 20 180} \CommentTok{\#\textgreater{} 3 female 47 167} \CommentTok{\#\textgreater{} 4 male 49 183} \FunctionTok{str}\NormalTok{(df}\FloatTok{.2}\NormalTok{)} \CommentTok{\#\textgreater{} \textquotesingle{}data.frame\textquotesingle{}: 4 obs.
of 3 variables:} \CommentTok{\#\textgreater{} $ gender: Factor w/ 2 levels "female","male": 2 2 1 2} \CommentTok{\#\textgreater{} $ age : num 18 20 47 49} \CommentTok{\#\textgreater{} $ height: num 172 180 167 183} \end{Highlighting} \end{Shaded} To sum up, you can find the data structures corresponding types and classes below: \begin{longtable}[]{@{}lll@{}} \toprule Data structure & \texttt{typeof()} & \texttt{class()}\tabularnewline \midrule \endhead double vector & \texttt{"double"} & \texttt{"numeric"}\tabularnewline integer vector & \texttt{"integer"} & \texttt{"integer"}\tabularnewline logical vector & \texttt{"logical"} & \texttt{"logical"}\tabularnewline character vector & \texttt{"character"} & \texttt{"character"}\tabularnewline double matrix & \texttt{"double"} & \texttt{"matrix"}\tabularnewline integer matrix & \texttt{"integer"} & \texttt{"matrix"}\tabularnewline logical matrix & \texttt{"logical"} & \texttt{"matrix"}\tabularnewline character matrix & \texttt{"character"} & \texttt{"matrix"}\tabularnewline double array & \texttt{"double"} & \texttt{"matrix"\ "array"}\tabularnewline integer array & \texttt{"integer"} & \texttt{"matrix"\ "array"}\tabularnewline logical array & \texttt{"logical"} & \texttt{"matrix"\ "array"}\tabularnewline character array & \texttt{"character"} & \texttt{"matrix"\ "array"}\tabularnewline factor & \texttt{"integer"} & \texttt{"factor"}\tabularnewline list & \texttt{"list"} & \texttt{"list"}\tabularnewline data frame & \texttt{"list"} & \texttt{"data.frame"}\tabularnewline \bottomrule \end{longtable} Table above is based on following code: \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# double vector {-}{-}{-}{-}} \NormalTok{x }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{, }\DecValTok{2}\NormalTok{)} \FunctionTok{typeof}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "double"} \FunctionTok{class}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "numeric"} \CommentTok{\# integer vector {-}{-}{-}{-}} \NormalTok{x }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(1L, 2L)} \FunctionTok{typeof}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "integer"} \FunctionTok{class}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "integer"} \CommentTok{\# logical vector {-}{-}{-}{-}} \NormalTok{x }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\ConstantTok{TRUE}\NormalTok{, }\ConstantTok{FALSE}\NormalTok{)} \FunctionTok{typeof}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "logical"} \FunctionTok{class}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "logical"} \CommentTok{\# character vector {-}{-}{-}{-}} \NormalTok{x }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"Paul"}\NormalTok{, }\StringTok{"Jane"}\NormalTok{)} \FunctionTok{typeof}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "character"} \FunctionTok{class}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "character"} \CommentTok{\# double matrix {-}{-}{-}{-}} \NormalTok{x }\OtherTok{\textless{}{-}} \FunctionTok{matrix}\NormalTok{(}\FunctionTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{, }\DecValTok{2}\NormalTok{), }\AttributeTok{nrow=}\DecValTok{2}\NormalTok{, }\AttributeTok{ncol=}\DecValTok{2}\NormalTok{)} \FunctionTok{typeof}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "double"} \FunctionTok{class}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "matrix" "array"} \CommentTok{\# integer matrix {-}{-}{-}{-}} \NormalTok{x }\OtherTok{\textless{}{-}} \FunctionTok{matrix}\NormalTok{(}\FunctionTok{c}\NormalTok{(1L, 2L), 
}\AttributeTok{nrow=}\DecValTok{2}\NormalTok{, }\AttributeTok{ncol=}\DecValTok{2}\NormalTok{)} \FunctionTok{typeof}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "integer"} \FunctionTok{class}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "matrix" "array"} \CommentTok{\# logical matrix {-}{-}{-}{-}} \NormalTok{x }\OtherTok{\textless{}{-}} \FunctionTok{matrix}\NormalTok{(}\FunctionTok{c}\NormalTok{(}\ConstantTok{TRUE}\NormalTok{, }\ConstantTok{FALSE}\NormalTok{), }\AttributeTok{nrow=}\DecValTok{2}\NormalTok{, }\AttributeTok{ncol=}\DecValTok{2}\NormalTok{)} \FunctionTok{typeof}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "logical"} \FunctionTok{class}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "matrix" "array"} \CommentTok{\# character matrix {-}{-}{-}{-}} \NormalTok{x }\OtherTok{\textless{}{-}} \FunctionTok{matrix}\NormalTok{(}\FunctionTok{c}\NormalTok{(}\StringTok{"Paul"}\NormalTok{, }\StringTok{"Jane"}\NormalTok{), }\AttributeTok{nrow=}\DecValTok{2}\NormalTok{, }\AttributeTok{ncol=}\DecValTok{2}\NormalTok{)} \FunctionTok{typeof}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "character"} \FunctionTok{class}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "matrix" "array"} \CommentTok{\# double array {-}{-}{-}{-}} \NormalTok{x }\OtherTok{\textless{}{-}} \FunctionTok{array}\NormalTok{(}\FunctionTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{2}\NormalTok{), }\AttributeTok{dim =} \FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{,}\DecValTok{3}\NormalTok{,}\DecValTok{2}\NormalTok{))} \FunctionTok{typeof}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "double"} \FunctionTok{class}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "array"} \CommentTok{\# integer array {-}{-}{-}{-}} \NormalTok{x }\OtherTok{\textless{}{-}} \FunctionTok{array}\NormalTok{(}\FunctionTok{c}\NormalTok{(1L,2L), }\AttributeTok{dim =} \FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{,}\DecValTok{3}\NormalTok{,}\DecValTok{2}\NormalTok{))} \FunctionTok{typeof}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "integer"} \FunctionTok{class}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "array"} \CommentTok{\# logical array {-}{-}{-}{-}} \NormalTok{x }\OtherTok{\textless{}{-}} \FunctionTok{array}\NormalTok{(}\FunctionTok{c}\NormalTok{(T,F), }\AttributeTok{dim =} \FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{,}\DecValTok{3}\NormalTok{,}\DecValTok{2}\NormalTok{))} \FunctionTok{typeof}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "logical"} \FunctionTok{class}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "array"} \CommentTok{\# character array {-}{-}{-}{-}} \NormalTok{x }\OtherTok{\textless{}{-}} \FunctionTok{array}\NormalTok{(}\FunctionTok{c}\NormalTok{(}\StringTok{"Paul"}\NormalTok{, }\StringTok{"Jane"}\NormalTok{), }\AttributeTok{dim =} \FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{,}\DecValTok{3}\NormalTok{,}\DecValTok{2}\NormalTok{))} \FunctionTok{typeof}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "character"} \FunctionTok{class}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "array"} \CommentTok{\# factor {-}{-}{-}{-}} \NormalTok{x }\OtherTok{\textless{}{-}} \FunctionTok{factor}\NormalTok{(}\FunctionTok{c}\NormalTok{(}\StringTok{"Paul"}\NormalTok{, }\StringTok{"Jane"}\NormalTok{))} \FunctionTok{typeof}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "integer"} \FunctionTok{class}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "factor"} \CommentTok{\# list {-}{-}{-}{-}} \NormalTok{x }\OtherTok{\textless{}{-}} 
\FunctionTok{list}\NormalTok{(}\FunctionTok{c}\NormalTok{(}\StringTok{"Paul"}\NormalTok{, }\StringTok{"Jane"}\NormalTok{), }\FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{,}\DecValTok{3}\NormalTok{,}\DecValTok{2}\NormalTok{))} \FunctionTok{typeof}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "list"} \FunctionTok{class}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "list"} \CommentTok{\# data frame {-}{-}{-}{-}} \NormalTok{x }\OtherTok{\textless{}{-}} \FunctionTok{data.frame}\NormalTok{(}\AttributeTok{name=}\FunctionTok{c}\NormalTok{(}\StringTok{"Paul"}\NormalTok{, }\StringTok{"Jane"}\NormalTok{), }\AttributeTok{score=}\FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{,}\DecValTok{3}\NormalTok{))} \FunctionTok{typeof}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "list"} \FunctionTok{class}\NormalTok{(x)} \CommentTok{\#\textgreater{} [1] "data.frame"} \end{Highlighting} \end{Shaded} \hypertarget{functions}{% \section{Functions}\label{functions}} You have already used a number of functions in the previous chapters, including \texttt{c()}, \texttt{str()}, \texttt{matrix()}, \texttt{length()}, and \texttt{factor()}. However, before we look many more useful functions, it is handy to know how to work with functions in R. When you call a function in R, you use the function name with a number of arguments, which you give inside parentheses to pass information to that function about how it should run and what data it should use. So how do you know what the arguments to a function are? You can either look in the help file---using \texttt{?functionName} or \texttt{help("functionName")} or you can use a function called \texttt{args()}, which will print the arguments to a function in the console. As an example of using a function, we will look at \texttt{sample()}. This function allows us to randomly sample a number of values from a vector of given values (this is the R way of selecting balls from an urn). So let's take a look at the arguments to this function: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{args}\NormalTok{(sample)} \CommentTok{\#\textgreater{} function (x, size, replace = FALSE, prob = NULL) } \CommentTok{\#\textgreater{} NULL} \end{Highlighting} \end{Shaded} You can see that we have four arguments to this function. You will notice that the first two are simply given as \texttt{x=} and \texttt{size=}, whereas the second two are followed by \texttt{=} value. This indicates that they have a default value, so we don't need to supply an alternative. Because \texttt{x=} and \texttt{size=} do not have a default, we have to tell R what value we want them to take. To know the purpose of the arguments, you will need to take a look at the help files, which will tell you more. \begin{Shaded} \begin{Highlighting}[] \NormalTok{?sample} \end{Highlighting} \end{Shaded} In this case, \texttt{x=} is the vector that we want to sample from and \texttt{size=} is the number of samples we want to take, whereas replace allows us to put values back and we can set the probability of each value with prob. When it comes to calling the function, we can supply the arguments in a number of ways. 
To start with, we can name all the arguments in full: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{sample}\NormalTok{(}\AttributeTok{x =} \FunctionTok{c}\NormalTok{(}\StringTok{"red"}\NormalTok{, }\StringTok{"yellow"}\NormalTok{, }\StringTok{"green"}\NormalTok{, }\StringTok{"blue"}\NormalTok{), }\AttributeTok{size =} \DecValTok{2}\NormalTok{, }\AttributeTok{replace =} \ConstantTok{FALSE}\NormalTok{, }\AttributeTok{prob =} \ConstantTok{NULL}\NormalTok{)} \CommentTok{\#\textgreater{} [1] "blue" "yellow"} \end{Highlighting} \end{Shaded} Because \texttt{replace=} and \texttt{prob=} have default values, this is the same as the following: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{sample}\NormalTok{(}\AttributeTok{x =} \FunctionTok{c}\NormalTok{(}\StringTok{"red"}\NormalTok{, }\StringTok{"yellow"}\NormalTok{, }\StringTok{"green"}\NormalTok{, }\StringTok{"blue"}\NormalTok{), }\AttributeTok{size =} \DecValTok{2}\NormalTok{)} \CommentTok{\#\textgreater{} [1] "yellow" "red"} \end{Highlighting} \end{Shaded} Using this form of complete naming of arguments, we can actually supply them in any order we like. Therefore, the preceding would do the same as this: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{sample}\NormalTok{(}\AttributeTok{size =} \DecValTok{2}\NormalTok{, }\AttributeTok{x =} \FunctionTok{c}\NormalTok{(}\StringTok{"red"}\NormalTok{, }\StringTok{"yellow"}\NormalTok{, }\StringTok{"green"}\NormalTok{, }\StringTok{"blue"}\NormalTok{))} \CommentTok{\#\textgreater{} [1] "blue" "yellow"} \end{Highlighting} \end{Shaded} It's worth remembering that when you actually run each of these lines, you will most likely get a different result because the function is randomly sampling from the vector \texttt{x}. If you provide all the arguments in the same order as the \texttt{args()} function gives them, you do not actually need to give the names of the arguments. Therefore, we can also say this: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{sample}\NormalTok{(}\FunctionTok{c}\NormalTok{(}\StringTok{"red"}\NormalTok{, }\StringTok{"yellow"}\NormalTok{, }\StringTok{"green"}\NormalTok{, }\StringTok{"blue"}\NormalTok{), }\DecValTok{2}\NormalTok{)} \CommentTok{\#\textgreater{} [1] "red" "blue"} \end{Highlighting} \end{Shaded} In reality, you will often see, and use, a combination of naming and ordering of arguments because you will tend to remember what should come first but not the order of other arguments. Therefore, you might see something like the following: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{sample}\NormalTok{(}\FunctionTok{c}\NormalTok{(}\StringTok{"red"}\NormalTok{, }\StringTok{"yellow"}\NormalTok{, }\StringTok{"green"}\NormalTok{, }\StringTok{"blue"}\NormalTok{), }\AttributeTok{size =} \DecValTok{2}\NormalTok{, }\AttributeTok{replace =} \ConstantTok{TRUE}\NormalTok{)} \CommentTok{\#\textgreater{} [1] "red" "red"} \end{Highlighting} \end{Shaded} \hypertarget{vectorized-operations}{% \section{Vectorized operations}\label{vectorized-operations}} Vectorized operations, is one of the features of the R language that make it, that makes it easy to use. It makes very, kind of, nice to write code, without having to do lots of looping, and things like that. 
As we've seen earlier, we can add two numeric constants: \begin{Shaded} \begin{Highlighting}[] \DecValTok{2} \SpecialCharTok{+} \DecValTok{3} \CommentTok{\#\textgreater{} [1] 5} \NormalTok{x }\OtherTok{\textless{}{-}} \DecValTok{2} \NormalTok{y }\OtherTok{\textless{}{-}} \DecValTok{3} \NormalTok{x }\SpecialCharTok{+}\NormalTok{ y} \CommentTok{\#\textgreater{} [1] 5} \end{Highlighting} \end{Shaded} The idea with vectorized operations is that things can happen in parallel. For example, suppose we got two vectors here \texttt{x} and \texttt{y}. \texttt{x} is the sequence 1 through 4 and \texttt{y} is the sequence 11 through 14. \begin{Shaded} \begin{Highlighting}[] \NormalTok{x }\OtherTok{\textless{}{-}} \DecValTok{1}\SpecialCharTok{:}\DecValTok{4} \NormalTok{y }\OtherTok{\textless{}{-}} \DecValTok{11}\SpecialCharTok{:}\DecValTok{14} \NormalTok{x} \CommentTok{\#\textgreater{} [1] 1 2 3 4} \NormalTok{y} \CommentTok{\#\textgreater{} [1] 11 12 13 14} \NormalTok{x }\SpecialCharTok{+}\NormalTok{ y} \CommentTok{\#\textgreater{} [1] 12 14 16 18} \end{Highlighting} \end{Shaded} And we want to add the two vectors together. Now, when we say we want to add them, what we mean is we want to add the first element of \texttt{x} to the first element of \texttt{y}, the second element of \texttt{x} to the second element of \texttt{y}, etc., the third element to the third element. It adds 1 to 11, 2 to 12, 3 to 13, and 4 to 14, so you get the vector 12, 14, 16, 18. Similarly, you can use the greater than (\texttt{\textgreater{}}), or less than symbols (\texttt{\textless{}}) to, give you logical vectors. \begin{Shaded} \begin{Highlighting}[] \NormalTok{x }\SpecialCharTok{\textgreater{}}\NormalTok{ y} \CommentTok{\#\textgreater{} [1] FALSE FALSE FALSE FALSE} \NormalTok{x }\SpecialCharTok{\textless{}}\NormalTok{ y} \CommentTok{\#\textgreater{} [1] TRUE TRUE TRUE TRUE} \end{Highlighting} \end{Shaded} Suppose, we have a new \texttt{y} vector with only two elements, and we want to add them together. \begin{Shaded} \begin{Highlighting}[] \NormalTok{x }\OtherTok{\textless{}{-}} \DecValTok{1}\SpecialCharTok{:}\DecValTok{4} \NormalTok{y }\OtherTok{\textless{}{-}} \DecValTok{11}\SpecialCharTok{:}\DecValTok{12} \NormalTok{x} \CommentTok{\#\textgreater{} [1] 1 2 3 4} \NormalTok{y} \CommentTok{\#\textgreater{} [1] 11 12} \NormalTok{x }\SpecialCharTok{+}\NormalTok{ y} \CommentTok{\#\textgreater{} [1] 12 14 14 16} \end{Highlighting} \end{Shaded} There is an important rule in R, \emph{recycling rule}: if two vectors are of unequal length, the shorter one will be recycled in order to match the longer vector. For example, our two vectors \texttt{x} and \texttt{y} have different lengths, and their sum is computed by recycling values of the shorter vector \texttt{y}. In this case, when we say we want to add them, what we mean is we want to add 1 to 11, 2 to 12, 3 to 11, and 4 to 12, so you get the vector 12, 14, 14, 16. Another example is \texttt{x\ \textgreater{}\ 2}. So well \texttt{x} is actually a vector of 4 numbers. So, which number are you comparing to 2? According to the recycling rule, the vectorized operation compares all the numbers to 2, and it gives you a vector of falses and trues depending on which numbers happen to be bigger than 2. \begin{Shaded} \begin{Highlighting}[] \NormalTok{x }\SpecialCharTok{\textgreater{}} \DecValTok{2} \CommentTok{\#\textgreater{} [1] FALSE FALSE TRUE TRUE} \end{Highlighting} \end{Shaded} Finally, suppose, we have a \texttt{y} vector with three elements. 
\begin{Shaded} \begin{Highlighting}[] \NormalTok{x }\OtherTok{\textless{}{-}} \DecValTok{1}\SpecialCharTok{:}\DecValTok{4} \NormalTok{y }\OtherTok{\textless{}{-}} \DecValTok{11}\SpecialCharTok{:}\DecValTok{13} \NormalTok{x} \CommentTok{\#\textgreater{} [1] 1 2 3 4} \NormalTok{y} \CommentTok{\#\textgreater{} [1] 11 12 13} \NormalTok{x }\SpecialCharTok{+}\NormalTok{ y} \CommentTok{\#\textgreater{} [1] 12 14 16 15} \end{Highlighting} \end{Shaded} As you can see, we get a warning message: the length of vector \texttt{x} is not multiple of length of \texttt{y}. In this case, when we say we want to add them, what we mean is we want to add 1 to 11, 2 to 12, 3 to 13, and 4 to 11, so you get the vector 12, 14, 16, 15. \hypertarget{creating-date-sequences}{% \section{Creating date sequences}\label{creating-date-sequences}} \hypertarget{creating-a-sequence-of-numeric-values}{% \subsection{Creating a sequence of numeric values}\label{creating-a-sequence-of-numeric-values}} As we have seen earlier, the colon (\texttt{:}) operator in syntax \texttt{from:to} generates a sequence from \texttt{from=} to \texttt{to=} in steps of 1 or -1. \begin{Shaded} \begin{Highlighting}[] \DecValTok{1}\SpecialCharTok{:}\DecValTok{10} \CommentTok{\#\textgreater{} [1] 1 2 3 4 5 6 7 8 9 10} \DecValTok{11}\SpecialCharTok{:{-}}\DecValTok{2} \CommentTok{\#\textgreater{} [1] 11 10 9 8 7 6 5 4 3 2 1 0 {-}1 {-}2} \FloatTok{1.2}\SpecialCharTok{:}\DecValTok{10} \CommentTok{\#\textgreater{} [1] 1.2 2.2 3.2 4.2 5.2 6.2 7.2 8.2 9.2} \end{Highlighting} \end{Shaded} A more general way of performing the same operation is with the \texttt{seq()} function. The first two arguments to \texttt{seq()} are the starting and ending values, and the default gap is one. Therefore, the following lines are equivalent: \begin{Shaded} \begin{Highlighting}[] \DecValTok{1}\SpecialCharTok{:}\DecValTok{10} \CommentTok{\#\textgreater{} [1] 1 2 3 4 5 6 7 8 9 10} \FunctionTok{seq}\NormalTok{(}\AttributeTok{from =} \DecValTok{1}\NormalTok{, }\AttributeTok{to =} \DecValTok{10}\NormalTok{)} \CommentTok{\#\textgreater{} [1] 1 2 3 4 5 6 7 8 9 10} \end{Highlighting} \end{Shaded} The advantage of using the \texttt{seq()} function is that it has an additional argument, \texttt{by=}, that allows you to specify the gap between consecutive sequence values, as shown in the following examples: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{seq}\NormalTok{(}\AttributeTok{from =} \DecValTok{1}\NormalTok{, }\AttributeTok{to =} \DecValTok{10}\NormalTok{, }\AttributeTok{by =} \FloatTok{0.5}\NormalTok{) }\CommentTok{\# Sequence from 1 to 10 by 0.5} \CommentTok{\#\textgreater{} [1] 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 5.5 6.0 6.5} \CommentTok{\#\textgreater{} [13] 7.0 7.5 8.0 8.5 9.0 9.5 10.0} \FunctionTok{seq}\NormalTok{(}\AttributeTok{from =} \DecValTok{2}\NormalTok{, }\AttributeTok{to =} \DecValTok{20}\NormalTok{, }\AttributeTok{by =} \DecValTok{2}\NormalTok{) }\CommentTok{\# Sequence from 2 to 20 by 2} \CommentTok{\#\textgreater{} [1] 2 4 6 8 10 12 14 16 18 20} \FunctionTok{seq}\NormalTok{(}\AttributeTok{from =} \DecValTok{5}\NormalTok{, }\AttributeTok{to =} \SpecialCharTok{{-}}\DecValTok{5}\NormalTok{, }\AttributeTok{by =} \SpecialCharTok{{-}}\DecValTok{2}\NormalTok{) }\CommentTok{\# Sequence from 5 to {-}5 by {-}2} \CommentTok{\#\textgreater{} [1] 5 3 1 {-}1 {-}3 {-}5} \end{Highlighting} \end{Shaded} These examples illustrate some simple sequences of values. 
However, let's consider the following examples, where we create a sequence of values from 1.3 to 8.4 by 0.3: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{seq}\NormalTok{(}\AttributeTok{from =} \FloatTok{1.3}\NormalTok{, }\AttributeTok{to =} \FloatTok{8.4}\NormalTok{, }\AttributeTok{by =} \FloatTok{0.3}\NormalTok{) }\CommentTok{\# Sequence from 1.3 to 8.4 by 0.3} \CommentTok{\#\textgreater{} [1] 1.3 1.6 1.9 2.2 2.5 2.8 3.1 3.4 3.7 4.0 4.3 4.6 4.9 5.2 5.5} \CommentTok{\#\textgreater{} [16] 5.8 6.1 6.4 6.7 7.0 7.3 7.6 7.9 8.2} \end{Highlighting} \end{Shaded} In this example, note that the last value in the vector is 8.2, whereas we requested a sequence from 1.3 to 8.4. Of course, the reason that the last value is not precisely 8.4 is that the difference between the start and end of the sequence is not divisible by 0.3 (the specified ``gap''). If instead we wanted to create a sequence of values from a start point to a particular end point, we could specify a length of the output vector instead of the gap in consecutive sequence values: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{seq}\NormalTok{(}\AttributeTok{from =} \FloatTok{1.3}\NormalTok{, }\AttributeTok{to =} \FloatTok{8.4}\NormalTok{, }\AttributeTok{length.out =} \DecValTok{10}\NormalTok{) }\CommentTok{\# Sequence of 10 values from 1.3 to 8.4} \CommentTok{\#\textgreater{} [1] 1.300000 2.088889 2.877778 3.666667 4.455556 5.244444} \CommentTok{\#\textgreater{} [7] 6.033333 6.822222 7.611111 8.400000} \end{Highlighting} \end{Shaded} To sum it up, to create a sequence of element we can leverage the \texttt{seq()} function. As with numeric vectors, you have to specify at least three of the four arguments (\texttt{from=}, \texttt{to=}, \texttt{by=}, and \texttt{length.out=}). \hypertarget{creating-a-sequence-of-repeated-values}{% \subsection{Creating a Sequence of Repeated Values}\label{creating-a-sequence-of-repeated-values}} We can use the \texttt{rep()} function in R to create a vector containing repeated values. The first two arguments to the \texttt{rep()} function are the value(s) to repeat and the number of times to repeat the value(s), as shown here: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{rep}\NormalTok{(}\AttributeTok{x =} \StringTok{"Hello"}\NormalTok{, }\AttributeTok{times =} \DecValTok{5}\NormalTok{) }\CommentTok{\# Repeat “Hello” 5 times} \CommentTok{\#\textgreater{} [1] "Hello" "Hello" "Hello" "Hello" "Hello"} \end{Highlighting} \end{Shaded} In the last example, we are repeating a single value, but the first argument to \texttt{rep()} could be a vector of values. \begin{Shaded} \begin{Highlighting}[] \NormalTok{x }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{, }\DecValTok{2}\NormalTok{, }\DecValTok{3}\NormalTok{)} \FunctionTok{rep}\NormalTok{(x, }\AttributeTok{times =} \DecValTok{5}\NormalTok{) }\CommentTok{\# Repeat the x vector 5 times} \CommentTok{\#\textgreater{} [1] 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3} \end{Highlighting} \end{Shaded} We can further simplify this example as follows: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{rep}\NormalTok{(}\DecValTok{1}\SpecialCharTok{:}\DecValTok{3}\NormalTok{, }\AttributeTok{times =} \DecValTok{5}\NormalTok{) }\CommentTok{\# Repeat the x vector 5 times} \CommentTok{\#\textgreater{} [1] 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3} \end{Highlighting} \end{Shaded} In these examples, we repeat a series of values a specific number of times. 
Alternatively, we can repeat each of the values a specified number of times by supplying a vector value for the second argument the same length as that in the first argument: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{rep}\NormalTok{(}\AttributeTok{x =} \FunctionTok{c}\NormalTok{(}\StringTok{"A"}\NormalTok{, }\StringTok{"B"}\NormalTok{, }\StringTok{"C"}\NormalTok{), }\AttributeTok{times =} \FunctionTok{c}\NormalTok{(}\DecValTok{4}\NormalTok{, }\DecValTok{1}\NormalTok{, }\DecValTok{3}\NormalTok{))} \CommentTok{\#\textgreater{} [1] "A" "A" "A" "A" "B" "C" "C" "C"} \end{Highlighting} \end{Shaded} In this example, we repeat ``A'' four times, ``B'' once, and ``C'' three times. Using this same approach, we can replace each value of a vector a specific number of times, as shown here: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{rep}\NormalTok{(}\AttributeTok{x =} \FunctionTok{c}\NormalTok{(}\StringTok{"A"}\NormalTok{, }\StringTok{"B"}\NormalTok{, }\StringTok{"C"}\NormalTok{), }\AttributeTok{times =} \FunctionTok{c}\NormalTok{(}\DecValTok{3}\NormalTok{, }\DecValTok{3}\NormalTok{, }\DecValTok{3}\NormalTok{))} \CommentTok{\#\textgreater{} [1] "A" "A" "A" "B" "B" "B" "C" "C" "C"} \end{Highlighting} \end{Shaded} Alternatively, because the second input is a repeated set of values, this could be written as follows: Click here to view code image \begin{Shaded} \begin{Highlighting}[] \FunctionTok{rep}\NormalTok{(}\AttributeTok{x =} \FunctionTok{c}\NormalTok{(}\StringTok{"A"}\NormalTok{, }\StringTok{"B"}\NormalTok{, }\StringTok{"C"}\NormalTok{), }\AttributeTok{each =} \DecValTok{3}\NormalTok{)} \CommentTok{\#\textgreater{} [1] "A" "A" "A" "B" "B" "B" "C" "C" "C"} \end{Highlighting} \end{Shaded} As you can see, the \texttt{rep()} function can be used to create a variety of vectors with repeated sequences. Let's quickly recap the three ways of using rep, as illustrated in this section: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{rep}\NormalTok{(}\AttributeTok{x =} \FunctionTok{c}\NormalTok{(}\StringTok{"A"}\NormalTok{, }\StringTok{"B"}\NormalTok{, }\StringTok{"C"}\NormalTok{), }\AttributeTok{times =} \DecValTok{3}\NormalTok{) }\CommentTok{\# Repeat the vector 3 times} \CommentTok{\#\textgreater{} [1] "A" "B" "C" "A" "B" "C" "A" "B" "C"} \FunctionTok{rep}\NormalTok{(}\AttributeTok{x =} \FunctionTok{c}\NormalTok{(}\StringTok{"A"}\NormalTok{, }\StringTok{"B"}\NormalTok{, }\StringTok{"C"}\NormalTok{), }\AttributeTok{times =} \FunctionTok{c}\NormalTok{(}\DecValTok{4}\NormalTok{, }\DecValTok{1}\NormalTok{, }\DecValTok{3}\NormalTok{)) }\CommentTok{\# Repeat each value a specific number} \CommentTok{\#\textgreater{} [1] "A" "A" "A" "A" "B" "C" "C" "C"} \FunctionTok{rep}\NormalTok{(}\AttributeTok{x =} \FunctionTok{c}\NormalTok{(}\StringTok{"A"}\NormalTok{, }\StringTok{"B"}\NormalTok{, }\StringTok{"C"}\NormalTok{), }\AttributeTok{each =} \DecValTok{3}\NormalTok{) }\CommentTok{\# Repeat each value 3 times} \CommentTok{\#\textgreater{} [1] "A" "A" "A" "B" "B" "B" "C" "C" "C"} \end{Highlighting} \end{Shaded} \hypertarget{sequential-names}{% \subsection{Sequential names}\label{sequential-names}} Finally, you can create sequential names from a series of strings or numeric values using the \texttt{paste()} function. Let's say we have 10 survey questions, or items, and we want the names of the items to be sequential so they reflect their order in which the respondents were exposed to them. 
We can create the prefix of the names and a sequence of values to be the suffix: \begin{Shaded} \begin{Highlighting}[] \NormalTok{prefix }\OtherTok{\textless{}{-}} \StringTok{"survey.item"} \NormalTok{suffix }\OtherTok{\textless{}{-}} \DecValTok{1}\SpecialCharTok{:}\DecValTok{10} \end{Highlighting} \end{Shaded} We can create a vector which takes the prefix and attaches the suffix as a character string. Note; there are two examples below. The first contains no separator (\texttt{sep\ =\ ""}) between the prefix and suffix; the second example contains a period as the separator (\texttt{sep\ =\ "."}). \begin{Shaded} \begin{Highlighting}[] \FunctionTok{paste}\NormalTok{(prefix, suffix, }\AttributeTok{sep=}\StringTok{""}\NormalTok{)} \CommentTok{\#\textgreater{} [1] "survey.item1" "survey.item2" "survey.item3" } \CommentTok{\#\textgreater{} [4] "survey.item4" "survey.item5" "survey.item6" } \CommentTok{\#\textgreater{} [7] "survey.item7" "survey.item8" "survey.item9" } \CommentTok{\#\textgreater{} [10] "survey.item10"} \FunctionTok{paste}\NormalTok{(prefix, suffix, }\AttributeTok{sep=}\StringTok{"."}\NormalTok{)} \CommentTok{\#\textgreater{} [1] "survey.item.1" "survey.item.2" "survey.item.3" } \CommentTok{\#\textgreater{} [4] "survey.item.4" "survey.item.5" "survey.item.6" } \CommentTok{\#\textgreater{} [7] "survey.item.7" "survey.item.8" "survey.item.9" } \CommentTok{\#\textgreater{} [10] "survey.item.10"} \end{Highlighting} \end{Shaded} We can simplify this example as follows: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{paste}\NormalTok{(}\StringTok{"survey.item"}\NormalTok{, }\DecValTok{1}\SpecialCharTok{:}\DecValTok{10}\NormalTok{, }\AttributeTok{sep=}\StringTok{"."}\NormalTok{)} \CommentTok{\#\textgreater{} [1] "survey.item.1" "survey.item.2" "survey.item.3" } \CommentTok{\#\textgreater{} [4] "survey.item.4" "survey.item.5" "survey.item.6" } \CommentTok{\#\textgreater{} [7] "survey.item.7" "survey.item.8" "survey.item.9" } \CommentTok{\#\textgreater{} [10] "survey.item.10"} \end{Highlighting} \end{Shaded} We can concatenate two or more vectors: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{paste}\NormalTok{(}\DecValTok{5}\SpecialCharTok{:}\DecValTok{1}\NormalTok{, }\StringTok{"cell"}\NormalTok{, }\DecValTok{1}\SpecialCharTok{:}\DecValTok{5}\NormalTok{ , }\AttributeTok{sep=}\StringTok{"."}\NormalTok{)} \CommentTok{\#\textgreater{} [1] "5.cell.1" "4.cell.2" "3.cell.3" "2.cell.4" "1.cell.5"} \end{Highlighting} \end{Shaded} \hypertarget{subsetting}{% \section{Subsetting}\label{subsetting}} In this section, we look at the ways in which to extract subsets of data from an object. We can achieve this using square brackets (\texttt{{[}\ {]}}), double square brackets (\texttt{{[}{[}\ {]}{]}}) and dollar sign (\texttt{\$}). There are three operators that can be used to extract subsets of R objects. \begin{longtable}[]{@{}ll@{}} \toprule \begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.21}}\raggedright Operator\strut \end{minipage} & \begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.79}}\raggedright Description\strut \end{minipage}\tabularnewline \midrule \endhead \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.21}}\raggedright \texttt{{[}}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.79}}\raggedright Always returns an object of the same class as the original. 
It can be used to select multiple elements of an object.\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.21}}\raggedright \texttt{{[}{[}}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.79}}\raggedright Extracts elements of a list or a data frame. It can only be used to extract a single element and the class of the returned object will not necessarily be a list or data frame.\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.21}}\raggedright \texttt{\$}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.79}}\raggedright Extract elements of a list or data frame by literal name.\strut \end{minipage}\tabularnewline \bottomrule \end{longtable} Different data structures cab be used vary index operators and index vectors, as shown below: \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# subsetting with []} \NormalTok{obj.vector[index.vector]} \NormalTok{obj.factor[index.vector]} \NormalTok{obj.list[index.vector]} \NormalTok{obj.matrix[index.vector}\FloatTok{.1}\NormalTok{, index.vector}\FloatTok{.2}\NormalTok{]} \NormalTok{obj.array}\FloatTok{.3}\NormalTok{D[index.vector}\FloatTok{.1}\NormalTok{, index.vector}\FloatTok{.2}\NormalTok{, index.vector}\FloatTok{.3}\NormalTok{]} \NormalTok{obj.data.frame[index.vector]} \NormalTok{obj.data.frame[index.vector}\FloatTok{.1}\NormalTok{, index.vector}\FloatTok{.2}\NormalTok{]} \CommentTok{\# subsetting with [[]]} \NormalTok{obj.vector[[single.index]]} \NormalTok{obj.factor[[single.index]]} \NormalTok{obj.list[[single.index]]} \NormalTok{obj.data.frame[[single.index]]} \CommentTok{\# subsetting with $} \NormalTok{obj.list}\SpecialCharTok{$}\NormalTok{element.name} \NormalTok{obj.data.frame}\SpecialCharTok{$}\NormalTok{element.name} \end{Highlighting} \end{Shaded} As with index vectors, you can put one of five input types in the square brackets (\texttt{{[}\ {]}}), as shown below: \begin{longtable}[]{@{}ll@{}} \toprule \begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.40}}\raggedright Index vectors\strut \end{minipage} & \begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.60}}\raggedright Effect\strut \end{minipage}\tabularnewline \midrule \endhead \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.40}}\raggedright Blank\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.60}}\raggedright All values are returned\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.40}}\raggedright A vector of positive integers\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.60}}\raggedright Used as an index to return\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.40}}\raggedright A vector of negative integers\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.60}}\raggedright Used as an index to omit\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.40}}\raggedright A vector of logical values\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.60}}\raggedright Only corresponding \texttt{TRUE} elements are returned\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.40}}\raggedright A vector of character values\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * 
\real{0.60}}\raggedright Refers to the names of element to return\strut \end{minipage}\tabularnewline \bottomrule \end{longtable} Single index can be an integer or a string. To illustrate the subsetting of objects, we will discuss the vector and data frame subsetting. \hypertarget{subsetting-vector}{% \subsection{Subsetting vector}\label{subsetting-vector}} We can index any object in R. We start the vector, then we are moving on to data frame. Subsetting basically comes down to selecting parts of your vector to end up with a new vector, which is a subset of the original vector. We have a \texttt{name} vector with names of observed people. \begin{Shaded} \begin{Highlighting}[] \NormalTok{name }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"Paul"}\NormalTok{, }\StringTok{"Jane"}\NormalTok{, }\StringTok{"Mark"}\NormalTok{, }\StringTok{"Ann"}\NormalTok{)} \end{Highlighting} \end{Shaded} After Ctrl+Enter, we can get the whole vector. Suppose you want to select the first element from this vector, corresponding to the first person's name. You can use square brackets \texttt{{[}{]}} for this. \begin{Shaded} \begin{Highlighting}[] \NormalTok{name[}\DecValTok{1}\NormalTok{]} \CommentTok{\#\textgreater{} [1] "Paul"} \end{Highlighting} \end{Shaded} The number one inside the square brackets indicates that you want to get the first element from the \texttt{name} vector. The result is again a vector, because a single string is actually a vector of length 1. This new vector contains the string \texttt{"Paul"}. If you instead want to select the third element, corresponding to third person's name, you could code remain followed by 3 in square brackets. \begin{Shaded} \begin{Highlighting}[] \NormalTok{name[}\DecValTok{3}\NormalTok{]} \CommentTok{\#\textgreater{} [1] "Mark"} \end{Highlighting} \end{Shaded} Suppose now you want to select the elements in the vector that give the first three people's names. Instead of using a single number inside the square brackets, you can use a vector to specify which indices you want to select. You use vector containing 1, 2 and 3 inside the square brackets. \begin{Shaded} \begin{Highlighting}[] \NormalTok{name[}\FunctionTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{, }\DecValTok{2}\NormalTok{, }\DecValTok{3}\NormalTok{)] }\CommentTok{\# or name[1:3]} \CommentTok{\#\textgreater{} [1] "Paul" "Jane" "Mark"} \end{Highlighting} \end{Shaded} How the resulting vector is ordered depends on the order of the indices inside the selection vector. If you change \texttt{c(1,\ 2,\ 3)} to \texttt{c(2,\ 3,\ 1)}, you will get a vector where the second person comes first. \begin{Shaded} \begin{Highlighting}[] \NormalTok{name[}\FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{, }\DecValTok{3}\NormalTok{, }\DecValTok{1}\NormalTok{)]} \CommentTok{\#\textgreater{} [1] "Jane" "Mark" "Paul"} \end{Highlighting} \end{Shaded} As we mentioned, we can create regular sequences. For example the colon (\texttt{:}) operator can create the \texttt{c(1,\ 2,\ 3)} with \texttt{1:3}. Or construction \texttt{3:1} may be used to generate a sequence backwards. So, we can use these operation inside square brackets. 
\begin{Shaded} \begin{Highlighting}[] \NormalTok{name[}\DecValTok{1}\SpecialCharTok{:}\DecValTok{3}\NormalTok{]} \CommentTok{\#\textgreater{} [1] "Paul" "Jane" "Mark"} \NormalTok{name[}\DecValTok{3}\SpecialCharTok{:}\DecValTok{1}\NormalTok{]} \CommentTok{\#\textgreater{} [1] "Mark" "Jane" "Paul"} \end{Highlighting} \end{Shaded} \hypertarget{subsetting-data-frames}{% \subsection{Subsetting data frames}\label{subsetting-data-frames}} In data frames we can use single brackets with two indices inside, because data frame is two dimensional. First, print the whole data frame. \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# creating inline data frame} \NormalTok{d }\OtherTok{\textless{}{-}} \FunctionTok{data.frame}\NormalTok{(}\AttributeTok{name=}\FunctionTok{c}\NormalTok{(}\StringTok{"Paul"}\NormalTok{, }\StringTok{"Jane"}\NormalTok{, }\StringTok{"Mark"}\NormalTok{, }\StringTok{"Ann"}\NormalTok{), } \AttributeTok{gender=}\FunctionTok{c}\NormalTok{(}\StringTok{"male"}\NormalTok{, }\StringTok{"female"}\NormalTok{, }\StringTok{"male"}\NormalTok{, }\StringTok{"female"}\NormalTok{),} \AttributeTok{height=}\FunctionTok{c}\NormalTok{(}\DecValTok{184}\NormalTok{, }\DecValTok{167}\NormalTok{, }\DecValTok{111}\NormalTok{, }\DecValTok{172}\NormalTok{), } \AttributeTok{age=}\FunctionTok{c}\NormalTok{(32L, 19L, 13L, 78L), } \AttributeTok{child=}\FunctionTok{c}\NormalTok{(T, F, F, F), } \AttributeTok{cars=}\FunctionTok{c}\NormalTok{(}\DecValTok{0}\NormalTok{, }\DecValTok{2}\NormalTok{, }\DecValTok{1}\NormalTok{, }\DecValTok{2}\NormalTok{))} \NormalTok{d} \CommentTok{\#\textgreater{} name gender height age child cars} \CommentTok{\#\textgreater{} 1 Paul male 184 32 TRUE 0} \CommentTok{\#\textgreater{} 2 Jane female 167 19 FALSE 2} \CommentTok{\#\textgreater{} 3 Mark male 111 13 FALSE 1} \CommentTok{\#\textgreater{} 4 Ann female 172 78 FALSE 2} \end{Highlighting} \end{Shaded} To select the height of Jane, who is on row 2 in the data frame, you can use the single brackets with two indices inside. The row, index 2, comes first, and the column, index 3, comes second. They will be separated by comma. \begin{Shaded} \begin{Highlighting}[] \NormalTok{d[}\DecValTok{2}\NormalTok{, }\DecValTok{3}\NormalTok{]} \CommentTok{\#\textgreater{} [1] 167} \end{Highlighting} \end{Shaded} Indeed, Jane is 167 cm tall. You can also use the column names to refer to the columns of data frame. \begin{Shaded} \begin{Highlighting}[] \NormalTok{d[}\DecValTok{2}\NormalTok{, }\StringTok{"height"}\NormalTok{]} \CommentTok{\#\textgreater{} [1] 167} \end{Highlighting} \end{Shaded} Of course we select the height and age information on Jane and Mark. \begin{Shaded} \begin{Highlighting}[] \NormalTok{d[}\FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{,}\DecValTok{3}\NormalTok{), }\StringTok{"height"}\NormalTok{]} \CommentTok{\#\textgreater{} [1] 167 111} \end{Highlighting} \end{Shaded} And Of course we do the same on Paul and Ann. Additionally, we can add the name of the people to the end. \begin{Shaded} \begin{Highlighting}[] \NormalTok{d[}\FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{,}\DecValTok{3}\NormalTok{), }\FunctionTok{c}\NormalTok{(}\StringTok{"height"}\NormalTok{, }\StringTok{"name"}\NormalTok{)]} \CommentTok{\#\textgreater{} height name} \CommentTok{\#\textgreater{} 2 167 Jane} \CommentTok{\#\textgreater{} 3 111 Mark} \end{Highlighting} \end{Shaded} We can also choose to omit one of the two indices, to end up with an entire row or an entire column. 
If you want to have all information on Jane, you can use this command: \begin{Shaded} \begin{Highlighting}[] \NormalTok{d[}\DecValTok{2}\NormalTok{, ]} \CommentTok{\#\textgreater{} name gender height age child cars} \CommentTok{\#\textgreater{} 2 Jane female 167 19 FALSE 2} \end{Highlighting} \end{Shaded} The result is a data frame with a single observation, because there has to be a way to store the different types. On the other hand, to get the entire age column, you could use this command: \begin{Shaded} \begin{Highlighting}[] \NormalTok{d[, }\DecValTok{4}\NormalTok{]} \CommentTok{\#\textgreater{} [1] 32 19 13 78} \end{Highlighting} \end{Shaded} Here, the result is a vector, because columns contain elements of the same type. We can prevent drop dimensions with \texttt{drop=F}. \begin{Shaded} \begin{Highlighting}[] \NormalTok{d[, }\DecValTok{4}\NormalTok{, drop}\OtherTok{=}\NormalTok{F]} \CommentTok{\#\textgreater{} age} \CommentTok{\#\textgreater{} 1 32} \CommentTok{\#\textgreater{} 2 19} \CommentTok{\#\textgreater{} 3 13} \CommentTok{\#\textgreater{} 4 78} \end{Highlighting} \end{Shaded} Another way to select only one columns is the \texttt{\$} (dollar sing) operator. \begin{Shaded} \begin{Highlighting}[] \NormalTok{d}\SpecialCharTok{$}\NormalTok{cars} \CommentTok{\#\textgreater{} [1] 0 2 1 2} \end{Highlighting} \end{Shaded} \hypertarget{string-data}{% \section{String data}\label{string-data}} We can often find ourselves having to perform string manipulation tasks in R, including creation of character strings and searching for patterns in character strings. In this section, we look at some of the functions in the \emph{Base R} installation. \hypertarget{simple-character-manipulation}{% \subsection{Simple Character Manipulation}\label{simple-character-manipulation}} Some of the basic manipulations you'll want to perform are counting characters, extracting substrings, and combining elements to create or update a string. Let's start with counting characters. You do this using the \texttt{nchar()} function, simply providing the string that you are interested in: \begin{Shaded} \begin{Highlighting}[] \NormalTok{fruits }\OtherTok{\textless{}{-}} \StringTok{"apples oranges pears"} \FunctionTok{nchar}\NormalTok{(fruits)} \CommentTok{\#\textgreater{} [1] 20} \end{Highlighting} \end{Shaded} Notice that all characters are counted, including the spaces. To extract substrings, you use the \texttt{substring()} function. Here, you need to give the string along with the start and end points for the substring. You can extract multiple substrings by giving the vectors of the start and end points. \begin{Shaded} \begin{Highlighting}[] \FunctionTok{substring}\NormalTok{(}\AttributeTok{text =}\NormalTok{ fruits, }\AttributeTok{first =} \DecValTok{1}\NormalTok{, }\AttributeTok{last =} \DecValTok{6}\NormalTok{)} \CommentTok{\#\textgreater{} [1] "apples"} \NormalTok{fruits}\FloatTok{.2} \OtherTok{\textless{}{-}} \FunctionTok{substring}\NormalTok{(}\AttributeTok{text =}\NormalTok{ fruits, }\AttributeTok{first =} \FunctionTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{, }\DecValTok{8}\NormalTok{, }\DecValTok{16}\NormalTok{), }\AttributeTok{last =} \FunctionTok{c}\NormalTok{(}\DecValTok{6}\NormalTok{, }\DecValTok{14}\NormalTok{, }\DecValTok{20}\NormalTok{))} \NormalTok{fruits}\FloatTok{.2} \CommentTok{\#\textgreater{} [1] "apples" "oranges" "pears"} \end{Highlighting} \end{Shaded} Finally, you can create a character string from a series of strings or numeric values using the \texttt{paste()} function. 
You can provide as many strings and objects as you wish to the paste function and they will all be converted to character data and pasted together. Like with many R functions, you can pass vectors to the paste function. Here's an example: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{paste}\NormalTok{(}\DecValTok{5}\NormalTok{, }\StringTok{"apples"}\NormalTok{)} \CommentTok{\#\textgreater{} [1] "5 apples"} \NormalTok{nfruits }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\DecValTok{5}\NormalTok{, }\DecValTok{9}\NormalTok{, }\DecValTok{2}\NormalTok{)} \FunctionTok{paste}\NormalTok{(nfruits, fruits}\FloatTok{.2}\NormalTok{)} \CommentTok{\#\textgreater{} [1] "5 apples" "9 oranges" "2 pears"} \end{Highlighting} \end{Shaded} You can use the argument \texttt{sep=} to change the separator between the pasted strings, which as you can see in the preceding example is a space by default, like so: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{paste}\NormalTok{(fruits}\FloatTok{.2}\NormalTok{, nfruits, }\AttributeTok{sep =} \StringTok{" = "}\NormalTok{)} \CommentTok{\#\textgreater{} [1] "apples = 5" "oranges = 9" "pears = 2"} \end{Highlighting} \end{Shaded} \hypertarget{searching-and-replacing}{% \subsection{Searching and Replacing}\label{searching-and-replacing}} Two of the most useful functions for working with character data are the functions \texttt{grep()} and \texttt{gsub()}. These functions allow you to search elements of a vector for a particular pattern (\texttt{grep()}) and replace a particular pattern with a given string (\texttt{gsub()}). You search for patterns using regular expressions (that is, a pattern that describes the character string). Much more information on regular expressions can be found in the R help pages for the function \texttt{regex()}. If you are familiar with Perl expressions, you can use these along with the argument \texttt{perl\ =\ TRUE}. Let's start by looking at the function \texttt{grep()}. The first argument that we are going to give is the pattern to search for, which can be as simple as the string ``red''. The second argument will be the vector to search. \begin{Shaded} \begin{Highlighting}[] \NormalTok{colourStrings }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"green"}\NormalTok{, }\StringTok{"blue"}\NormalTok{, }\StringTok{"orange"}\NormalTok{, }\StringTok{"red"}\NormalTok{, }\StringTok{"yellow"}\NormalTok{, }\StringTok{"lightblue"}\NormalTok{, }\StringTok{"navyblue"}\NormalTok{, }\StringTok{"indianred"}\NormalTok{)} \FunctionTok{grep}\NormalTok{(}\AttributeTok{pattern =} \StringTok{"red"}\NormalTok{, }\AttributeTok{x =}\NormalTok{ colourStrings, }\AttributeTok{value =} \ConstantTok{TRUE}\NormalTok{)} \CommentTok{\#\textgreater{} [1] "red" "indianred"} \end{Highlighting} \end{Shaded} In this example, we have used an additional argument, \texttt{value=}. This allows us to return the actual values of the vector that include the pattern rather than simply the index of their position in the vector. 
Alternatives to \texttt{grep(values=TRUE)}: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{grep}\NormalTok{(}\AttributeTok{pattern =} \StringTok{"red"}\NormalTok{, }\AttributeTok{x =}\NormalTok{ colourStrings) }\CommentTok{\# returns a vector of the indices of the elements of x that yielded a match} \CommentTok{\#\textgreater{} [1] 4 8} \FunctionTok{grepl}\NormalTok{(}\AttributeTok{pattern =} \StringTok{"red"}\NormalTok{, }\AttributeTok{x =}\NormalTok{ colourStrings) }\CommentTok{\# returns a logical vector (match or not for each element of x)} \CommentTok{\#\textgreater{} [1] FALSE FALSE FALSE TRUE FALSE FALSE FALSE TRUE} \end{Highlighting} \end{Shaded} Some more examples of using the \texttt{grep()} function, with a variety of regular expressions, are shown below: \begin{Shaded} \begin{Highlighting}[] \NormalTok{colourStrings }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"green"}\NormalTok{, }\StringTok{"blue"}\NormalTok{, }\StringTok{"orange"}\NormalTok{, }\StringTok{"red"}\NormalTok{, }\StringTok{"yellow"}\NormalTok{, }\StringTok{"lightblue"}\NormalTok{, }\StringTok{"navyblue"}\NormalTok{, }\StringTok{"indianred"}\NormalTok{)} \FunctionTok{grep}\NormalTok{(}\StringTok{"\^{}red"}\NormalTok{, colourStrings, }\AttributeTok{value =} \ConstantTok{TRUE}\NormalTok{)} \CommentTok{\#\textgreater{} [1] "red"} \FunctionTok{grep}\NormalTok{(}\StringTok{"red$"}\NormalTok{, colourStrings, }\AttributeTok{value =} \ConstantTok{TRUE}\NormalTok{)} \CommentTok{\#\textgreater{} [1] "red" "indianred"} \FunctionTok{grep}\NormalTok{(}\StringTok{"r+"}\NormalTok{, colourStrings, }\AttributeTok{value =} \ConstantTok{TRUE}\NormalTok{)} \CommentTok{\#\textgreater{} [1] "green" "orange" "red" "indianred"} \FunctionTok{grep}\NormalTok{(}\StringTok{"e\{2\}"}\NormalTok{, colourStrings, }\AttributeTok{value =} \ConstantTok{TRUE}\NormalTok{)} \CommentTok{\#\textgreater{} [1] "green"} \end{Highlighting} \end{Shaded} You can see how the symbols \texttt{\^{}} and \texttt{\$} have been used to mark the start and end of the string. In the example in line 2, we are specifying that immediately following the start of the string is the pattern \texttt{"red"}, whereas in line 3 the string ends straight after the pattern \texttt{"red"}. The examples in lines 4 and 5 show how to specify that something must appear a given number of times. In line 4, the \texttt{+} indicates that the letter \texttt{r} should appear at least once in the string. In line 5, the \texttt{\{2\}} following the e indicates that there should be two occurrences of the letter. The \texttt{gsub()} function, which allows you to substitute a pattern for a value, is very similar, because you also use regular expressions to search for the pattern. The only additional information you need to give is what to substitute in its place. Here is an example: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{gsub}\NormalTok{(}\AttributeTok{pattern =} \StringTok{"red"}\NormalTok{, }\AttributeTok{replacement =} \StringTok{"brown"}\NormalTok{, }\AttributeTok{x =}\NormalTok{ colourStrings)} \CommentTok{\#\textgreater{} [1] "green" "blue" "orange" "brown" } \CommentTok{\#\textgreater{} [5] "yellow" "lightblue" "navyblue" "indianbrown"} \end{Highlighting} \end{Shaded} As with grep, you can use any regular expression to match the pattern you wish to replace. \hypertarget{packages-in-r}{% \section{Packages in R}\label{packages-in-r}} R's functionality is distributed among many \emph{packages}. 
Each has a certain focus; for example, the \textbf{stats} package contains functions that apply common statistical methods, and the \textbf{graphics} package has functions concerning plotting. When you download R, you automatically get a set of \emph{base} and \emph{recommended} packages, which can be seen in the ``library'' subdirectories of the R installation. \begin{Shaded} \begin{Highlighting}[] \FunctionTok{.libPaths}\NormalTok{() }\CommentTok{\# paths to packages} \CommentTok{\#\textgreater{} [1] "C:/Users/RStudio/Documents/R/win{-}library/4.0"} \CommentTok{\#\textgreater{} [2] "C:/Program Files/R/R{-}4.0.4/library"} \end{Highlighting} \end{Shaded} File paths of \texttt{.libPaths()} are used for getting or setting the library trees that R knows about (and hence uses when looking for packages). These core R packages represent a small subset of all the packages you can use with R. In fact, at the time of writing, there are more than 17000. These other packages we call \emph{other} packages, because you have to add them to R, from CRAN, Bioconductor or GitHub yourself. \includegraphics[width=1\linewidth]{img/baser_packages} \begin{Shaded} \begin{Highlighting}[] \NormalTok{pkg }\OtherTok{\textless{}{-}} \FunctionTok{installed.packages}\NormalTok{() } \FunctionTok{table}\NormalTok{(pkg[,}\StringTok{"Priority"}\NormalTok{], }\AttributeTok{useNA =} \StringTok{"ifany"}\NormalTok{) }\CommentTok{\# number of installed packages} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} base recommended \textless{}NA\textgreater{} } \CommentTok{\#\textgreater{} 14 15 1684} \end{Highlighting} \end{Shaded} As we can see, there are 14 base packages, 15 recommended packages in R, and I have 1684 other packages installed before. We can print the name of base and recommended packages: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{rownames}\NormalTok{(pkg)[pkg[,}\StringTok{"Priority"}\NormalTok{] }\SpecialCharTok{\%in\%} \StringTok{"base"}\NormalTok{] }\CommentTok{\# base packages} \CommentTok{\#\textgreater{} [1] "base" "compiler" "datasets" "graphics" "grDevices"} \CommentTok{\#\textgreater{} [6] "grid" "methods" "parallel" "splines" "stats" } \CommentTok{\#\textgreater{} [11] "stats4" "tcltk" "tools" "utils"} \FunctionTok{rownames}\NormalTok{(pkg)[pkg[,}\StringTok{"Priority"}\NormalTok{] }\SpecialCharTok{\%in\%} \StringTok{"recommended"}\NormalTok{] }\CommentTok{\# recommended packages} \CommentTok{\#\textgreater{} [1] "boot" "class" "cluster" "codetools" } \CommentTok{\#\textgreater{} [5] "foreign" "KernSmooth" "lattice" "MASS" } \CommentTok{\#\textgreater{} [9] "Matrix" "mgcv" "nlme" "nnet" } \CommentTok{\#\textgreater{} [13] "rpart" "spatial" "survival"} \end{Highlighting} \end{Shaded} Only a small subset of the installed packages is actually loaded when you start an R session. This helps reduce the start-up time and avoid a behavior known as masking. The \texttt{search()} function shows you which packages are loaded on your machine. 
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{search}\NormalTok{() }\CommentTok{\# loaded packages (with "package:" prefix)} \CommentTok{\#\textgreater{} [1] ".GlobalEnv" "package:dplyr" "package:MASS" } \CommentTok{\#\textgreater{} [4] "package:stats" "package:graphics" "package:grDevices"} \CommentTok{\#\textgreater{} [7] "package:utils" "package:datasets" "package:methods" } \CommentTok{\#\textgreater{} [10] "Autoloads" "package:base"} \end{Highlighting} \end{Shaded} During starting up the R, for examle the \textbf{base}, \textbf{methods}, \textbf{datasets}, and \textbf{utils} packages are loaded automatically. \hypertarget{load-packages}{% \subsection{Load packages}\label{load-packages}} To load any of installed packages, call the \texttt{library()} function. If R cannot find the specified package library, it will produce an error. For example \textbf{MASS} is a pre-installed package, part of the recommended packages. We can load it successfully. \begin{Shaded} \begin{Highlighting}[] \FunctionTok{library}\NormalTok{(MASS) }\CommentTok{\# load MASS package } \end{Highlighting} \end{Shaded} We can check the loaded packages, the return value of \texttt{search()} contains the \texttt{"package:MASS"} string. \begin{Shaded} \begin{Highlighting}[] \FunctionTok{search}\NormalTok{()} \CommentTok{\#\textgreater{} [1] ".GlobalEnv" "package:dplyr" "package:MASS" } \CommentTok{\#\textgreater{} [4] "package:stats" "package:graphics" "package:grDevices"} \CommentTok{\#\textgreater{} [7] "package:utils" "package:datasets" "package:methods" } \CommentTok{\#\textgreater{} [10] "Autoloads" "package:base"} \end{Highlighting} \end{Shaded} But, \textbf{psych} or \textbf{DescTools} packages are part of \emph{other} packages, the \texttt{library()} function calls may cause error message (in that case we did not install them before). \begin{Shaded} \begin{Highlighting}[] \FunctionTok{library}\NormalTok{(psych) }\CommentTok{\# load psych package} \FunctionTok{library}\NormalTok{(DescTools) }\CommentTok{\# load DescTools package} \end{Highlighting} \end{Shaded} \hypertarget{install-packages}{% \subsection{Install packages}\label{install-packages}} To load these packages successfully, we need to install them. These packages are on CRAN, so we type in: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{install.packages}\NormalTok{(}\StringTok{"psych"}\NormalTok{) }\CommentTok{\# installing from CRAN} \FunctionTok{install.packages}\NormalTok{(}\StringTok{"DescTools"}\NormalTok{) } \end{Highlighting} \end{Shaded} To install packages from Bioconductor, first type the following: \begin{Shaded} \begin{Highlighting}[] \ControlFlowTok{if}\NormalTok{ (}\SpecialCharTok{!}\FunctionTok{requireNamespace}\NormalTok{(}\StringTok{"BiocManager"}\NormalTok{, }\AttributeTok{quietly =} \ConstantTok{TRUE}\NormalTok{))} \FunctionTok{install.packages}\NormalTok{(}\StringTok{"BiocManager"}\NormalTok{)} \NormalTok{BiocManager}\SpecialCharTok{::}\FunctionTok{install}\NormalTok{()} \end{Highlighting} \end{Shaded} Install specific packages, e.g., \textbf{GenomicFeatures} and \textbf{AnnotationDbi}, with \begin{Shaded} \begin{Highlighting}[] \NormalTok{BiocManager}\SpecialCharTok{::}\FunctionTok{install}\NormalTok{(}\FunctionTok{c}\NormalTok{(}\StringTok{"GenomicFeatures"}\NormalTok{, }\StringTok{"AnnotationDbi"}\NormalTok{))} \end{Highlighting} \end{Shaded} The third case is installing from GitHub. 
You can install \textbf{emo} from GitHub with:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# install.packages("devtools")}
\NormalTok{devtools}\SpecialCharTok{::}\FunctionTok{install\_github}\NormalTok{(}\StringTok{"hadley/emo"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

After that, we can insert an emoji: 😄. It is worth looking at \href{https://github.com/trinker/pacman}{pacman: A package management tool for R} or \href{https://github.com/r-lib/remotes}{remotes} if you are looking for an elegant way to handle packages.

Finally, we can check the repository of our installed packages:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{inst.pkg }\OtherTok{\textless{}{-}} \FunctionTok{installed.packages}\NormalTok{()[,}\DecValTok{1}\NormalTok{] }\CommentTok{\# all installed packages}
\NormalTok{cran.pkg }\OtherTok{\textless{}{-}} \FunctionTok{available.packages}\NormalTok{(}
\FunctionTok{contrib.url}\NormalTok{(}\AttributeTok{repos =} \StringTok{"https://cran.rstudio.com/"}\NormalTok{, }
\AttributeTok{type =} \StringTok{"both"}\NormalTok{)) }\CommentTok{\# all CRAN packages}
\NormalTok{bioc.pkg }\OtherTok{\textless{}{-}}\NormalTok{ BiocManager}\SpecialCharTok{::}\FunctionTok{available}\NormalTok{() }\CommentTok{\# all CRAN \& Bioconductor packages}
\FunctionTok{library}\NormalTok{(dplyr)}
\NormalTok{repos }\OtherTok{\textless{}{-}} \FunctionTok{case\_when}\NormalTok{(}
\NormalTok{ inst.pkg }\SpecialCharTok{\%in\%}\NormalTok{ cran.pkg }\SpecialCharTok{\textasciitilde{}} \StringTok{"CRAN"}\NormalTok{,}
\SpecialCharTok{!}\NormalTok{(inst.pkg }\SpecialCharTok{\%in\%}\NormalTok{ cran.pkg) }\SpecialCharTok{\&}\NormalTok{ (inst.pkg }\SpecialCharTok{\%in\%}\NormalTok{ bioc.pkg) }\SpecialCharTok{\textasciitilde{}} \StringTok{"Bioconductor"}\NormalTok{,}
\ConstantTok{TRUE} \SpecialCharTok{\textasciitilde{}} \StringTok{"GitHub?"}
\NormalTok{)}
\NormalTok{df.pkg }\OtherTok{\textless{}{-}} \FunctionTok{data.frame}\NormalTok{(inst.pkg, repos)}
\FunctionTok{table}\NormalTok{(df.pkg}\SpecialCharTok{$}\NormalTok{repos)}
\CommentTok{\#\textgreater{} }
\CommentTok{\#\textgreater{} Bioconductor CRAN GitHub? }
\CommentTok{\#\textgreater{} 42 1646 25}
\end{Highlighting}
\end{Shaded}

\hypertarget{masking}{%
\subsection{Masking}\label{masking}}

Masking occurs when two or more ``environments'' on the search path contain one or more objects with the same name. Whenever we refer to an object by typing its name, R looks in each of the loaded environments on the search path for that object in turn, starting with the \emph{Global Environment}. If R finds an object with the name it is looking for, it stops searching. Any further objects with the same name are hidden, or ``masked.'' To avoid any potential masking issues, it is possible to reference an object within a package directly by using the \texttt{packageName::objectName} syntax, for example,

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{base}\SpecialCharTok{::}\NormalTok{pi}
\CommentTok{\#\textgreater{} [1] 3.141593}
\FunctionTok{str}\NormalTok{(MASS}\SpecialCharTok{::}\NormalTok{survey)}
\CommentTok{\#\textgreater{} \textquotesingle{}data.frame\textquotesingle{}: 237 obs. of 12 variables:}
\CommentTok{\#\textgreater{} $ Sex : Factor w/ 2 levels "Female","Male": 1 2 2 2 2 1 2 1 2 2 ...}
\CommentTok{\#\textgreater{} $ Wr.Hnd: num 18.5 19.5 18 18.8 20 18 17.7 17 20 18.5 ...}
\CommentTok{\#\textgreater{} $ NW.Hnd: num 18 20.5 13.3 18.9 20 17.7 17.7 17.3 19.5 18.5 ...}
\CommentTok{\#\textgreater{} $ W.Hnd : Factor w/ 2 levels "Left","Right": 2 1 2 2 2 2 2 2 2 2 ...}
\CommentTok{\#\textgreater{} $ Fold : Factor w/ 3 levels "L on R","Neither",..: 3 3 1 3 2 1 1 3 3 3 ...}
\CommentTok{\#\textgreater{} $ Pulse : int 92 104 87 NA 35 64 83 74 72 90 ...}
\CommentTok{\#\textgreater{} $ Clap : Factor w/ 3 levels "Left","Neither",..: 1 1 2 2 3 3 3 3 3 3 ...}
\CommentTok{\#\textgreater{} $ Exer : Factor w/ 3 levels "Freq","None",..: 3 2 2 2 3 3 1 1 3 3 ...}
\CommentTok{\#\textgreater{} $ Smoke : Factor w/ 4 levels "Heavy","Never",..: 2 4 3 2 2 2 2 2 2 2 ...}
\CommentTok{\#\textgreater{} $ Height: num 173 178 NA 160 165 ...}
\CommentTok{\#\textgreater{} $ M.I : Factor w/ 2 levels "Imperial","Metric": 2 1 NA 2 2 1 1 2 2 2 ...}
\CommentTok{\#\textgreater{} $ Age : num 18.2 17.6 16.9 20.3 23.7 ...}
\end{Highlighting}
\end{Shaded}
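As a small illustration of masking, the following sketch (the name \texttt{pi} is used purely for demonstration) creates an object in the Global Environment that hides the built-in constant, and then removes it again:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{pi }\CommentTok{\# the built{-}in constant from the base package}
\CommentTok{\#\textgreater{} [1] 3.141593}
\NormalTok{pi }\OtherTok{\textless{}{-}} \DecValTok{3} \CommentTok{\# an object named pi in the Global Environment now masks it}
\NormalTok{pi}
\CommentTok{\#\textgreater{} [1] 3}
\NormalTok{base}\SpecialCharTok{::}\NormalTok{pi }\CommentTok{\# the package version is still reachable with ::}
\CommentTok{\#\textgreater{} [1] 3.141593}
\FunctionTok{rm}\NormalTok{(pi) }\CommentTok{\# removing the masking object unmasks base::pi}
\end{Highlighting}
\end{Shaded}

The same mechanism applies when two loaded packages contain functions with the same name: the package attached most recently comes earlier on the search path, so its version is found first, and the \texttt{::} syntax resolves the ambiguity.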
\hypertarget{internal-help}{%
\section{Internal help}\label{internal-help}}

The \texttt{help()} function can be used to display help on a function or indeed any R object. If you know the name of the object you require help with, you can use the \texttt{help()} function or its shorthand, \texttt{?}.

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{help}\NormalTok{(mean)}
\NormalTok{?mean}
\end{Highlighting}
\end{Shaded}

A general search of all help files can be achieved using either the \texttt{help.search()} function or the shorthand version, \texttt{??}.

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{help.search}\NormalTok{(}\StringTok{"test"}\NormalTok{)}
\NormalTok{??test}
\end{Highlighting}
\end{Shaded}

You can also read about any package with the \texttt{help()} function.

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{help}\NormalTok{(}\AttributeTok{package=}\StringTok{"MASS"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\hypertarget{getting-started-with-a-data-analysis}{%
\chapter{Getting started with a data analysis}\label{getting-started-with-a-data-analysis}}

\hypertarget{terminology}{%
\section{Terminology}\label{terminology}}

Before we begin data management tasks, a few vocabulary terms would be useful to discuss. Data scientists are usually interested in the characteristics and behaviors of humans and organizations. To understand these things, they often measure and record information about people or organizations.

\textbf{Dataset 1 - Marijuana legalization}

For example, a data scientist working in social science might be interested in understanding whether age is related to votes for marijuana legalization. To get this information, in several years, including 2016, the \href{https://gssdataexplorer.norc.org}{GSS} survey included a question asking the survey participants whether they support marijuana legalization. The GSS question was worded as follows:

\begin{itemize}
\tightlist
\item
  Do you think the use of marijuana should be legal or not?
\end{itemize}

Below the question, the different response options were listed:

\begin{itemize}
\tightlist
\item
  legal,
\item
  not legal,
\item
  don't know (\texttt{DK}),
\item
  no answer (\texttt{NA}),
\item
  not applicable (\texttt{IAP}).
\end{itemize}

The GSS Data Explorer (\url{https://gssdataexplorer.norc.org}) allows people to create a free account and browse the data that have been collected in the surveys. We used the Data Explorer to select the marijuana legalization question and a question about age. Age is important: since marijuana legalization has so far been decided primarily by voters through ballot initiatives, the success of future initiatives will depend on the support of people of voting age. If younger people are more supportive, this suggests that over time the electorate will become more supportive as older voters are gradually replaced by younger ones.

We saved the age and vote data from the GSS in a data file with the file name \texttt{legal\_weed\_age\_GSS2016\_ch1.csv}. You can see the first 6 rows from this data file:

\begin{longtable}[]{@{}ll@{}}
\caption{\label{tab:unnamed-chunk-2}Data set for marijuana legalization}\tabularnewline
\toprule
grass & age\tabularnewline
\midrule
\endfirsthead
\toprule
grass & age\tabularnewline
\midrule
\endhead
IAP & 47\tabularnewline
LEGAL & 61\tabularnewline
NOT LEGAL & 72\tabularnewline
IAP & 43\tabularnewline
LEGAL & 55\tabularnewline
LEGAL & 53\tabularnewline
\bottomrule
\end{longtable}

As you can see, each person is an \emph{observation}, and there are two \emph{variables}, voting behavior (\texttt{grass}) and \texttt{age}. In a typical \emph{dataset}, observations are the rows and variables are the columns.

\textbf{Example 2 - Student survey}

Another example is a data frame that contains the responses of 237 Statistics I students at the University of Adelaide to a number of questions. It contains the following variables:

\begin{itemize}
\tightlist
\item
  \texttt{Sex} - The sex of the student. (Factor with levels ``Male'' and ``Female''.)
\item
  \texttt{Wr.Hnd} - span (distance from tip of thumb to tip of little finger of spread hand) of writing hand, in centimetres.
\item
  \texttt{NW.Hnd} - span of non-writing hand.
\item
  \texttt{W.Hnd} - writing hand of student. (Factor, with levels ``Left'' and ``Right''.)
\item
  \texttt{Fold} - ``Fold your arms! Which is on top?'' (Factor, with levels ``R on L'', ``L on R'', ``Neither''.)
\item
  \texttt{Pulse} - pulse rate of student (beats per minute).
\item
  \texttt{Clap} - ``Clap your hands! Which hand is on top?'' (Factor, with levels ``Right'', ``Left'', ``Neither''.)
\item
  \texttt{Exer} - how often the student exercises. (Factor, with levels ``Freq'' (frequently), ``Some'', ``None''.)
\item
  \texttt{Smoke} - how much the student smokes. (Factor, levels ``Heavy'', ``Regul'' (regularly), ``Occas'' (occasionally), ``Never''.)
\item
  \texttt{Height} - height of the student in centimetres.
\item
  \texttt{M.I} - whether the student expressed height in imperial (feet/inches) or metric (centimetres/metres) units. (Factor, levels ``Metric'', ``Imperial''.)
\item
  \texttt{Age} - age of the student in years.
\end{itemize}

\begin{longtable}[]{@{}lrrllrlllrlr@{}}
\caption{\label{tab:unnamed-chunk-3}Student survey}\tabularnewline
\toprule
Sex & Wr.Hnd & NW.Hnd & W.Hnd & Fold & Pulse & Clap & Exer & Smoke & Height & M.I & Age\tabularnewline
\midrule
\endfirsthead
\toprule
Sex & Wr.Hnd & NW.Hnd & W.Hnd & Fold & Pulse & Clap & Exer & Smoke & Height & M.I & Age\tabularnewline
\midrule
\endhead
Female & 18.5 & 18.0 & Right & R on L & 92 & Left & Some & Never & 173.00 & Metric & 18.250\tabularnewline
Male & 19.5 & 20.5 & Left & R on L & 104 & Left & None & Regul & 177.80 & Imperial & 17.583\tabularnewline
Male & 18.0 & 13.3 & Right & L on R & 87 & Neither & None & Occas & NA & NA & 16.917\tabularnewline
Male & 18.8 & 18.9 & Right & R on L & NA & Neither & None & Never & 160.00 & Metric & 20.333\tabularnewline
Male & 20.0 & 20.0 & Right & Neither & 35 & Right & Some & Never & 165.00 & Metric & 23.667\tabularnewline
Female & 18.0 & 17.7 & Right & L on R & 64 & Right & Some & Never & 172.72 & Imperial & 21.000\tabularnewline
\bottomrule
\end{longtable}

In statistics, data are organized in what we call a \emph{data matrix} or \emph{dataset}, where each row represents an observation or a case and each column represents a variable. If you have ever used spreadsheets, for example an Excel spreadsheet, this representation should be familiar to you as well.

There are two types of variables, numerical and categorical. Numerical, in other words quantitative, variables take on numerical values. It is sensible to add, subtract, take averages, etc., with these values. Categorical, or qualitative, variables take on a limited number of distinct categories. These categories can be identified with numbers or labels, but it wouldn't be sensible to do arithmetic operations with these values.

Numerical variables can further be categorized as continuous or discrete. Continuous numerical variables are usually measured, such as height, and they can take on any numerical value. While we tend to round our height when we record it, it's actually measured on a continuous scale. Discrete numerical variables are generally counted, such as the number of cars in a household. These can only be whole, non-negative numbers.

Categorical variables that have ordered levels are called ordinal. Think about a survey question where you're asked how satisfied you are with the customer service you received, and the options are ``very unsatisfied'', ``unsatisfied'', ``neutral'', ``satisfied'', or ``very satisfied''. These levels have an inherent ordering, and hence the variable would be called ordinal. If the levels of a categorical variable do not have an inherent ordering to them, then the variable is simply called nominal.

\textbf{These terms in statistics have a pair in R}. Dataset corresponds to data frame. Variable corresponds to a column of a data frame. Discrete variables must be an integer or double vector in R. Continuous variables must be an integer or double vector in R, as well. Nominal or ordinal variables must be factors in R. To sum it up, study the list below; a short code sketch follows the list.

Terms in statistics - \textbf{terms in R} - \emph{example}:

\begin{itemize}
\item
  Data matrix, dataset - \textbf{data frame} - \emph{marijuana legalization dataset and survey dataset}
\item
  Variable - \textbf{columns of data frame} - \emph{each column of the two datasets: grass, age, Sex, Wr.Hnd, etc.}

  \begin{itemize}
  \item
    numerical / quantitative

    \begin{itemize}
    \tightlist
    \item
      discrete - \textbf{integer or double vector} - \emph{Pulse}
    \item
      continuous - \textbf{integer or double vector} - \emph{age, Wr.Hnd, NW.Hnd, Height, Age}
    \end{itemize}
  \item
    categorical / qualitative

    \begin{itemize}
    \tightlist
    \item
      ordinal - \textbf{factor} - \emph{Exer, Smoke}
    \item
      nominal - \textbf{factor} - \emph{grass, Sex, W.Hnd, Fold, Clap, M.I}
    \end{itemize}
  \end{itemize}
\end{itemize}
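To make the mapping concrete, here is a minimal sketch; the tiny data frame \texttt{toy} and its values are made up purely for illustration. It builds one variable of each kind and checks the result with \texttt{str()}:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{toy }\OtherTok{\textless{}{-}} \FunctionTok{data.frame}\NormalTok{(}
\NormalTok{  }\AttributeTok{height =} \FunctionTok{c}\NormalTok{(}\FloatTok{172.5}\NormalTok{, }\FloatTok{181.2}\NormalTok{, }\FloatTok{165.0}\NormalTok{), }\CommentTok{\# continuous numerical: double vector}
\NormalTok{  }\AttributeTok{pulse =} \FunctionTok{c}\NormalTok{(}\DecValTok{72}\NormalTok{, }\DecValTok{80}\NormalTok{, }\DecValTok{65}\NormalTok{), }\CommentTok{\# discrete numerical: integer or double vector}
\NormalTok{  }\AttributeTok{smoke =} \FunctionTok{factor}\NormalTok{(}\FunctionTok{c}\NormalTok{(}\StringTok{"Never"}\NormalTok{, }\StringTok{"Occas"}\NormalTok{, }\StringTok{"Never"}\NormalTok{),}
\NormalTok{                 }\AttributeTok{levels =} \FunctionTok{c}\NormalTok{(}\StringTok{"Never"}\NormalTok{, }\StringTok{"Occas"}\NormalTok{, }\StringTok{"Regul"}\NormalTok{, }\StringTok{"Heavy"}\NormalTok{)), }\CommentTok{\# ordinal: factor}
\NormalTok{  }\AttributeTok{sex =} \FunctionTok{factor}\NormalTok{(}\FunctionTok{c}\NormalTok{(}\StringTok{"Female"}\NormalTok{, }\StringTok{"Male"}\NormalTok{, }\StringTok{"Male"}\NormalTok{)) }\CommentTok{\# nominal: factor}
\NormalTok{)}
\FunctionTok{str}\NormalTok{(toy)}
\end{Highlighting}
\end{Shaded}

Calling \texttt{str(toy)} should report two numeric columns and two factor columns, matching the statistics terms above.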
\hypertarget{read-and-write-data}{%
\section{Read and write data}\label{read-and-write-data}}

R has an extensive range of functions to import many types of data files. For example, R can import data from text files, from Microsoft Excel, from popular statistical packages, and from web sites.

\hypertarget{importing-data-from-a-delimited-text-file}{%
\subsection{Importing data from a delimited text file}\label{importing-data-from-a-delimited-text-file}}

You can import data from delimited text files using \texttt{read.table()}, a function that reads a file in table format and saves it as a data frame. Each row of the table appears as one line in the file. The syntax is

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{mydataframe }\OtherTok{\textless{}{-}} \FunctionTok{read.table}\NormalTok{(file, options)}
\end{Highlighting}
\end{Shaded}

where file is a delimited file and the options are parameters controlling how the data is processed. The most common options are listed below; a short example combining several of them follows the list.

\begin{itemize}
\tightlist
\item
  \texttt{header=} - A logical value indicating whether the file contains the variable names in the first line.
\item
  \texttt{sep=} - The delimiter separating data values. The default is \texttt{sep=""}, which denotes one or more spaces, tabs, new lines, or carriage returns. Use \texttt{sep=","} to read comma-delimited files, \texttt{sep="\textbackslash{}t"} to read tab-delimited files, and \texttt{sep=";"} to read semicolon-delimited files.
\item
  \texttt{dec=} - The character (\texttt{","} or \texttt{"."}) used in the file for decimal points.
\item
  \texttt{quote=} - Character(s) used to delimit strings that contain special characters. By default this is either double (\texttt{"}) or single (\texttt{\textquotesingle{}}) quotes.
\item
  \texttt{comment.char=} - A character vector of length one containing a single character or an empty string. Use \texttt{""} to turn off the interpretation of comments altogether.
\item
  \texttt{fileEncoding=} - Character string for the encoding name, e.g.~\texttt{"UTF-8"}, \texttt{"UTF-8-BOM"} or \texttt{"latin2"}.
\end{itemize}
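For instance, a semicolon-delimited file that uses decimal commas (common in many European locales) could be read as follows. This is only a sketch: the file name \texttt{data/example\_semicolon.csv} is a made-up placeholder, and the encoding is assumed to be UTF-8.

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# hypothetical semicolon{-}delimited file with decimal commas}
\NormalTok{dat }\OtherTok{\textless{}{-}} \FunctionTok{read.table}\NormalTok{(}\AttributeTok{file =} \StringTok{"data/example\_semicolon.csv"}\NormalTok{,}
\NormalTok{                  }\AttributeTok{header =} \ConstantTok{TRUE}\NormalTok{, }\AttributeTok{sep =} \StringTok{";"}\NormalTok{, }\AttributeTok{dec =} \StringTok{","}\NormalTok{,}
\NormalTok{                  }\AttributeTok{fileEncoding =} \StringTok{"UTF{-}8"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}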
Consider a text file named \texttt{legal\_weed\_age\_GSS2016\_ch1.csv} containing the voters' responses to the marijuana legalization question and their age. Each line of the file represents a voter. The first line contains the variable names, separated with commas. Each subsequent line contains a voter's information, also separated with commas. The first few lines of the file are as follows:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{grass,age}
\NormalTok{IAP,}\DecValTok{47}
\NormalTok{LEGAL,}\DecValTok{61}
\NormalTok{NOT LEGAL,}\DecValTok{72}
\NormalTok{IAP,}\DecValTok{43}
\NormalTok{LEGAL,}\DecValTok{55}
\NormalTok{LEGAL,}\DecValTok{53}
\NormalTok{IAP,}\DecValTok{50}
\NormalTok{NOT LEGAL,}\DecValTok{23}
\end{Highlighting}
\end{Shaded}

The file can be imported into a data frame using the following code:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# read the GSS 2016 data }
\NormalTok{gss}\FloatTok{.2016} \OtherTok{\textless{}{-}} \FunctionTok{read.table}\NormalTok{(}\AttributeTok{file =} \StringTok{"data/legal\_weed\_age\_GSS2016\_ch1.csv"}\NormalTok{, }
\AttributeTok{header =}\NormalTok{ T, }\AttributeTok{sep =} \StringTok{","}\NormalTok{, }\AttributeTok{fileEncoding =} \StringTok{"UTF{-}8{-}BOM"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

The results are as follows:

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{head}\NormalTok{(gss}\FloatTok{.2016}\NormalTok{) }\CommentTok{\# first 6 rows}
\CommentTok{\#\textgreater{} grass age}
\CommentTok{\#\textgreater{} 1 IAP 47}
\CommentTok{\#\textgreater{} 2 LEGAL 61}
\CommentTok{\#\textgreater{} 3 NOT LEGAL 72}
\CommentTok{\#\textgreater{} 4 IAP 43}
\CommentTok{\#\textgreater{} 5 LEGAL 55}
\CommentTok{\#\textgreater{} 6 LEGAL 53}
\FunctionTok{str}\NormalTok{(gss}\FloatTok{.2016}\NormalTok{)}
\CommentTok{\#\textgreater{} \textquotesingle{}data.frame\textquotesingle{}: 2867 obs. of 2 variables:}
\CommentTok{\#\textgreater{} $ grass: chr "IAP" "LEGAL" "NOT LEGAL" "IAP" ...}
\CommentTok{\#\textgreater{} $ age : chr "47" "61" "72" "43" ...}
\end{Highlighting}
\end{Shaded}

There are several interesting things to note about how the data is imported. By default, \texttt{read.table()} does not convert character variables to factors. You can change this behaviour: for example, including the option \texttt{stringsAsFactors=TRUE} converts all character variables to factors. The variable \texttt{age} is a character vector, which is not desirable. Age is a continuous variable, so it should be numeric in R. We will discuss this issue in detail later.

\hypertarget{importing-data-from-excel}{%
\subsection{Importing data from Excel}\label{importing-data-from-excel}}

The best way to read an Excel file is to import the worksheet directly using the \textbf{rio} package. Be sure to download and install it before you first use it. Alternatively, you can export the data to a comma-delimited file from Excel and import it into R using the method described earlier. The \textbf{rio} package can be used to read and write many file formats. The \texttt{import()} function imports a worksheet into a data frame. The simplest format is

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{import}\NormalTok{(file)}
\end{Highlighting}
\end{Shaded}

where \texttt{file=} is the path to an Excel workbook. Let's import the student survey: the following code imports the first worksheet from the workbook \texttt{survey.xlsx}, stored in the project \texttt{data} directory, and saves it as the data frame \texttt{survey}.
\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{library}\NormalTok{(rio)}
\NormalTok{survey }\OtherTok{\textless{}{-}} \FunctionTok{import}\NormalTok{(}\AttributeTok{file =} \StringTok{"data/survey.xlsx"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

The results are as follows:

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{head}\NormalTok{(survey) }\CommentTok{\# first 6 rows}
\CommentTok{\#\textgreater{} Sex Wr.Hnd NW.Hnd W.Hnd Fold Pulse Clap Exer Smoke}
\CommentTok{\#\textgreater{} 1 Female 18.5 18.0 Right R on L 92 Left Some Never}
\CommentTok{\#\textgreater{} 2 Male 19.5 20.5 Left R on L 104 Left None Regul}
\CommentTok{\#\textgreater{} 3 Male 18.0 13.3 Right L on R 87 Neither None Occas}
\CommentTok{\#\textgreater{} 4 Male 18.8 18.9 Right R on L NA Neither None Never}
\CommentTok{\#\textgreater{} 5 Male 20.0 20.0 Right Neither 35 Right Some Never}
\CommentTok{\#\textgreater{} 6 Female 18.0 17.7 Right L on R 64 Right Some Never}
\CommentTok{\#\textgreater{} Height M.I Age}
\CommentTok{\#\textgreater{} 1 173.00 Metric 18.250}
\CommentTok{\#\textgreater{} 2 177.80 Imperial 17.583}
\CommentTok{\#\textgreater{} 3 NA \textless{}NA\textgreater{} 16.917}
\CommentTok{\#\textgreater{} 4 160.00 Metric 20.333}
\CommentTok{\#\textgreater{} 5 165.00 Metric 23.667}
\CommentTok{\#\textgreater{} 6 172.72 Imperial 21.000}
\FunctionTok{str}\NormalTok{(survey)}
\CommentTok{\#\textgreater{} \textquotesingle{}data.frame\textquotesingle{}: 237 obs. of 12 variables:}
\CommentTok{\#\textgreater{} $ Sex : chr "Female" "Male" "Male" "Male" ...}
\CommentTok{\#\textgreater{} $ Wr.Hnd: num 18.5 19.5 18 18.8 20 18 17.7 17 20 18.5 ...}
\CommentTok{\#\textgreater{} $ NW.Hnd: num 18 20.5 13.3 18.9 20 17.7 17.7 17.3 19.5 18.5 ...}
\CommentTok{\#\textgreater{} $ W.Hnd : chr "Right" "Left" "Right" "Right" ...}
\CommentTok{\#\textgreater{} $ Fold : chr "R on L" "R on L" "L on R" "R on L" ...}
\CommentTok{\#\textgreater{} $ Pulse : num 92 104 87 NA 35 64 83 74 72 90 ...}
\CommentTok{\#\textgreater{} $ Clap : chr "Left" "Left" "Neither" "Neither" ...}
\CommentTok{\#\textgreater{} $ Exer : chr "Some" "None" "None" "None" ...}
\CommentTok{\#\textgreater{} $ Smoke : chr "Never" "Regul" "Occas" "Never" ...}
\CommentTok{\#\textgreater{} $ Height: num 173 178 NA 160 165 ...}
\CommentTok{\#\textgreater{} $ M.I : chr "Metric" "Imperial" NA "Metric" ...}
\CommentTok{\#\textgreater{} $ Age : num 18.2 17.6 16.9 20.3 23.7 ...}
\end{Highlighting}
\end{Shaded}

As you can see, the variables \texttt{Sex}, \texttt{W.Hnd}, etc. are character vectors, which is not desirable. They are categorical variables, so they must be factors in R.

\hypertarget{exporting-data-from-r}{%
\subsection{Exporting data from R}\label{exporting-data-from-r}}

So far, we have reviewed a wide range of methods for importing data into R. But sometimes you'll want to go the other way - exporting data from R - so that data can be archived or imported into external applications. Now you'll learn how to output an R object to a delimited text file, an Excel spreadsheet, or a statistical application (such as SPSS, SAS, or Stata).

You can use the \texttt{write.table()} function to output an R object to a delimited text file. The format is

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{write.table}\NormalTok{(x, outfile, }\AttributeTok{sep=}\NormalTok{delimiter, }\AttributeTok{quote=}\ConstantTok{TRUE}\NormalTok{, }\AttributeTok{na=}\StringTok{"NA"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

where \texttt{x} is the object and \texttt{outfile} is the target file.
For example, the statement

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{write.table}\NormalTok{(}\AttributeTok{x =}\NormalTok{ survey, }\AttributeTok{file =} \StringTok{"output/data/survey.txt"}\NormalTok{, }\AttributeTok{sep =} \StringTok{"}\SpecialCharTok{\textbackslash{}t}\StringTok{"}\NormalTok{, }
\AttributeTok{dec =} \StringTok{","}\NormalTok{, }\AttributeTok{row.names =} \ConstantTok{FALSE}\NormalTok{, }\AttributeTok{quote =} \ConstantTok{FALSE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

saves the dataset \texttt{survey} to a tab-delimited file named \texttt{survey.txt} in the project \texttt{output/data} directory. Replacing \texttt{sep="\textbackslash{}t"} with \texttt{sep=";"} saves the data in a semicolon-delimited file. By default, strings are enclosed in quotes (\texttt{""}) and missing values are written as \texttt{NA}. Here we do not print row names (\texttt{row.names\ =\ FALSE}) and do not quote strings (\texttt{quote\ =\ FALSE}) in the output text file.

The \texttt{export()} function in the \textbf{rio} package can be used to save an R data frame to an Excel workbook. For example, the statements

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{library}\NormalTok{(rio)}
\FunctionTok{export}\NormalTok{(}\AttributeTok{x =}\NormalTok{ gss}\FloatTok{.2016}\NormalTok{, }\AttributeTok{file =} \StringTok{"output/data/gss.xlsx"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

export the data frame \texttt{gss.2016} to a worksheet (Sheet 1 by default) in an Excel workbook named \texttt{gss.xlsx} in the project \texttt{output/data} directory. By default, the variable names in the dataset are used to create column headings in the spreadsheet, and row names are placed in the first column of the spreadsheet. If \texttt{gss.xlsx} already exists, it's overwritten.

The \texttt{export()} function in the \textbf{rio} package can also be used to export a data frame to an external statistical application. For example, the code

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{library}\NormalTok{(rio)}
\FunctionTok{export}\NormalTok{(}\AttributeTok{x =}\NormalTok{ survey, }\AttributeTok{file =} \StringTok{"output/data/survey.sav"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

exports the data frame \texttt{survey} into an SPSS data file named \texttt{survey.sav}.
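As a quick sanity check (a sketch, assuming the file above has been written), the exported SPSS file can be read back with \texttt{rio::import()} and compared with the original object, for example by checking its dimensions:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{survey.check }\OtherTok{\textless{}{-}}\NormalTok{ rio}\SpecialCharTok{::}\FunctionTok{import}\NormalTok{(}\AttributeTok{file =} \StringTok{"output/data/survey.sav"}\NormalTok{) }\CommentTok{\# read the SPSS file back}
\FunctionTok{dim}\NormalTok{(survey.check) }\CommentTok{\# should match dim(survey): 237 rows and 12 columns}
\end{Highlighting}
\end{Shaded}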
Please study carefully the following codes and outputs:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Export (tab or semicolon) delimited text files with and without encoding}
\FunctionTok{write.table}\NormalTok{(}\AttributeTok{x =}\NormalTok{ survey, }\AttributeTok{file =} \StringTok{"output/data/survey.txt"}\NormalTok{, }\AttributeTok{sep =} \StringTok{"}\SpecialCharTok{\textbackslash{}t}\StringTok{"}\NormalTok{, }
\AttributeTok{dec =} \StringTok{","}\NormalTok{, }\AttributeTok{row.names =} \ConstantTok{FALSE}\NormalTok{, }\AttributeTok{quote =} \ConstantTok{FALSE}\NormalTok{)}
\FunctionTok{write.table}\NormalTok{(}\AttributeTok{x =}\NormalTok{ survey, }\AttributeTok{file =} \StringTok{"output/data/survey.csv"}\NormalTok{, }\AttributeTok{sep =} \StringTok{";"}\NormalTok{, }
\AttributeTok{dec =} \StringTok{","}\NormalTok{, }\AttributeTok{row.names =} \ConstantTok{FALSE}\NormalTok{, }\AttributeTok{quote =} \ConstantTok{FALSE}\NormalTok{)}
\FunctionTok{write.table}\NormalTok{(}\AttributeTok{x =}\NormalTok{ survey, }\AttributeTok{file =} \StringTok{"output/data/survey\_utf{-}8.txt"}\NormalTok{, }\AttributeTok{sep =} \StringTok{"}\SpecialCharTok{\textbackslash{}t}\StringTok{"}\NormalTok{, }
\AttributeTok{dec =} \StringTok{","}\NormalTok{, }\AttributeTok{row.names =} \ConstantTok{FALSE}\NormalTok{, }\AttributeTok{quote =} \ConstantTok{FALSE}\NormalTok{, }\AttributeTok{fileEncoding =} \StringTok{"UTF{-}8"}\NormalTok{)}
\FunctionTok{write.table}\NormalTok{(}\AttributeTok{x =}\NormalTok{ survey, }\AttributeTok{file =} \StringTok{"output/data/survey\_latin2.txt"}\NormalTok{, }\AttributeTok{sep =} \StringTok{"}\SpecialCharTok{\textbackslash{}t}\StringTok{"}\NormalTok{, }
\AttributeTok{dec =} \StringTok{","}\NormalTok{, }\AttributeTok{row.names =} \ConstantTok{FALSE}\NormalTok{, }\AttributeTok{quote =} \ConstantTok{FALSE}\NormalTok{, }\AttributeTok{fileEncoding =} \StringTok{"latin2"}\NormalTok{)}
\FunctionTok{write.table}\NormalTok{(}\AttributeTok{x =}\NormalTok{ survey, }\AttributeTok{file =} \StringTok{"output/data/survey\_utf{-}8.csv"}\NormalTok{, }\AttributeTok{sep =} \StringTok{";"}\NormalTok{, }
\AttributeTok{dec =} \StringTok{","}\NormalTok{, }\AttributeTok{row.names =} \ConstantTok{FALSE}\NormalTok{, }\AttributeTok{quote =} \ConstantTok{FALSE}\NormalTok{, }\AttributeTok{fileEncoding =} \StringTok{"UTF{-}8"}\NormalTok{)}
\FunctionTok{write.table}\NormalTok{(}\AttributeTok{x =}\NormalTok{ survey, }\AttributeTok{file =} \StringTok{"output/data/survey\_latin2.csv"}\NormalTok{, }\AttributeTok{sep =} \StringTok{";"}\NormalTok{, }
\AttributeTok{dec =} \StringTok{","}\NormalTok{, }\AttributeTok{row.names =} \ConstantTok{FALSE}\NormalTok{, }\AttributeTok{quote =} \ConstantTok{FALSE}\NormalTok{, }\AttributeTok{fileEncoding =} \StringTok{"latin2"}\NormalTok{)}
\CommentTok{\# Export Excel and SPSS files}
\FunctionTok{library}\NormalTok{(rio)}
\FunctionTok{export}\NormalTok{(}\AttributeTok{x =}\NormalTok{ survey, }\AttributeTok{file =} \StringTok{"output/data/survey.xlsx"}\NormalTok{)}
\FunctionTok{export}\NormalTok{(}\AttributeTok{x =}\NormalTok{ survey, }\AttributeTok{file =} \StringTok{"output/data/survey.sav"}\NormalTok{)}
\FunctionTok{export}\NormalTok{(}\AttributeTok{x =}\NormalTok{ gss}\FloatTok{.2016}\NormalTok{, }\AttributeTok{file =} \StringTok{"output/data/gss.xlsx"}\NormalTok{)}
\FunctionTok{export}\NormalTok{(}\AttributeTok{x =}\NormalTok{ gss}\FloatTok{.2016}\NormalTok{, }\AttributeTok{file =} \StringTok{"output/data/gss.sav"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
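When a file is written with a specific \texttt{fileEncoding}, the same encoding should be supplied when the file is read back. Here is a minimal sketch for the tab-delimited latin2 file created above; note that it was written with decimal commas, so \texttt{dec\ =\ ","} is needed as well:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{survey.l2 }\OtherTok{\textless{}{-}} \FunctionTok{read.table}\NormalTok{(}\AttributeTok{file =} \StringTok{"output/data/survey\_latin2.txt"}\NormalTok{,}
\NormalTok{                        }\AttributeTok{header =} \ConstantTok{TRUE}\NormalTok{, }\AttributeTok{sep =} \StringTok{"}\SpecialCharTok{\textbackslash{}t}\StringTok{"}\NormalTok{, }\AttributeTok{dec =} \StringTok{","}\NormalTok{,}
\NormalTok{                        }\AttributeTok{fileEncoding =} \StringTok{"latin2"}\NormalTok{)}
\FunctionTok{str}\NormalTok{(survey.l2)}
\end{Highlighting}
\end{Shaded}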
\hypertarget{data-manipulation}{% \section{Data manipulation}\label{data-manipulation}} In the previous chapter, we covered a variety of methods for importing data into R. Unfortunately, getting your data in the rectangular arrangement of a matrix or data frame is only the first step in preparing it for analysis. In this early stage we try to get as much information as we can. \hypertarget{get-information}{% \subsection{Get information}\label{get-information}} When working with (large) data frames, you must first develop a clear understanding of the structure and main elements of the data set. Therefore, it can often be useful to show only a small part of the entire data set. To do this in R, you can use the functions \texttt{head()} or \texttt{tail()}. The \texttt{head()} function shows the first part of the data frame. The \texttt{tail()} function shows the last part. Both functions print a top line called the \emph{header} which contains the names of the different variables in the data set. \begin{Shaded} \begin{Highlighting}[] \FunctionTok{head}\NormalTok{(gss}\FloatTok{.2016}\NormalTok{)} \CommentTok{\#\textgreater{} grass age} \CommentTok{\#\textgreater{} 1 IAP 47} \CommentTok{\#\textgreater{} 2 LEGAL 61} \CommentTok{\#\textgreater{} 3 NOT LEGAL 72} \CommentTok{\#\textgreater{} 4 IAP 43} \CommentTok{\#\textgreater{} 5 LEGAL 55} \CommentTok{\#\textgreater{} 6 LEGAL 53} \FunctionTok{tail}\NormalTok{(gss}\FloatTok{.2016}\NormalTok{, }\AttributeTok{n =} \DecValTok{3}\NormalTok{)} \CommentTok{\#\textgreater{} grass age} \CommentTok{\#\textgreater{} 2865 LEGAL 87} \CommentTok{\#\textgreater{} 2866 IAP 55} \CommentTok{\#\textgreater{} 2867 NOT LEGAL 72} \end{Highlighting} \end{Shaded} Another method to get a rapid overview of the data is the \texttt{str()} function. The \texttt{str()} function shows the structure of the data set. \begin{Shaded} \begin{Highlighting}[] \FunctionTok{str}\NormalTok{(gss}\FloatTok{.2016}\NormalTok{)} \CommentTok{\#\textgreater{} \textquotesingle{}data.frame\textquotesingle{}: 2867 obs. of 2 variables:} \CommentTok{\#\textgreater{} $ grass: chr "IAP" "LEGAL" "NOT LEGAL" "IAP" ...} \CommentTok{\#\textgreater{} $ age : chr "47" "61" "72" "43" ...} \end{Highlighting} \end{Shaded} For a data frame it gives the following information: \begin{itemize} \tightlist \item The total number of observations (e.g.~2867 voters) \item The total number of variables (e.g.~2 variables) \item A full list of the variables names (\texttt{grass}, \texttt{age}) \item The data type of each variable (\texttt{chr} ) \item The first observations \end{itemize} When you receive a new data frame, applying the \texttt{str()} function is often the first step. It is a great way to get more insight into the data set before deeper analysis. Please study carefully the following codes and outputs: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{str}\NormalTok{(gss}\FloatTok{.2016}\NormalTok{) }\CommentTok{\# Structure of an Arbitrary R Object} \CommentTok{\#\textgreater{} \textquotesingle{}data.frame\textquotesingle{}: 2867 obs. 
of 2 variables:}
\CommentTok{\#\textgreater{} $ grass: chr "IAP" "LEGAL" "NOT LEGAL" "IAP" ...}
\CommentTok{\#\textgreater{} $ age : chr "47" "61" "72" "43" ...}
\FunctionTok{head}\NormalTok{(gss}\FloatTok{.2016}\NormalTok{) }\CommentTok{\# Return the First Parts of an Object}
\CommentTok{\#\textgreater{} grass age}
\CommentTok{\#\textgreater{} 1 IAP 47}
\CommentTok{\#\textgreater{} 2 LEGAL 61}
\CommentTok{\#\textgreater{} 3 NOT LEGAL 72}
\CommentTok{\#\textgreater{} 4 IAP 43}
\CommentTok{\#\textgreater{} 5 LEGAL 55}
\CommentTok{\#\textgreater{} 6 LEGAL 53}
\FunctionTok{dim}\NormalTok{(gss}\FloatTok{.2016}\NormalTok{) }\CommentTok{\# Dimensions of an Object}
\CommentTok{\#\textgreater{} [1] 2867 2}
\FunctionTok{ncol}\NormalTok{(gss}\FloatTok{.2016}\NormalTok{) }\CommentTok{\# The Number of Columns of a data frame}
\CommentTok{\#\textgreater{} [1] 2}
\FunctionTok{nrow}\NormalTok{(gss}\FloatTok{.2016}\NormalTok{) }\CommentTok{\# The Number of Rows of a data frame}
\CommentTok{\#\textgreater{} [1] 2867}
\FunctionTok{names}\NormalTok{(gss}\FloatTok{.2016}\NormalTok{) }\CommentTok{\# The Column Names of an Object}
\CommentTok{\#\textgreater{} [1] "grass" "age"}
\FunctionTok{typeof}\NormalTok{(gss}\FloatTok{.2016}\NormalTok{) }\CommentTok{\# Type of an Object}
\CommentTok{\#\textgreater{} [1] "list"}
\FunctionTok{class}\NormalTok{(gss}\FloatTok{.2016}\NormalTok{) }\CommentTok{\# Class of an Object}
\CommentTok{\#\textgreater{} [1] "data.frame"}
\FunctionTok{str}\NormalTok{(survey)}
\CommentTok{\#\textgreater{} \textquotesingle{}data.frame\textquotesingle{}: 237 obs. of 12 variables:}
\CommentTok{\#\textgreater{} $ Sex : chr "Female" "Male" "Male" "Male" ...}
\CommentTok{\#\textgreater{} $ Wr.Hnd: num 18.5 19.5 18 18.8 20 18 17.7 17 20 18.5 ...}
\CommentTok{\#\textgreater{} $ NW.Hnd: num 18 20.5 13.3 18.9 20 17.7 17.7 17.3 19.5 18.5 ...}
\CommentTok{\#\textgreater{} $ W.Hnd : chr "Right" "Left" "Right" "Right" ...}
\CommentTok{\#\textgreater{} $ Fold : chr "R on L" "R on L" "L on R" "R on L" ...}
\CommentTok{\#\textgreater{} $ Pulse : num 92 104 87 NA 35 64 83 74 72 90 ...}
\CommentTok{\#\textgreater{} $ Clap : chr "Left" "Left" "Neither" "Neither" ...}
\CommentTok{\#\textgreater{} $ Exer : chr "Some" "None" "None" "None" ...}
\CommentTok{\#\textgreater{} $ Smoke : chr "Never" "Regul" "Occas" "Never" ...}
\CommentTok{\#\textgreater{} $ Height: num 173 178 NA 160 165 ...}
\CommentTok{\#\textgreater{} $ M.I : chr "Metric" "Imperial" NA "Metric" ...}
\CommentTok{\#\textgreater{} $ Age : num 18.2 17.6 16.9 20.3 23.7 ...}
\FunctionTok{head}\NormalTok{(survey)}
\CommentTok{\#\textgreater{} Sex Wr.Hnd NW.Hnd W.Hnd Fold Pulse Clap Exer Smoke}
\CommentTok{\#\textgreater{} 1 Female 18.5 18.0 Right R on L 92 Left Some Never}
\CommentTok{\#\textgreater{} 2 Male 19.5 20.5 Left R on L 104 Left None Regul}
\CommentTok{\#\textgreater{} 3 Male 18.0 13.3 Right L on R 87 Neither None Occas}
\CommentTok{\#\textgreater{} 4 Male 18.8 18.9 Right R on L NA Neither None Never}
\CommentTok{\#\textgreater{} 5 Male 20.0 20.0 Right Neither 35 Right Some Never}
\CommentTok{\#\textgreater{} 6 Female 18.0 17.7 Right L on R 64 Right Some Never}
\CommentTok{\#\textgreater{} Height M.I Age}
\CommentTok{\#\textgreater{} 1 173.00 Metric 18.250}
\CommentTok{\#\textgreater{} 2 177.80 Imperial 17.583}
\CommentTok{\#\textgreater{} 3 NA \textless{}NA\textgreater{} 16.917}
\CommentTok{\#\textgreater{} 4 160.00 Metric 20.333}
\CommentTok{\#\textgreater{} 5 165.00 Metric 23.667}
\CommentTok{\#\textgreater{} 6 172.72 Imperial 21.000}
\FunctionTok{dim}\NormalTok{(survey)}
\CommentTok{\#\textgreater{} [1] 237 12}
\FunctionTok{ncol}\NormalTok{(survey)}
\CommentTok{\#\textgreater{} [1] 12}
\FunctionTok{nrow}\NormalTok{(survey)}
\CommentTok{\#\textgreater{} [1] 237}
\FunctionTok{names}\NormalTok{(survey)}
\CommentTok{\#\textgreater{} [1] "Sex" "Wr.Hnd" "NW.Hnd" "W.Hnd" "Fold" "Pulse" }
\CommentTok{\#\textgreater{} [7] "Clap" "Exer" "Smoke" "Height" "M.I" "Age"}
\FunctionTok{typeof}\NormalTok{(survey)}
\CommentTok{\#\textgreater{} [1] "list"}
\FunctionTok{class}\NormalTok{(survey)}
\CommentTok{\#\textgreater{} [1] "data.frame"}
\end{Highlighting}
\end{Shaded}

\hypertarget{data-type-conversions}{%
\subsection{Data type conversions}\label{data-type-conversions}}

As you have learned, in R you use numeric vectors to represent quantitative variables, and you use factors to represent categorical variables. In the data frame \texttt{gss.2016}, the variable \texttt{grass} is a character vector, but it should be a factor. R provides a set of functions to identify an object's data type and convert it to a different data type. You can use the function \texttt{factor()} to convert from character or numeric to factor.

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{str}\NormalTok{(gss}\FloatTok{.2016}\NormalTok{) }\CommentTok{\# grass is character}
\CommentTok{\#\textgreater{} \textquotesingle{}data.frame\textquotesingle{}: 2867 obs. of 2 variables:}
\CommentTok{\#\textgreater{} $ grass: chr "IAP" "LEGAL" "NOT LEGAL" "IAP" ...}
\CommentTok{\#\textgreater{} $ age : chr "47" "61" "72" "43" ...}
\NormalTok{gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{grass }\OtherTok{\textless{}{-}} \FunctionTok{factor}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{grass) }\CommentTok{\# convert}
\FunctionTok{str}\NormalTok{(gss}\FloatTok{.2016}\NormalTok{) }\CommentTok{\# grass is factor}
\CommentTok{\#\textgreater{} \textquotesingle{}data.frame\textquotesingle{}: 2867 obs. of 2 variables:}
\CommentTok{\#\textgreater{} $ grass: Factor w/ 4 levels "DK","IAP","LEGAL",..: 2 3 4 2 3 3 2 4 2 4 ...}
\CommentTok{\#\textgreater{} $ age : chr "47" "61" "72" "43" ...}
\end{Highlighting}
\end{Shaded}

The continuous variable \texttt{age} is also a character vector, but it should be numeric. What is the problem with the \texttt{age} variable? Use the \texttt{unique()} and \texttt{table()} functions.
\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{unique}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{age)}
\CommentTok{\#\textgreater{} [1] "47" "61" "72" "43" }
\CommentTok{\#\textgreater{} [5] "55" "53" "50" "23" }
\CommentTok{\#\textgreater{} [9] "45" "71" "33" "86" }
\CommentTok{\#\textgreater{} [13] "32" "60" "76" "56" }
\CommentTok{\#\textgreater{} [17] "62" "31" "58" "37" }
\CommentTok{\#\textgreater{} [21] "25" "22" "74" "75" }
\CommentTok{\#\textgreater{} [25] "68" "46" "35" "59" }
\CommentTok{\#\textgreater{} [29] "79" "40" "44" "36" }
\CommentTok{\#\textgreater{} [33] "70" "28" "20" "41" }
\CommentTok{\#\textgreater{} [37] "42" "57" "26" "51" }
\CommentTok{\#\textgreater{} [41] "39" "27" "30" "29" }
\CommentTok{\#\textgreater{} [45] "80" "49" "78" "52" }
\CommentTok{\#\textgreater{} [49] "66" "89 OR OLDER" "54" "48" }
\CommentTok{\#\textgreater{} [53] "81" "69" "21" "64" }
\CommentTok{\#\textgreater{} [57] "38" "65" "67" "84" }
\CommentTok{\#\textgreater{} [61] "34" "77" "19" NA }
\CommentTok{\#\textgreater{} [65] "83" "73" "63" "24" }
\CommentTok{\#\textgreater{} [69] "82" "85" "87" "18" }
\CommentTok{\#\textgreater{} [73] "88"}
\FunctionTok{table}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{age, }\AttributeTok{useNA =} \StringTok{"ifany"}\NormalTok{)}
\CommentTok{\#\textgreater{} }
\CommentTok{\#\textgreater{} 18 19 20 21 22 }
\CommentTok{\#\textgreater{} 7 33 26 33 44 }
\CommentTok{\#\textgreater{} 23 24 25 26 27 }
\CommentTok{\#\textgreater{} 49 35 56 42 58 }
\CommentTok{\#\textgreater{} 28 29 30 31 32 }
\CommentTok{\#\textgreater{} 42 56 54 57 42 }
\CommentTok{\#\textgreater{} 33 34 35 36 37 }
\CommentTok{\#\textgreater{} 54 49 56 52 58 }
\CommentTok{\#\textgreater{} 38 39 40 41 42 }
\CommentTok{\#\textgreater{} 44 42 46 36 50 }
\CommentTok{\#\textgreater{} 43 44 45 46 47 }
\CommentTok{\#\textgreater{} 45 52 27 45 55 }
\CommentTok{\#\textgreater{} 48 49 50 51 52 }
\CommentTok{\#\textgreater{} 46 41 48 49 65 }
\CommentTok{\#\textgreater{} 53 54 55 56 57 }
\CommentTok{\#\textgreater{} 60 53 48 48 70 }
\CommentTok{\#\textgreater{} 58 59 60 61 62 }
\CommentTok{\#\textgreater{} 67 58 53 56 56 }
\CommentTok{\#\textgreater{} 63 64 65 66 67 }
\CommentTok{\#\textgreater{} 43 34 44 47 49 }
\CommentTok{\#\textgreater{} 68 69 70 71 72 }
\CommentTok{\#\textgreater{} 43 42 32 27 26 }
\CommentTok{\#\textgreater{} 73 74 75 76 77 }
\CommentTok{\#\textgreater{} 22 24 19 25 23 }
\CommentTok{\#\textgreater{} 78 79 80 81 82 }
\CommentTok{\#\textgreater{} 26 21 25 21 11 }
\CommentTok{\#\textgreater{} 83 84 85 86 87 }
\CommentTok{\#\textgreater{} 22 11 11 12 9 }
\CommentTok{\#\textgreater{} 88 89 OR OLDER \textless{}NA\textgreater{} }
\CommentTok{\#\textgreater{} 3 22 10}
\end{Highlighting}
\end{Shaded}

\texttt{unique(x)} returns an object of the same type as \texttt{x}, but with only one copy of each duplicated element. \texttt{table(x)} returns the distinct values as well, together with the number of times each value of \texttt{x} occurs. Age appears to be measured in years up to age 88, and then \texttt{"89\ OR\ OLDER"} represents people who are 89 years old or older. Since \texttt{"89\ OR\ OLDER"} cannot be interpreted as a number, converting the age variable to numeric while it still contains \texttt{"89\ OR\ OLDER"} would turn those entries into \texttt{NA} (R warns that values were coerced to \texttt{NA}). Before converting \texttt{age} into a numeric variable, you should therefore first recode anyone who has a value of \texttt{"89\ OR\ OLDER"} to instead have a value of 89.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{age[gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{age }\SpecialCharTok{\%in\%} \StringTok{"89 OR OLDER"}\NormalTok{] }\OtherTok{\textless{}{-}} \StringTok{"89"} \CommentTok{\# recoding}
\NormalTok{gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{age }\OtherTok{\textless{}{-}} \FunctionTok{as.numeric}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{age) }\CommentTok{\# data type conversion}
\FunctionTok{str}\NormalTok{(gss}\FloatTok{.2016}\NormalTok{) }
\CommentTok{\#\textgreater{} \textquotesingle{}data.frame\textquotesingle{}: 2867 obs. of 2 variables:}
\CommentTok{\#\textgreater{} $ grass: Factor w/ 4 levels "DK","IAP","LEGAL",..: 2 3 4 2 3 3 2 4 2 4 ...}
\CommentTok{\#\textgreater{} $ age : num 47 61 72 43 55 53 50 23 45 71 ...}
\end{Highlighting}
\end{Shaded}

By now, the data frame \texttt{gss.2016} is in the desired structure. What about the data frame \texttt{survey}? As we mentioned, there are a few variables, namely \texttt{Sex}, \texttt{W.Hnd}, \texttt{Fold}, \texttt{Clap}, \texttt{Exer}, \texttt{Smoke}, and \texttt{M.I}, that are categorical, so you need to convert them to factors. We can use the \texttt{factor()} function:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{survey}\SpecialCharTok{$}\NormalTok{Sex }\OtherTok{\textless{}{-}} \FunctionTok{factor}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Sex)}
\NormalTok{survey}\SpecialCharTok{$}\NormalTok{W.Hnd }\OtherTok{\textless{}{-}} \FunctionTok{factor}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{W.Hnd)}
\NormalTok{survey}\SpecialCharTok{$}\NormalTok{Fold }\OtherTok{\textless{}{-}} \FunctionTok{factor}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Fold)}
\NormalTok{survey}\SpecialCharTok{$}\NormalTok{Clap }\OtherTok{\textless{}{-}} \FunctionTok{factor}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Clap)}
\NormalTok{survey}\SpecialCharTok{$}\NormalTok{Exer }\OtherTok{\textless{}{-}} \FunctionTok{factor}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Exer)}
\NormalTok{survey}\SpecialCharTok{$}\NormalTok{Smoke }\OtherTok{\textless{}{-}} \FunctionTok{factor}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Smoke)}
\NormalTok{survey}\SpecialCharTok{$}\NormalTok{M.I }\OtherTok{\textless{}{-}} \FunctionTok{factor}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{M.I)}
\FunctionTok{str}\NormalTok{(survey)}
\CommentTok{\#\textgreater{} \textquotesingle{}data.frame\textquotesingle{}: 237 obs. of 12 variables:}
\CommentTok{\#\textgreater{} $ Sex : Factor w/ 2 levels "Female","Male": 1 2 2 2 2 1 2 1 2 2 ...}
\CommentTok{\#\textgreater{} $ Wr.Hnd: num 18.5 19.5 18 18.8 20 18 17.7 17 20 18.5 ...}
\CommentTok{\#\textgreater{} $ NW.Hnd: num 18 20.5 13.3 18.9 20 17.7 17.7 17.3 19.5 18.5 ...}
\CommentTok{\#\textgreater{} $ W.Hnd : Factor w/ 2 levels "Left","Right": 2 1 2 2 2 2 2 2 2 2 ...}
\CommentTok{\#\textgreater{} $ Fold : Factor w/ 3 levels "L on R","Neither",..: 3 3 1 3 2 1 1 3 3 3 ...}
\CommentTok{\#\textgreater{} $ Pulse : num 92 104 87 NA 35 64 83 74 72 90 ...}
\CommentTok{\#\textgreater{} $ Clap : Factor w/ 3 levels "Left","Neither",..: 1 1 2 2 3 3 3 3 3 3 ...}
\CommentTok{\#\textgreater{} $ Exer : Factor w/ 3 levels "Freq","None",..: 3 2 2 2 3 3 1 1 3 3 ...}
\CommentTok{\#\textgreater{} $ Smoke : Factor w/ 4 levels "Heavy","Never",..: 2 4 3 2 2 2 2 2 2 2 ...}
\CommentTok{\#\textgreater{} $ Height: num 173 178 NA 160 165 ...}
\CommentTok{\#\textgreater{} $ M.I : Factor w/ 2 levels "Imperial","Metric": 2 1 NA 2 2 1 1 2 2 2 ...}
\CommentTok{\#\textgreater{} $ Age : num 18.2 17.6 16.9 20.3 23.7 ...}
\end{Highlighting}
\end{Shaded}

Our dataset \texttt{survey} now contains only numeric and factor variables. But two variables (\texttt{Exer} and \texttt{Smoke}) are ordinal categorical variables, so you need to check their levels. Sometimes it's useful to know the number of levels of a factor. The convenience function \texttt{nlevels()} extracts the number of levels from a factor:

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{nlevels}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Exer)}
\CommentTok{\#\textgreater{} [1] 3}
\end{Highlighting}
\end{Shaded}

To look at the levels of a factor, you use the \texttt{levels()} function. For example, to extract the factor levels of \texttt{Exer}, use the following:

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{levels}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Exer)}
\CommentTok{\#\textgreater{} [1] "Freq" "None" "Some"}
\end{Highlighting}
\end{Shaded}

As you can see, each student has an exercise status (None, Some, Freq), indicating how often the student exercises. Notice that in the output above the levels are ordered alphabetically. However, we need them in the order None, Some, Freq:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{survey}\SpecialCharTok{$}\NormalTok{Exer }\OtherTok{\textless{}{-}} \FunctionTok{factor}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Exer, }\AttributeTok{levels=}\FunctionTok{c}\NormalTok{(}\StringTok{"None"}\NormalTok{, }\StringTok{"Some"}\NormalTok{, }\StringTok{"Freq"}\NormalTok{))}
\FunctionTok{levels}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Exer)}
\CommentTok{\#\textgreater{} [1] "None" "Some" "Freq"}
\end{Highlighting}
\end{Shaded}

In R, there is a real practical advantage to ordering a factor's levels: many R functions respect the order of the levels and print results in the order that you expect. For example,

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{table}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Exer, }\AttributeTok{useNA =} \StringTok{"ifany"}\NormalTok{)}
\CommentTok{\#\textgreater{} }
\CommentTok{\#\textgreater{} None Some Freq }
\CommentTok{\#\textgreater{} 24 98 115}
\end{Highlighting}
\end{Shaded}

We need to order the levels of the \texttt{Smoke} variable as well.
\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{levels}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Smoke)}
\CommentTok{\#\textgreater{} [1] "Heavy" "Never" "Occas" "Regul"}
\FunctionTok{table}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Smoke, }\AttributeTok{useNA =} \StringTok{"ifany"}\NormalTok{)}
\CommentTok{\#\textgreater{} }
\CommentTok{\#\textgreater{} Heavy Never Occas Regul \textless{}NA\textgreater{} }
\CommentTok{\#\textgreater{} 11 189 19 17 1}
\NormalTok{survey}\SpecialCharTok{$}\NormalTok{Smoke }\OtherTok{\textless{}{-}} \FunctionTok{factor}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Smoke, }\AttributeTok{levels=}\FunctionTok{c}\NormalTok{(}\StringTok{"Never"}\NormalTok{, }\StringTok{"Occas"}\NormalTok{, }\StringTok{"Regul"}\NormalTok{,}\StringTok{"Heavy"}\NormalTok{))}
\FunctionTok{levels}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Smoke)}
\CommentTok{\#\textgreater{} [1] "Never" "Occas" "Regul" "Heavy"}
\FunctionTok{table}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Smoke, }\AttributeTok{useNA =} \StringTok{"ifany"}\NormalTok{)}
\CommentTok{\#\textgreater{} }
\CommentTok{\#\textgreater{} Never Occas Regul Heavy \textless{}NA\textgreater{} }
\CommentTok{\#\textgreater{} 189 19 17 11 1}
\end{Highlighting}
\end{Shaded}

\hypertarget{transformation}{%
\subsection{Transformation}\label{transformation}}

\hypertarget{identifying-and-treating-missing-values}{%
\subsection{Identifying and treating missing values}\label{identifying-and-treating-missing-values}}

In addition to making sure the variables used are of an appropriate type, it is also important to make sure that missing values are treated appropriately by R. In R, missing values are recorded as \texttt{NA}, which stands for not available. Researchers code missing values in many different ways when collecting and storing data. Some of the more common ways to denote missing values are the following:

\begin{itemize}
\tightlist
\item
  blank
\item
  777, -777, 888, -888, 999, -999, or something similar
\item
  a single period
\item
  -1
\item
  NULL.
\end{itemize}

Other responses, such as ``Don't know'' or ``Inapplicable,'' may sometimes be treated as missing or as response categories depending on what is most appropriate given the characteristics of the data and the analysis goals.

In the summary of the \texttt{gss.2016} data,

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{summary}\NormalTok{(gss}\FloatTok{.2016}\NormalTok{)}
\CommentTok{\#\textgreater{} grass age }
\CommentTok{\#\textgreater{} DK : 110 Min. :18.00 }
\CommentTok{\#\textgreater{} IAP : 911 1st Qu.:34.00 }
\CommentTok{\#\textgreater{} LEGAL :1126 Median :49.00 }
\CommentTok{\#\textgreater{} NOT LEGAL: 717 Mean :49.16 }
\CommentTok{\#\textgreater{} NA\textquotesingle{}s : 3 3rd Qu.:62.00 }
\CommentTok{\#\textgreater{} Max. :89.00 }
\CommentTok{\#\textgreater{} NA\textquotesingle{}s :10}
\end{Highlighting}
\end{Shaded}

the \texttt{grass} variable has five possible values: \texttt{DK} (don't know), \texttt{IAP} (inapplicable), \texttt{LEGAL}, \texttt{NOT\ LEGAL}, and \texttt{NA} (not available). The \texttt{DK}, \texttt{IAP}, and \texttt{NA} could all be considered missing values. However, R treats only \texttt{NA} as missing. Before conducting any analyses, the \texttt{DK} and \texttt{IAP} values could be converted to \texttt{NA} so that they are treated as missing. That is, the \texttt{grass} variable could be recoded so that these values are all \texttt{NA}. Note that \texttt{NA} is a reserved ``word'' in R.
In order to use \texttt{NA}, both letters must be uppercase (\texttt{Na} or \texttt{na} does not work), and there can be no quotation marks (R will treat \texttt{"NA"} as a character rather than a true missing value). There are many ways to recode variables in R. For example,

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{table}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{grass, }\AttributeTok{useNA =} \StringTok{"ifany"}\NormalTok{) }\CommentTok{\# before recoding}
\CommentTok{\#\textgreater{} }
\CommentTok{\#\textgreater{} DK IAP LEGAL NOT LEGAL \textless{}NA\textgreater{} }
\CommentTok{\#\textgreater{} 110 911 1126 717 3}
\FunctionTok{library}\NormalTok{(car)}
\NormalTok{gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{grass }\OtherTok{\textless{}{-}}\NormalTok{ car}\SpecialCharTok{::}\FunctionTok{recode}\NormalTok{(}\AttributeTok{var =}\NormalTok{ gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{grass, }\AttributeTok{recodes =} \StringTok{\textquotesingle{}c("DK", "IAP")=NA\textquotesingle{}}\NormalTok{)}
\FunctionTok{table}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{grass, }\AttributeTok{useNA =} \StringTok{"ifany"}\NormalTok{) }\CommentTok{\# after recoding}
\CommentTok{\#\textgreater{} }
\CommentTok{\#\textgreater{} LEGAL NOT LEGAL \textless{}NA\textgreater{} }
\CommentTok{\#\textgreater{} 1126 717 1024}
\end{Highlighting}
\end{Shaded}

\hypertarget{numeric-to-factor}{%
\subsection{Numeric to factor}\label{numeric-to-factor}}

In addition to the \texttt{age} and \texttt{grass} recoding, the final step of the plan is to create the age categories shown below. The \texttt{age} variable currently holds the age in years rather than age categories. The ages can be grouped into four categories:

\begin{itemize}
\tightlist
\item
  18-29
\item
  30-59
\item
  60-74
\item
  75+
\end{itemize}

The function \texttt{cut()} can be used to divide a continuous variable into categories by cutting it into pieces and adding a label to each piece.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{age.f }\OtherTok{\textless{}{-}} \FunctionTok{cut}\NormalTok{(}\AttributeTok{x =}\NormalTok{ gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{age, }\AttributeTok{breaks =} \FunctionTok{c}\NormalTok{(}\SpecialCharTok{{-}}\ConstantTok{Inf}\NormalTok{, }\DecValTok{29}\NormalTok{, }\DecValTok{59}\NormalTok{, }\DecValTok{74}\NormalTok{, }\ConstantTok{Inf}\NormalTok{),}
\AttributeTok{labels =} \FunctionTok{c}\NormalTok{(}\StringTok{"\textless{}30"}\NormalTok{, }\StringTok{"30{-}59"}\NormalTok{, }\StringTok{"60{-}74"}\NormalTok{, }\StringTok{"75+"}\NormalTok{ ))}
\FunctionTok{table}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{age.f, }\AttributeTok{useNA =} \StringTok{"ifany"}\NormalTok{)}
\CommentTok{\#\textgreater{} }
\CommentTok{\#\textgreater{} \textless{}30 30{-}59 60{-}74 75+ \textless{}NA\textgreater{} }
\CommentTok{\#\textgreater{} 481 1517 598 261 10}
\end{Highlighting}
\end{Shaded}

\texttt{cut()} takes a variable like \texttt{age} as the first argument. The second thing to add after the variable name is a vector made up of the breaks. Breaks specify the lower and upper limit of each category of values. The first entry is the lowest value of the first category, the second entry is the highest value of the first category, the third entry is the highest value of the second category, and so on. The first and last values in the vector are \texttt{-Inf} and \texttt{Inf}. These are negative infinity and positive infinity.
This is for convenience, rather than looking up the smallest and largest values of the variable \texttt{age}. It also makes the code more flexible in case there is a new data point with a smaller or larger value. The final thing to add is a vector made up of the labels for the categories, with each label inside quote marks.

\hypertarget{descriptive-statistics}{%
\section{Descriptive statistics}\label{descriptive-statistics}}

R has built-in functions for a large number of summary statistics. To illustrate the main R functions we will use the \texttt{survey} and \texttt{gss.2016} datasets. R has tons of packages for exploring a dataset, but we focus on the built-in possibilities and the \textbf{psych} and \textbf{DescTools} packages. Let us first see what kind of objects are included in \texttt{survey} and \texttt{gss.2016} by using the \texttt{summary()} function.

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{summary}\NormalTok{(gss}\FloatTok{.2016}\NormalTok{)}
\CommentTok{\#\textgreater{} grass age age.f }
\CommentTok{\#\textgreater{} LEGAL :1126 Min. :18.00 \textless{}30 : 481 }
\CommentTok{\#\textgreater{} NOT LEGAL: 717 1st Qu.:34.00 30{-}59:1517 }
\CommentTok{\#\textgreater{} NA\textquotesingle{}s :1024 Median :49.00 60{-}74: 598 }
\CommentTok{\#\textgreater{} Mean :49.16 75+ : 261 }
\CommentTok{\#\textgreater{} 3rd Qu.:62.00 NA\textquotesingle{}s : 10 }
\CommentTok{\#\textgreater{} Max. :89.00 }
\CommentTok{\#\textgreater{} NA\textquotesingle{}s :10}
\FunctionTok{summary}\NormalTok{(survey)}
\CommentTok{\#\textgreater{} Sex Wr.Hnd NW.Hnd W.Hnd }
\CommentTok{\#\textgreater{} Female:118 Min. :13.00 Min. :12.50 Left : 18 }
\CommentTok{\#\textgreater{} Male :118 1st Qu.:17.50 1st Qu.:17.50 Right:218 }
\CommentTok{\#\textgreater{} NA\textquotesingle{}s : 1 Median :18.50 Median :18.50 NA\textquotesingle{}s : 1 }
\CommentTok{\#\textgreater{} Mean :18.67 Mean :18.58 }
\CommentTok{\#\textgreater{} 3rd Qu.:19.80 3rd Qu.:19.73 }
\CommentTok{\#\textgreater{} Max. :23.20 Max. :23.50 }
\CommentTok{\#\textgreater{} NA\textquotesingle{}s :1 NA\textquotesingle{}s :1 }
\CommentTok{\#\textgreater{} Fold Pulse Clap Exer }
\CommentTok{\#\textgreater{} L on R : 99 Min. : 35.00 Left : 39 None: 24 }
\CommentTok{\#\textgreater{} Neither: 18 1st Qu.: 66.00 Neither: 50 Some: 98 }
\CommentTok{\#\textgreater{} R on L :120 Median : 72.50 Right :147 Freq:115 }
\CommentTok{\#\textgreater{} Mean : 74.15 NA\textquotesingle{}s : 1 }
\CommentTok{\#\textgreater{} 3rd Qu.: 80.00 }
\CommentTok{\#\textgreater{} Max. :104.00 }
\CommentTok{\#\textgreater{} NA\textquotesingle{}s :45 }
\CommentTok{\#\textgreater{} Smoke Height M.I Age }
\CommentTok{\#\textgreater{} Never:189 Min. :150.0 Imperial: 68 Min. :16.75 }
\CommentTok{\#\textgreater{} Occas: 19 1st Qu.:165.0 Metric :141 1st Qu.:17.67 }
\CommentTok{\#\textgreater{} Regul: 17 Median :171.0 NA\textquotesingle{}s : 28 Median :18.58 }
\CommentTok{\#\textgreater{} Heavy: 11 Mean :172.4 Mean :20.37 }
\CommentTok{\#\textgreater{} NA\textquotesingle{}s : 1 3rd Qu.:180.0 3rd Qu.:20.17 }
\CommentTok{\#\textgreater{} Max. :200.0 Max. :73.00 }
\CommentTok{\#\textgreater{} NA\textquotesingle{}s :28}
\end{Highlighting}
\end{Shaded}

The type of descriptive statistics we use depends on whether the data is numeric (continuous) or categorical, so we will look at each case separately next.

\hypertarget{measurements}{%
\subsection{Measurements}\label{measurements}}

Recall that for numeric variables, we are usually interested in measuring central tendency and spread to get a sense of the data.
Suppose that we are interested in the \texttt{Height} column, in which students' heights are recorded. From the \texttt{summary(survey)} table above we know that this variable is indeed numeric, and therefore we can measure its central tendency and spread, as the following code does:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Central Tendency }
\FunctionTok{mean}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Height, }\AttributeTok{na.rm =}\NormalTok{ T) }\CommentTok{\# Mean}
\CommentTok{\#\textgreater{} [1] 172.3809}
\FunctionTok{median}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Height, }\AttributeTok{na.rm =}\NormalTok{ T) }\CommentTok{\# Median}
\CommentTok{\#\textgreater{} [1] 171}
\CommentTok{\# Spread}
\FunctionTok{min}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Height, }\AttributeTok{na.rm =}\NormalTok{ T) }\CommentTok{\# Minimum}
\CommentTok{\#\textgreater{} [1] 150}
\FunctionTok{max}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Height, }\AttributeTok{na.rm =}\NormalTok{ T) }\CommentTok{\# Maximum}
\CommentTok{\#\textgreater{} [1] 200}
\FunctionTok{range}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Height, }\AttributeTok{na.rm =}\NormalTok{ T) }\CommentTok{\# Range}
\CommentTok{\#\textgreater{} [1] 150 200}
\FunctionTok{IQR}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Height, }\AttributeTok{na.rm =}\NormalTok{ T) }\CommentTok{\# IQR}
\CommentTok{\#\textgreater{} [1] 15}
\FunctionTok{var}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Height, }\AttributeTok{na.rm =}\NormalTok{ T) }\CommentTok{\# Variance}
\CommentTok{\#\textgreater{} [1] 96.9738}
\FunctionTok{sd}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Height, }\AttributeTok{na.rm =}\NormalTok{ T) }\CommentTok{\# Standard Deviation}
\CommentTok{\#\textgreater{} [1] 9.847528}
\end{Highlighting}
\end{Shaded}

All of these functions have optional arguments to address various complications that your data might have. For example, if your data include some NAs, then instead of using \texttt{mean(survey\$Height)} you should use \texttt{mean(survey\$Height,\ na.rm\ =\ T)}, which tells R to ignore NAs in the data.
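As a quick illustration of why \texttt{na.rm} matters here: \texttt{Height} contains missing values, so the plain call returns \texttt{NA}, while adding \texttt{na.rm\ =\ T} reproduces the mean reported above.

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{mean}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Height) }\CommentTok{\# the missing values propagate}
\CommentTok{\#\textgreater{} [1] NA}
\FunctionTok{mean}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Height, }\AttributeTok{na.rm =}\NormalTok{ T) }\CommentTok{\# NAs dropped before computing}
\CommentTok{\#\textgreater{} [1] 172.3809}
\end{Highlighting}
\end{Shaded}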
Please study carefully the following codes and outputs: \begin{Shaded} \begin{Highlighting}[] \CommentTok{\#install.packages("psych")} \FunctionTok{library}\NormalTok{(psych)} \FunctionTok{describe}\NormalTok{(gss}\FloatTok{.2016}\NormalTok{)} \CommentTok{\#\textgreater{} vars n mean sd median trimmed mad min max range} \CommentTok{\#\textgreater{} grass* 1 1843 1.39 0.49 1 1.36 0.00 1 2 1} \CommentTok{\#\textgreater{} age 2 2857 49.16 17.69 49 48.62 20.76 18 89 71} \CommentTok{\#\textgreater{} age.f* 3 2857 2.22 0.83 2 2.17 0.00 1 4 3} \CommentTok{\#\textgreater{} skew kurtosis se} \CommentTok{\#\textgreater{} grass* 0.45 {-}1.79 0.01} \CommentTok{\#\textgreater{} age 0.17 {-}0.90 0.33} \CommentTok{\#\textgreater{} age.f* 0.51 {-}0.16 0.02} \FunctionTok{describe}\NormalTok{(survey)} \CommentTok{\#\textgreater{} vars n mean sd median trimmed mad min max} \CommentTok{\#\textgreater{} Sex* 1 236 1.50 0.50 1.50 1.50 0.74 1.00 2.0} \CommentTok{\#\textgreater{} Wr.Hnd 2 236 18.67 1.88 18.50 18.61 1.48 13.00 23.2} \CommentTok{\#\textgreater{} NW.Hnd 3 236 18.58 1.97 18.50 18.55 1.63 12.50 23.5} \CommentTok{\#\textgreater{} W.Hnd* 4 236 1.92 0.27 2.00 2.00 0.00 1.00 2.0} \CommentTok{\#\textgreater{} Fold* 5 237 2.09 0.96 3.00 2.11 0.00 1.00 3.0} \CommentTok{\#\textgreater{} Pulse 6 192 74.15 11.69 72.50 74.02 11.12 35.00 104.0} \CommentTok{\#\textgreater{} Clap* 7 236 2.46 0.76 3.00 2.57 0.00 1.00 3.0} \CommentTok{\#\textgreater{} Exer* 8 237 2.38 0.66 2.00 2.48 1.48 1.00 3.0} \CommentTok{\#\textgreater{} Smoke* 9 236 1.36 0.81 1.00 1.15 0.00 1.00 4.0} \CommentTok{\#\textgreater{} Height 10 209 172.38 9.85 171.00 172.19 10.08 150.00 200.0} \CommentTok{\#\textgreater{} M.I* 11 209 1.67 0.47 2.00 1.72 0.00 1.00 2.0} \CommentTok{\#\textgreater{} Age 12 237 20.37 6.47 18.58 18.99 1.61 16.75 73.0} \CommentTok{\#\textgreater{} range skew kurtosis se} \CommentTok{\#\textgreater{} Sex* 1.00 0.00 {-}2.01 0.03} \CommentTok{\#\textgreater{} Wr.Hnd 10.20 0.18 0.30 0.12} \CommentTok{\#\textgreater{} NW.Hnd 11.00 0.02 0.44 0.13} \CommentTok{\#\textgreater{} W.Hnd* 1.00 {-}3.17 8.10 0.02} \CommentTok{\#\textgreater{} Fold* 2.00 {-}0.18 {-}1.89 0.06} \CommentTok{\#\textgreater{} Pulse 69.00 {-}0.02 0.33 0.84} \CommentTok{\#\textgreater{} Clap* 2.00 {-}0.98 {-}0.60 0.05} \CommentTok{\#\textgreater{} Exer* 2.00 {-}0.61 {-}0.68 0.04} \CommentTok{\#\textgreater{} Smoke* 3.00 2.15 3.45 0.05} \CommentTok{\#\textgreater{} Height 50.00 0.22 {-}0.44 0.68} \CommentTok{\#\textgreater{} M.I* 1.00 {-}0.74 {-}1.46 0.03} \CommentTok{\#\textgreater{} Age 56.25 5.16 33.47 0.42} \CommentTok{\#install.packages("DescTools")} \FunctionTok{library}\NormalTok{(DescTools)} \FunctionTok{Desc}\NormalTok{(gss}\FloatTok{.2016}\NormalTok{, }\AttributeTok{plot=}\NormalTok{F)} \CommentTok{\#\textgreater{} {-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-} } \CommentTok{\#\textgreater{} Describe gss.2016 (data.frame):} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} data frame: 2867 obs. 
of 3 variables} \CommentTok{\#\textgreater{} 1836 complete cases (64.0\%)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} Nr ColName Class NAs Levels } \CommentTok{\#\textgreater{} 1 grass factor 1024 (35.7\%) (2): 1{-}LEGAL, 2{-}NOT } \CommentTok{\#\textgreater{} LEGAL } \CommentTok{\#\textgreater{} 2 age numeric 10 (0.3\%) } \CommentTok{\#\textgreater{} 3 age.f factor 10 (0.3\%) (4): 1{-}\textless{}30, 2{-}30{-}59,} \CommentTok{\#\textgreater{} 3{-}60{-}74, 4{-}75+ } \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} {-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-} } \CommentTok{\#\textgreater{} 1 {-} grass (factor {-} dichotomous)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} length n NAs unique} \CommentTok{\#\textgreater{} 2\textquotesingle{}867 1\textquotesingle{}843 1\textquotesingle{}024 2} \CommentTok{\#\textgreater{} 64.3\% 35.7\% } \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} freq perc lci.95 uci.95\textquotesingle{}} \CommentTok{\#\textgreater{} LEGAL 1\textquotesingle{}126 61.1\% 58.8\% 63.3\%} \CommentTok{\#\textgreater{} NOT LEGAL 717 38.9\% 36.7\% 41.2\%} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} \textquotesingle{} 95\%{-}CI (Wilson)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} {-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-} } \CommentTok{\#\textgreater{} 2 {-} age (numeric)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} length n NAs unique 0s mean meanCI\textquotesingle{}} \CommentTok{\#\textgreater{} 2\textquotesingle{}867 2\textquotesingle{}857 10 72 0 49.16 48.51} \CommentTok{\#\textgreater{} 99.7\% 0.3\% 0.0\% 49.80} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} .05 .10 .25 median .75 .90 .95} \CommentTok{\#\textgreater{} 22.80 26.00 34.00 49.00 62.00 73.00 80.00} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} range sd vcoef mad IQR skew kurt} \CommentTok{\#\textgreater{} 71.00 17.69 0.36 20.76 28.00 0.17 {-}0.90} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} lowest : 18.0 (7), 19.0 (33), 20.0 (26), 21.0 (33), 22.0 (44)} \CommentTok{\#\textgreater{} highest: 85.0 (11), 86.0 (12), 87.0 (9), 88.0 (3), 89.0 (22)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} \textquotesingle{} 95\%{-}CI (classic)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} {-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-} } \CommentTok{\#\textgreater{} 3 {-} age.f (factor)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} length n NAs unique levels dupes} \CommentTok{\#\textgreater{} 2\textquotesingle{}867 2\textquotesingle{}857 10 4 4 y} \CommentTok{\#\textgreater{} 99.7\% 0.3\% } \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} level freq perc cumfreq cumperc} \CommentTok{\#\textgreater{} 1 30{-}59 1\textquotesingle{}517 53.1\% 1\textquotesingle{}517 53.1\%} \CommentTok{\#\textgreater{} 2 60{-}74 598 20.9\% 2\textquotesingle{}115 74.0\%} \CommentTok{\#\textgreater{} 3 \textless{}30 481 16.8\% 2\textquotesingle{}596 90.9\%} \CommentTok{\#\textgreater{} 4 75+ 261 9.1\% 
2\textquotesingle{}857 100.0\%} \FunctionTok{Desc}\NormalTok{(survey, }\AttributeTok{plot=}\NormalTok{F)} \CommentTok{\#\textgreater{} {-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-} } \CommentTok{\#\textgreater{} Describe survey (data.frame):} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} data frame: 237 obs. of 12 variables} \CommentTok{\#\textgreater{} 168 complete cases (70.9\%)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} Nr ColName Class NAs Levels } \CommentTok{\#\textgreater{} 1 Sex factor 1 (0.4\%) (2): 1{-}Female, 2{-}Male } \CommentTok{\#\textgreater{} 2 Wr.Hnd numeric 1 (0.4\%) } \CommentTok{\#\textgreater{} 3 NW.Hnd numeric 1 (0.4\%) } \CommentTok{\#\textgreater{} 4 W.Hnd factor 1 (0.4\%) (2): 1{-}Left, 2{-}Right } \CommentTok{\#\textgreater{} 5 Fold factor . (3): 1{-}L on R, } \CommentTok{\#\textgreater{} 2{-}Neither, 3{-}R on L } \CommentTok{\#\textgreater{} 6 Pulse numeric 45 (19.0\%) } \CommentTok{\#\textgreater{} 7 Clap factor 1 (0.4\%) (3): 1{-}Left, 2{-}Neither,} \CommentTok{\#\textgreater{} 3{-}Right } \CommentTok{\#\textgreater{} 8 Exer factor . (3): 1{-}None, 2{-}Some, } \CommentTok{\#\textgreater{} 3{-}Freq } \CommentTok{\#\textgreater{} 9 Smoke factor 1 (0.4\%) (4): 1{-}Never, 2{-}Occas, } \CommentTok{\#\textgreater{} 3{-}Regul, 4{-}Heavy } \CommentTok{\#\textgreater{} 10 Height numeric 28 (11.8\%) } \CommentTok{\#\textgreater{} 11 M.I factor 28 (11.8\%) (2): 1{-}Imperial, } \CommentTok{\#\textgreater{} 2{-}Metric } \CommentTok{\#\textgreater{} 12 Age numeric . } \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} {-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-} } \CommentTok{\#\textgreater{} 1 {-} Sex (factor {-} dichotomous)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} length n NAs unique} \CommentTok{\#\textgreater{} 237 236 1 2} \CommentTok{\#\textgreater{} 99.6\% 0.4\% } \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} freq perc lci.95 uci.95\textquotesingle{}} \CommentTok{\#\textgreater{} Female 118 50.0\% 43.7\% 56.3\%} \CommentTok{\#\textgreater{} Male 118 50.0\% 43.7\% 56.3\%} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} \textquotesingle{} 95\%{-}CI (Wilson)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} {-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-} } \CommentTok{\#\textgreater{} 2 {-} Wr.Hnd (numeric)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} length n NAs unique 0s mean meanCI\textquotesingle{}} \CommentTok{\#\textgreater{} 237 236 1 60 0 18.67 18.43} \CommentTok{\#\textgreater{} 99.6\% 0.4\% 0.0\% 18.91} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} .05 .10 .25 median .75 .90 .95} \CommentTok{\#\textgreater{} 16.00 16.50 17.50 18.50 19.80 21.15 22.05} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} range sd vcoef mad IQR skew kurt} \CommentTok{\#\textgreater{} 10.20 1.88 0.10 1.48 2.30 0.18 0.30} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} lowest : 13.0 (2), 14.0 (2), 15.0, 15.4, 15.5 (2)} \CommentTok{\#\textgreater{} highest: 22.5 (4), 22.8, 23.0 (2), 
23.1, 23.2 (3)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} heap(?): remarkable frequency (9.7\%) for the mode(s) (= 17.5)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} \textquotesingle{} 95\%{-}CI (classic)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} {-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-} } \CommentTok{\#\textgreater{} 3 {-} NW.Hnd (numeric)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} length n NAs unique 0s mean meanCI\textquotesingle{}} \CommentTok{\#\textgreater{} 237 236 1 68 0 18.583 18.330} \CommentTok{\#\textgreater{} 99.6\% 0.4\% 0.0\% 18.835} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} .05 .10 .25 median .75 .90 .95} \CommentTok{\#\textgreater{} 15.500 16.300 17.500 18.500 19.725 21.000 22.225} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} range sd vcoef mad IQR skew kurt} \CommentTok{\#\textgreater{} 11.000 1.967 0.106 1.631 2.225 0.024 0.441} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} lowest : 12.5, 13.0 (2), 13.3, 13.5, 15.0} \CommentTok{\#\textgreater{} highest: 22.7, 23.0, 23.2 (2), 23.3, 23.5} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} heap(?): remarkable frequency (8.9\%) for the mode(s) (= 18)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} \textquotesingle{} 95\%{-}CI (classic)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} {-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-} } \CommentTok{\#\textgreater{} 4 {-} W.Hnd (factor {-} dichotomous)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} length n NAs unique} \CommentTok{\#\textgreater{} 237 236 1 2} \CommentTok{\#\textgreater{} 99.6\% 0.4\% } \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} freq perc lci.95 uci.95\textquotesingle{}} \CommentTok{\#\textgreater{} Left 18 7.6\% 4.9\% 11.7\%} \CommentTok{\#\textgreater{} Right 218 92.4\% 88.3\% 95.1\%} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} \textquotesingle{} 95\%{-}CI (Wilson)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} {-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-} } \CommentTok{\#\textgreater{} 5 {-} Fold (factor)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} length n NAs unique levels dupes} \CommentTok{\#\textgreater{} 237 237 0 3 3 y} \CommentTok{\#\textgreater{} 100.0\% 0.0\% } \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} level freq perc cumfreq cumperc} \CommentTok{\#\textgreater{} 1 R on L 120 50.6\% 120 50.6\%} \CommentTok{\#\textgreater{} 2 L on R 99 41.8\% 219 92.4\%} \CommentTok{\#\textgreater{} 3 Neither 18 7.6\% 237 100.0\%} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} {-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-} } \CommentTok{\#\textgreater{} 6 {-} Pulse (numeric)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} length n NAs unique 0s mean meanCI\textquotesingle{}} \CommentTok{\#\textgreater{} 237 192 45 43 0 74.15 72.49} 
\CommentTok{\#\textgreater{} 81.0\% 19.0\% 0.0\% 75.81} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} .05 .10 .25 median .75 .90 .95} \CommentTok{\#\textgreater{} 59.55 60.00 66.00 72.50 80.00 90.00 92.00} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} range sd vcoef mad IQR skew kurt} \CommentTok{\#\textgreater{} 69.00 11.69 0.16 11.12 14.00 {-}0.02 0.33} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} lowest : 35.0, 40.0, 48.0 (2), 50.0 (2), 54.0} \CommentTok{\#\textgreater{} highest: 96.0 (3), 97.0, 98.0, 100.0 (2), 104.0 (2)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} heap(?): remarkable frequency (9.4\%) for the mode(s) (= 80)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} \textquotesingle{} 95\%{-}CI (classic)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} {-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-} } \CommentTok{\#\textgreater{} 7 {-} Clap (factor)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} length n NAs unique levels dupes} \CommentTok{\#\textgreater{} 237 236 1 3 3 y} \CommentTok{\#\textgreater{} 99.6\% 0.4\% } \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} level freq perc cumfreq cumperc} \CommentTok{\#\textgreater{} 1 Right 147 62.3\% 147 62.3\%} \CommentTok{\#\textgreater{} 2 Neither 50 21.2\% 197 83.5\%} \CommentTok{\#\textgreater{} 3 Left 39 16.5\% 236 100.0\%} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} {-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-} } \CommentTok{\#\textgreater{} 8 {-} Exer (factor)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} length n NAs unique levels dupes} \CommentTok{\#\textgreater{} 237 237 0 3 3 y} \CommentTok{\#\textgreater{} 100.0\% 0.0\% } \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} level freq perc cumfreq cumperc} \CommentTok{\#\textgreater{} 1 Freq 115 48.5\% 115 48.5\%} \CommentTok{\#\textgreater{} 2 Some 98 41.4\% 213 89.9\%} \CommentTok{\#\textgreater{} 3 None 24 10.1\% 237 100.0\%} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} {-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-} } \CommentTok{\#\textgreater{} 9 {-} Smoke (factor)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} length n NAs unique levels dupes} \CommentTok{\#\textgreater{} 237 236 1 4 4 y} \CommentTok{\#\textgreater{} 99.6\% 0.4\% } \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} level freq perc cumfreq cumperc} \CommentTok{\#\textgreater{} 1 Never 189 80.1\% 189 80.1\%} \CommentTok{\#\textgreater{} 2 Occas 19 8.1\% 208 88.1\%} \CommentTok{\#\textgreater{} 3 Regul 17 7.2\% 225 95.3\%} \CommentTok{\#\textgreater{} 4 Heavy 11 4.7\% 236 100.0\%} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} {-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-} } \CommentTok{\#\textgreater{} 10 {-} Height (numeric)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} length n NAs unique 0s mean meanCI\textquotesingle{}} 
\CommentTok{\#\textgreater{} 237 209 28 67 0 172.38 171.04} \CommentTok{\#\textgreater{} 88.2\% 11.8\% 0.0\% 173.72} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} .05 .10 .25 median .75 .90 .95} \CommentTok{\#\textgreater{} 157.00 160.00 165.00 171.00 180.00 185.42 189.60} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} range sd vcoef mad IQR skew kurt} \CommentTok{\#\textgreater{} 50.00 9.85 0.06 10.08 15.00 0.22 {-}0.44} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} lowest : 150.0, 152.0, 152.4, 153.5, 154.94 (2)} \CommentTok{\#\textgreater{} highest: 191.8, 193.04, 195.0, 196.0, 200.0} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} \textquotesingle{} 95\%{-}CI (classic)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} {-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-} } \CommentTok{\#\textgreater{} 11 {-} M.I (factor {-} dichotomous)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} length n NAs unique} \CommentTok{\#\textgreater{} 237 209 28 2} \CommentTok{\#\textgreater{} 88.2\% 11.8\% } \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} freq perc lci.95 uci.95\textquotesingle{}} \CommentTok{\#\textgreater{} Imperial 68 32.5\% 26.5\% 39.2\%} \CommentTok{\#\textgreater{} Metric 141 67.5\% 60.8\% 73.5\%} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} \textquotesingle{} 95\%{-}CI (Wilson)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} {-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-} } \CommentTok{\#\textgreater{} 12 {-} Age (numeric)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} length n NAs unique 0s mean meanCI\textquotesingle{}} \CommentTok{\#\textgreater{} 237 237 0 88 0 20.3745 19.5460} \CommentTok{\#\textgreater{} 100.0\% 0.0\% 0.0\% 21.2030} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} .05 .10 .25 median .75 .90 .95} \CommentTok{\#\textgreater{} 17.0830 17.2168 17.6670 18.5830 20.1670 23.5830 30.6836} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} range sd vcoef mad IQR skew kurt} \CommentTok{\#\textgreater{} 56.2500 6.4743 0.3178 1.6057 2.5000 5.1630 33.4720} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} lowest : 16.75, 16.917 (3), 17.0 (2), 17.083 (7), 17.167 (11)} \CommentTok{\#\textgreater{} highest: 41.583, 43.833, 44.25, 70.417, 73.0} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} \textquotesingle{} 95\%{-}CI (classic)} \CommentTok{\# measurements for groups} \FunctionTok{describeBy}\NormalTok{(}\AttributeTok{x =}\NormalTok{ survey}\SpecialCharTok{$}\NormalTok{Wr.Hnd, }\AttributeTok{group =}\NormalTok{ survey}\SpecialCharTok{$}\NormalTok{Sex, }\AttributeTok{mat=}\NormalTok{T)} \CommentTok{\#\textgreater{} item group1 vars n mean sd median trimmed} \CommentTok{\#\textgreater{} X11 1 Female 1 118 17.59576 1.314768 17.5 17.64479} \CommentTok{\#\textgreater{} X12 2 Male 1 117 19.74188 1.750775 19.5 19.72737} \CommentTok{\#\textgreater{} mad min max range skew kurtosis se} \CommentTok{\#\textgreater{} X11 1.18608 13 20.8 7.8 {-}0.65369868 1.59655733 0.1210342} \CommentTok{\#\textgreater{} X12 1.48260 14 23.2 9.2 {-}0.05094141 0.01581485 0.1618592} 
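\CommentTok{\# A base{-}R aside (a quick sketch): the same group means can be computed}
\CommentTok{\# with tapply(); the values should match the describeBy() output above.}
\FunctionTok{tapply}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Wr.Hnd, survey}\SpecialCharTok{$}\NormalTok{Sex, mean, }\AttributeTok{na.rm =}\NormalTok{ T)}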
\FunctionTok{Desc}\NormalTok{(Wr.Hnd}\SpecialCharTok{\textasciitilde{}}\NormalTok{Sex, }\AttributeTok{data=}\NormalTok{survey, }\AttributeTok{plot=}\NormalTok{F)} \CommentTok{\#\textgreater{} {-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-} } \CommentTok{\#\textgreater{} Wr.Hnd \textasciitilde{} Sex (survey)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} Summary: } \CommentTok{\#\textgreater{} n pairs: 237, valid: 235 (99.2\%), missings: 2 (0.8\%), groups: 2} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} Female Male} \CommentTok{\#\textgreater{} mean 17.596 19.742} \CommentTok{\#\textgreater{} median 17.500 19.500} \CommentTok{\#\textgreater{} sd 1.315 1.751} \CommentTok{\#\textgreater{} IQR 1.500 2.500} \CommentTok{\#\textgreater{} n 118 117} \CommentTok{\#\textgreater{} np 50.213\% 49.787\%} \CommentTok{\#\textgreater{} NAs 0 1} \CommentTok{\#\textgreater{} 0s 0 0} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} Kruskal{-}Wallis rank sum test:} \CommentTok{\#\textgreater{} Kruskal{-}Wallis chi{-}squared = 83.878, df = 1, p{-}value \textless{} 2.2e{-}16} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} Warning:} \CommentTok{\#\textgreater{} Grouping variable contains 1 NAs (0.422\%).} \end{Highlighting} \end{Shaded} \hypertarget{tables}{% \subsection{Tables}\label{tables}} For categorical variables, counts and percentages can be used to summarize data: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{table}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{grass, }\AttributeTok{useNA =} \StringTok{"ifany"}\NormalTok{)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} LEGAL NOT LEGAL \textless{}NA\textgreater{} } \CommentTok{\#\textgreater{} 1126 717 1024} \FunctionTok{table}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{age.f, }\AttributeTok{useNA =} \StringTok{"ifany"}\NormalTok{)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} \textless{}30 30{-}59 60{-}74 75+ \textless{}NA\textgreater{} } \CommentTok{\#\textgreater{} 481 1517 598 261 10} \FunctionTok{table}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Sex, }\AttributeTok{useNA =} \StringTok{"ifany"}\NormalTok{)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} Female Male \textless{}NA\textgreater{} } \CommentTok{\#\textgreater{} 118 118 1} \FunctionTok{prop.table}\NormalTok{(}\FunctionTok{table}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{grass, }\AttributeTok{useNA =} \StringTok{"ifany"}\NormalTok{))} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} LEGAL NOT LEGAL \textless{}NA\textgreater{} } \CommentTok{\#\textgreater{} 0.3927450 0.2500872 0.3571678} \FunctionTok{prop.table}\NormalTok{(}\FunctionTok{table}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{age.f, }\AttributeTok{useNA =} \StringTok{"ifany"}\NormalTok{))} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} \textless{}30 30{-}59 60{-}74 75+ \textless{}NA\textgreater{} } \CommentTok{\#\textgreater{} 0.167771189 0.529124520 0.208580398 0.091035926 0.003487967} \FunctionTok{prop.table}\NormalTok{(}\FunctionTok{table}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Sex, }\AttributeTok{useNA =} \StringTok{"ifany"}\NormalTok{))} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} Female Male 
\textless{}NA\textgreater{} } \CommentTok{\#\textgreater{} 0.497890295 0.497890295 0.004219409} \FunctionTok{Desc}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{grass, }\AttributeTok{plot=}\NormalTok{F)} \CommentTok{\#\textgreater{} {-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-} } \CommentTok{\#\textgreater{} gss.2016$grass (factor {-} dichotomous)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} length n NAs unique} \CommentTok{\#\textgreater{} 2\textquotesingle{}867 1\textquotesingle{}843 1\textquotesingle{}024 2} \CommentTok{\#\textgreater{} 64.3\% 35.7\% } \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} freq perc lci.95 uci.95\textquotesingle{}} \CommentTok{\#\textgreater{} LEGAL 1\textquotesingle{}126 61.1\% 58.8\% 63.3\%} \CommentTok{\#\textgreater{} NOT LEGAL 717 38.9\% 36.7\% 41.2\%} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} \textquotesingle{} 95\%{-}CI (Wilson)} \CommentTok{\# 2D tables} \FunctionTok{table}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{grass, gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{age.f, }\AttributeTok{useNA =} \StringTok{"ifany"}\NormalTok{)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} \textless{}30 30{-}59 60{-}74 75+ \textless{}NA\textgreater{}} \CommentTok{\#\textgreater{} LEGAL 237 625 213 48 3} \CommentTok{\#\textgreater{} NOT LEGAL 95 364 151 103 4} \CommentTok{\#\textgreater{} \textless{}NA\textgreater{} 149 528 234 110 3} \FunctionTok{Desc}\NormalTok{(grass}\SpecialCharTok{\textasciitilde{}}\NormalTok{age.f, }\AttributeTok{data=}\NormalTok{gss}\FloatTok{.2016}\NormalTok{, }\AttributeTok{plot=}\NormalTok{F)} \CommentTok{\#\textgreater{} {-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-} } \CommentTok{\#\textgreater{} grass \textasciitilde{} age.f (gss.2016)} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} Summary: } \CommentTok{\#\textgreater{} n: 1\textquotesingle{}836, rows: 2, columns: 4} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} Pearson\textquotesingle{}s Chi{-}squared test:} \CommentTok{\#\textgreater{} X{-}squared = 72.253, df = 3, p{-}value = 1.405e{-}15} \CommentTok{\#\textgreater{} Log likelihood ratio (G{-}test) test of independence:} \CommentTok{\#\textgreater{} G = 71.218, X{-}squared df = 3, p{-}value = 2.331e{-}15} \CommentTok{\#\textgreater{} Mantel{-}Haenszel Chi{-}squared:} \CommentTok{\#\textgreater{} X{-}squared = 59.423, df = 1, p{-}value = 1.271e{-}14} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} Phi{-}Coefficient 0.198} \CommentTok{\#\textgreater{} Contingency Coeff. 
0.195} \CommentTok{\#\textgreater{} Cramer\textquotesingle{}s V 0.198} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} age.f \textless{}30 30{-}59 60{-}74 75+ Sum} \CommentTok{\#\textgreater{} grass } \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} LEGAL freq 237 625 213 48 1\textquotesingle{}123} \CommentTok{\#\textgreater{} perc 12.9\% 34.0\% 11.6\% 2.6\% 61.2\%} \CommentTok{\#\textgreater{} p.row 21.1\% 55.7\% 19.0\% 4.3\% .} \CommentTok{\#\textgreater{} p.col 71.4\% 63.2\% 58.5\% 31.8\% .} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} NOT LEGAL freq 95 364 151 103 713} \CommentTok{\#\textgreater{} perc 5.2\% 19.8\% 8.2\% 5.6\% 38.8\%} \CommentTok{\#\textgreater{} p.row 13.3\% 51.1\% 21.2\% 14.4\% .} \CommentTok{\#\textgreater{} p.col 28.6\% 36.8\% 41.5\% 68.2\% .} \CommentTok{\#\textgreater{} } \CommentTok{\#\textgreater{} Sum freq 332 989 364 151 1\textquotesingle{}836} \CommentTok{\#\textgreater{} perc 18.1\% 53.9\% 19.8\% 8.2\% 100.0\%} \CommentTok{\#\textgreater{} p.row . . . . .} \CommentTok{\#\textgreater{} p.col . . . . .} \CommentTok{\#\textgreater{} } \end{Highlighting} \end{Shaded} \hypertarget{plots}{% \section{Plots}\label{plots}} In statistics and other sciences, being able to plot your results in the form of a graphic is often useful. An effective and accurate visualization can make your data come to life and convey your message in a powerful way. R has very powerful graphics capabilities that can help you visualize your data. In this chapter, we give you a look at \emph{traditional graphics} and \emph{ggplot2} graphics. We will look at five methods of visualizing data: \begin{itemize} \tightlist \item Scatterplot \item Bar plot \item Box plot \item Histogram \item One-dimensional strip plot \end{itemize} \hypertarget{traditional-graphics}{% \subsection{Traditional graphics}\label{traditional-graphics}} With traditional graphics, you can create many different types of plots, such as scatterplots and bar charts. Here are just a few of the different types of plots you can create: \begin{itemize} \tightlist \item Scatterplot: \texttt{plot()}, \texttt{stripchart()} \item Bar plot: \texttt{barplot()} \item Box plot: \texttt{boxplot()} \item Histogram: \texttt{hist()} \item One-dimensional strip plot: \texttt{stripchart()} \end{itemize} For a complete list of the different types of plots, see the Help at \texttt{?graphics}. \hypertarget{bar-plot}{% \subsubsection{Bar plot}\label{bar-plot}} A bar plot displays the distribution (frequency) of a categorical variable through vertical or horizontal bars. In its simplest form, the format of the barplot() function is \begin{Shaded} \begin{Highlighting}[] \FunctionTok{par}\NormalTok{(}\AttributeTok{mar =} \FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{, }\DecValTok{2}\NormalTok{, }\DecValTok{2}\NormalTok{, }\FloatTok{0.1}\NormalTok{), }\AttributeTok{las=}\DecValTok{1}\NormalTok{, }\AttributeTok{mgp=}\FunctionTok{c}\NormalTok{(}\FloatTok{2.5}\NormalTok{,}\FloatTok{0.1}\NormalTok{, }\DecValTok{0}\NormalTok{), }\AttributeTok{tcl=}\FloatTok{0.15}\NormalTok{)} \FunctionTok{barplot}\NormalTok{(}\FunctionTok{table}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{age.f))} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics[width=0.6\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-39-1} \end{center} where \texttt{gss.2016\$age.f} is a factor. 
The values \texttt{table(gss.2016\$age.f)} determine the heights of the bars in the plot, and a vertical bar plot is produced. Including the option \texttt{horiz=TRUE} produces a horizontal bar chart instead. You can also add annotating options. The main option adds a plot title, whereas the xlab and ylab options add x-axis and y-axis labels, respectively. \begin{Shaded} \begin{Highlighting}[] \FunctionTok{par}\NormalTok{(}\AttributeTok{mar =} \FunctionTok{c}\NormalTok{(}\DecValTok{4}\NormalTok{, }\DecValTok{4}\NormalTok{, }\DecValTok{2}\NormalTok{, }\FloatTok{0.1}\NormalTok{), }\AttributeTok{las=}\DecValTok{1}\NormalTok{, }\AttributeTok{mgp=}\FunctionTok{c}\NormalTok{(}\FloatTok{2.5}\NormalTok{,}\FloatTok{0.1}\NormalTok{, }\DecValTok{0}\NormalTok{), }\AttributeTok{tcl=}\FloatTok{0.15}\NormalTok{)} \NormalTok{counts }\OtherTok{\textless{}{-}} \FunctionTok{table}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{age.f)} \FunctionTok{barplot}\NormalTok{(counts, } \AttributeTok{main=}\StringTok{"Simple Bar Plot"}\NormalTok{, } \AttributeTok{xlab=}\StringTok{"Age"}\NormalTok{, }\AttributeTok{ylab=}\StringTok{"Frequency"}\NormalTok{)} \FunctionTok{barplot}\NormalTok{(counts, } \AttributeTok{main=}\StringTok{"Simple Bar Plot"}\NormalTok{, } \AttributeTok{xlab=}\StringTok{"Age"}\NormalTok{, }\AttributeTok{ylab=}\StringTok{"Frequency"}\NormalTok{,} \AttributeTok{horiz=}\NormalTok{T)} \end{Highlighting} \end{Shaded} \begin{figure} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-40-1} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-40-2} \end{figure} You can customize many features of a graph (fonts, colors, axes, and labels) through options called graphical parameters. One way is to specify these options through the \texttt{par()} function. Values set in this manner will be in effect for the rest of the session or until they're changed. The format is \texttt{par(optionname=value,\ optionname=value,\ ...)}. Specifying \texttt{par()} without parameters produces a list of the current graphical settings. The relevant parameters are shown below \begin{longtable}[]{@{}ll@{}} \toprule \begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.39}}\raggedright Parameter\strut \end{minipage} & \begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.61}}\raggedright Description\strut \end{minipage}\tabularnewline \midrule \endhead \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.39}}\raggedright \texttt{mar}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.61}}\raggedright A numerical vector of the form \texttt{c(bottom,\ left,\ top,\ right)} which gives the number of lines of margin to be specified on the four sides of the plot. 
The default is \texttt{c(5,\ 4,\ 4,\ 2)\ +\ 0.1}.\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.39}}\raggedright
\texttt{las}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.61}}\raggedright
Specifies the orientation of axis labels: 0 = parallel to the axis (the default), 1 = always horizontal, 2 = perpendicular to the axis, 3 = always vertical.\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.39}}\raggedright
\texttt{tck}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.61}}\raggedright
Length of each tick mark as a fraction of the plotting region (a negative number is outside the graph, a positive number is inside, 0 suppresses ticks, and 1 creates gridlines). The default is \texttt{NA}, in which case the related parameter \texttt{tcl} (tick length in lines of text, the parameter used in the code above) takes effect.\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.39}}\raggedright
\texttt{mgp}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.61}}\raggedright
The margin line for the axis title, axis labels and axis line. Note that \texttt{mgp{[}1{]}} affects the title whereas \texttt{mgp{[}2:3{]}} affect the axis. The default is \texttt{c(3,\ 1,\ 0)}.\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}

If the argument of \texttt{barplot()} is a matrix rather than a vector, the resulting graph will be a stacked or grouped bar plot. If \texttt{beside=FALSE} (the default), then each column of the matrix produces a bar in the plot, with the values in the column giving the heights of stacked ``sub-bars.'' If \texttt{beside=TRUE}, each column of the matrix represents a group, and the values in each column are juxtaposed rather than stacked. Consider the cross-tabulation of responses to the legalization question (\texttt{grass}) and the age groups:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{counts }\OtherTok{\textless{}{-}} \FunctionTok{table}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{grass, gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{age.f)}
\NormalTok{counts}
\CommentTok{\#\textgreater{} }
\CommentTok{\#\textgreater{} \textless{}30 30{-}59 60{-}74 75+}
\CommentTok{\#\textgreater{} LEGAL 237 625 213 48}
\CommentTok{\#\textgreater{} NOT LEGAL 95 364 151 103}
\end{Highlighting}
\end{Shaded}

You can graph the results as either a stacked or a grouped bar plot.
The resulting graphs are displayed below \begin{Shaded} \begin{Highlighting}[] \FunctionTok{par}\NormalTok{(}\AttributeTok{mar =} \FunctionTok{c}\NormalTok{(}\DecValTok{4}\NormalTok{, }\DecValTok{4}\NormalTok{, }\DecValTok{2}\NormalTok{, }\FloatTok{0.1}\NormalTok{), }\AttributeTok{las=}\DecValTok{1}\NormalTok{, }\AttributeTok{mgp=}\FunctionTok{c}\NormalTok{(}\FloatTok{2.5}\NormalTok{,}\FloatTok{0.1}\NormalTok{, }\DecValTok{0}\NormalTok{), }\AttributeTok{tcl=}\FloatTok{0.15}\NormalTok{)} \FunctionTok{barplot}\NormalTok{(counts, } \AttributeTok{main=}\StringTok{"Stacked Bar Plot"}\NormalTok{, } \AttributeTok{xlab=}\StringTok{"Age"}\NormalTok{, }\AttributeTok{ylab=}\StringTok{"Frequency"}\NormalTok{,} \AttributeTok{col=}\FunctionTok{c}\NormalTok{(}\StringTok{"lightgreen"}\NormalTok{, }\StringTok{"red"}\NormalTok{),} \AttributeTok{legend=}\NormalTok{T)} \FunctionTok{barplot}\NormalTok{(counts, } \AttributeTok{main=}\StringTok{"Stacked Bar Plot"}\NormalTok{, } \AttributeTok{xlab=}\StringTok{"Age"}\NormalTok{, }\AttributeTok{ylab=}\StringTok{"Frequency"}\NormalTok{,} \AttributeTok{col=}\FunctionTok{c}\NormalTok{(}\StringTok{"lightgreen"}\NormalTok{, }\StringTok{"red"}\NormalTok{),} \AttributeTok{legend=}\NormalTok{T, }\AttributeTok{beside =}\NormalTok{ T)} \end{Highlighting} \end{Shaded} \begin{figure} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-42-1} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-42-2} \end{figure} Bar plots needn't be based on counts or frequencies. You can create bar plots that represent means, medians, standard deviations, and so forth by using the aggregate function and passing the results to the \texttt{barplot()} function. The following listing shows an example, which is displayed below. 
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{par}\NormalTok{(}\AttributeTok{mar =} \FunctionTok{c}\NormalTok{(}\DecValTok{4}\NormalTok{, }\DecValTok{4}\NormalTok{, }\DecValTok{2}\NormalTok{, }\FloatTok{0.1}\NormalTok{), }\AttributeTok{las=}\DecValTok{1}\NormalTok{, }\AttributeTok{mgp=}\FunctionTok{c}\NormalTok{(}\FloatTok{2.5}\NormalTok{,}\FloatTok{0.1}\NormalTok{, }\DecValTok{0}\NormalTok{), }\AttributeTok{tcl=}\FloatTok{0.15}\NormalTok{)} \NormalTok{means }\OtherTok{\textless{}{-}} \FunctionTok{aggregate}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Height, } \NormalTok{ survey[,}\StringTok{"Sex"}\NormalTok{,}\AttributeTok{drop=}\NormalTok{F], mean, }\AttributeTok{na.rm=}\NormalTok{T)} \NormalTok{means} \CommentTok{\#\textgreater{} Sex x} \CommentTok{\#\textgreater{} 1 Female 165.6867} \CommentTok{\#\textgreater{} 2 Male 178.8260} \FunctionTok{barplot}\NormalTok{(means}\SpecialCharTok{$}\NormalTok{x, }\AttributeTok{names.arg =}\NormalTok{ means}\SpecialCharTok{$}\NormalTok{Sex, } \AttributeTok{main=}\StringTok{"Mean height"}\NormalTok{)} \FunctionTok{barplot}\NormalTok{(means}\SpecialCharTok{$}\NormalTok{x, }\AttributeTok{names.arg =}\NormalTok{ means}\SpecialCharTok{$}\NormalTok{Sex, } \AttributeTok{main=}\StringTok{"Mean height"}\NormalTok{, }\AttributeTok{horiz =}\NormalTok{ T)} \end{Highlighting} \end{Shaded} \begin{figure} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-43-1} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-43-2} \end{figure} \texttt{means\$x} is the vector containing the heights of the bars, and the option \texttt{names.arg=means\$Sex} is added to provide labels. Please study carefully the following codes and outputs: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{par}\NormalTok{(}\AttributeTok{mar =} \FunctionTok{c}\NormalTok{(}\DecValTok{4}\NormalTok{, }\DecValTok{4}\NormalTok{, }\DecValTok{2}\NormalTok{, }\FloatTok{0.1}\NormalTok{), }\AttributeTok{las=}\DecValTok{1}\NormalTok{, }\AttributeTok{mgp=}\FunctionTok{c}\NormalTok{(}\FloatTok{2.5}\NormalTok{,}\FloatTok{0.1}\NormalTok{, }\DecValTok{0}\NormalTok{), }\AttributeTok{tcl=}\FloatTok{0.15}\NormalTok{)} \FunctionTok{barplot}\NormalTok{(}\FunctionTok{table}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{age.f))} \FunctionTok{barplot}\NormalTok{(}\FunctionTok{table}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{grass))} \FunctionTok{barplot}\NormalTok{(}\FunctionTok{table}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{grass), }\AttributeTok{col=}\FunctionTok{c}\NormalTok{(}\StringTok{"green"}\NormalTok{, }\StringTok{"purple"}\NormalTok{))} \FunctionTok{barplot}\NormalTok{(}\FunctionTok{table}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{grass), }\AttributeTok{col=}\FunctionTok{c}\NormalTok{(}\StringTok{"\#78A678"}\NormalTok{, }\StringTok{"\#7463AC"}\NormalTok{))} \FunctionTok{barplot}\NormalTok{(}\FunctionTok{table}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{grass), }\AttributeTok{col=}\FunctionTok{c}\NormalTok{(}\StringTok{"\#78A678"}\NormalTok{, }\StringTok{"\#7463AC"}\NormalTok{),} \AttributeTok{xlab=}\StringTok{"Should marijuana be legal?"}\NormalTok{, }\AttributeTok{ylab=}\StringTok{"Number of responses"}\NormalTok{)} \FunctionTok{par}\NormalTok{(}\AttributeTok{las=}\DecValTok{1}\NormalTok{, 
}\AttributeTok{mgp=}\FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{,}\FloatTok{0.2}\NormalTok{,}\DecValTok{0}\NormalTok{), }\AttributeTok{tcl=}\FloatTok{0.1}\NormalTok{, }\AttributeTok{mar=}\FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{,}\DecValTok{3}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{))} \FunctionTok{barplot}\NormalTok{(}\FunctionTok{table}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{grass), }\AttributeTok{col=}\FunctionTok{c}\NormalTok{(}\StringTok{"\#78A678"}\NormalTok{, }\StringTok{"\#7463AC"}\NormalTok{),} \AttributeTok{xlab=}\StringTok{"Should marijuana be legal?"}\NormalTok{, }\AttributeTok{ylab=}\StringTok{"Number of responses"}\NormalTok{)} \CommentTok{\# to save plot to PNG} \FunctionTok{par}\NormalTok{(}\AttributeTok{las=}\DecValTok{1}\NormalTok{, }\AttributeTok{mgp=}\FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{,}\FloatTok{0.2}\NormalTok{,}\DecValTok{0}\NormalTok{), }\AttributeTok{tcl=}\FloatTok{0.1}\NormalTok{, }\AttributeTok{mar=}\FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{,}\DecValTok{3}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{))} \FunctionTok{barplot}\NormalTok{(}\FunctionTok{table}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{grass, gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{age.f), }\AttributeTok{beside =}\NormalTok{ T, }\AttributeTok{legend=}\NormalTok{T)} \FunctionTok{png}\NormalTok{(}\AttributeTok{filename =} \StringTok{"output/image/barplot\_1.png"}\NormalTok{, }\AttributeTok{width =} \DecValTok{400}\NormalTok{, }\AttributeTok{height =} \DecValTok{300}\NormalTok{)} \FunctionTok{par}\NormalTok{(}\AttributeTok{las=}\DecValTok{1}\NormalTok{, }\AttributeTok{mgp=}\FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{,}\FloatTok{0.2}\NormalTok{,}\DecValTok{0}\NormalTok{), }\AttributeTok{tcl=}\FloatTok{0.1}\NormalTok{, }\AttributeTok{mar=}\FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{,}\DecValTok{3}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{))} \FunctionTok{barplot}\NormalTok{(}\FunctionTok{table}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{grass, gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{age.f), }\AttributeTok{beside =}\NormalTok{ T, }\AttributeTok{legend=}\NormalTok{T)} \FunctionTok{dev.off}\NormalTok{()} \CommentTok{\#\textgreater{} pdf } \CommentTok{\#\textgreater{} 2} \end{Highlighting} \end{Shaded} \begin{figure} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-44-1} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-44-2} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-44-3} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-44-4} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-44-5} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-44-6} \end{figure} \hypertarget{histogram}{% \subsubsection{Histogram}\label{histogram}} Histograms display the distribution of a continuous variable by dividing the range of scores into a specified number of bins on the x-axis and displaying the frequency of scores in each bin on the y-axis. 
You can create histograms with the \texttt{hist()} function:

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{par}\NormalTok{(}\AttributeTok{mar =} \FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{, }\DecValTok{2}\NormalTok{, }\DecValTok{2}\NormalTok{, }\FloatTok{0.1}\NormalTok{), }\AttributeTok{las=}\DecValTok{1}\NormalTok{, }\AttributeTok{mgp=}\FunctionTok{c}\NormalTok{(}\FloatTok{2.5}\NormalTok{,}\FloatTok{0.1}\NormalTok{, }\DecValTok{0}\NormalTok{), }\AttributeTok{tcl=}\FloatTok{0.15}\NormalTok{)}
\FunctionTok{hist}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Wr.Hnd)}
\end{Highlighting}
\end{Shaded}

\begin{center}\includegraphics[width=0.6\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-45-1} \end{center}

where \texttt{survey\$Wr.Hnd} is a numeric vector of values. The option \texttt{freq=FALSE} creates a plot based on probability densities rather than frequencies. The \texttt{breaks=} option controls the number of bins. The default produces equally spaced breaks when defining the cells of the histogram. The following listing provides the code for four variations of a histogram; the results are plotted in the figure below.

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{par}\NormalTok{(}\AttributeTok{mar =} \FunctionTok{c}\NormalTok{(}\DecValTok{4}\NormalTok{, }\DecValTok{4}\NormalTok{, }\DecValTok{2}\NormalTok{, }\FloatTok{0.1}\NormalTok{), }\AttributeTok{las=}\DecValTok{1}\NormalTok{, }\AttributeTok{mgp=}\FunctionTok{c}\NormalTok{(}\FloatTok{2.5}\NormalTok{,}\FloatTok{0.1}\NormalTok{, }\DecValTok{0}\NormalTok{), }\AttributeTok{tcl=}\FloatTok{0.15}\NormalTok{)}
\FunctionTok{hist}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Wr.Hnd)}
\FunctionTok{hist}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Wr.Hnd,}
\AttributeTok{breaks=}\DecValTok{20}\NormalTok{, }\AttributeTok{col =} \StringTok{"lightblue"}\NormalTok{)}
\FunctionTok{hist}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Wr.Hnd,}
\AttributeTok{breaks=}\DecValTok{20}\NormalTok{, }\AttributeTok{col =} \StringTok{"lightblue"}\NormalTok{,}
\AttributeTok{freq =}\NormalTok{ F)}
\FunctionTok{rug}\NormalTok{(}\FunctionTok{jitter}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Wr.Hnd))}
\FunctionTok{lines}\NormalTok{(}\FunctionTok{density}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Wr.Hnd, }\AttributeTok{na.rm =} \ConstantTok{TRUE}\NormalTok{), }
\AttributeTok{col=}\StringTok{"red"}\NormalTok{, }\AttributeTok{lwd=}\DecValTok{2}\NormalTok{)}
\FunctionTok{range}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Wr.Hnd, }\AttributeTok{na.rm =}\NormalTok{ T)}
\CommentTok{\#\textgreater{} [1] 13.0 23.2}
\FunctionTok{hist}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Wr.Hnd,}
\AttributeTok{breaks=}\FunctionTok{seq}\NormalTok{(}\AttributeTok{from=}\DecValTok{13}\NormalTok{, }\AttributeTok{to=}\DecValTok{24}\NormalTok{, }\AttributeTok{by=}\DecValTok{1}\NormalTok{), }
\AttributeTok{col =} \StringTok{"lightblue"}\NormalTok{, }\AttributeTok{freq =}\NormalTok{ F)}
\FunctionTok{curve}\NormalTok{(}\FunctionTok{dnorm}\NormalTok{(x, }\AttributeTok{mean=}\FunctionTok{mean}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Wr.Hnd, }\AttributeTok{na.rm =}\NormalTok{ T), }
\AttributeTok{sd=}\FunctionTok{sd}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Wr.Hnd, }\AttributeTok{na.rm =}\NormalTok{ T)), }
\AttributeTok{from=}\DecValTok{13}\NormalTok{, }\AttributeTok{to=}\DecValTok{24}\NormalTok{, }\AttributeTok{add=}\NormalTok{T, }\AttributeTok{col=}\StringTok{"red"}\NormalTok{, }\AttributeTok{lwd=}\DecValTok{2}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{figure}
\includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-46-1} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-46-2} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-46-3} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-46-4} \end{figure}
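If you want the numbers behind the picture, \texttt{hist()} also returns the breaks and counts it computed; a minimal sketch (using \texttt{plot\ =\ F} so that nothing is drawn) is:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{h }\OtherTok{\textless{}{-}} \FunctionTok{hist}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Wr.Hnd, }\AttributeTok{breaks=}\DecValTok{20}\NormalTok{, }\AttributeTok{plot =}\NormalTok{ F) }\CommentTok{\# compute the bins without drawing}
\NormalTok{h}\SpecialCharTok{$}\NormalTok{breaks }\CommentTok{\# bin boundaries chosen by hist()}
\NormalTok{h}\SpecialCharTok{$}\NormalTok{counts }\CommentTok{\# number of observations in each bin}
\end{Highlighting}
\end{Shaded}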
\hypertarget{box-plot}{%
\subsubsection{Box plot}\label{box-plot}}

A box-and-whiskers plot describes the distribution of a continuous variable by plotting its five-number summary: the minimum, lower quartile (25th percentile), median (50th percentile), upper quartile (75th percentile), and maximum. It can also display observations that may be outliers (values more than \(1.5 \times IQR\) below the lower quartile or above the upper quartile, where IQR is the interquartile range defined as the upper quartile minus the lower quartile). For example, this statement produces the plot shown below:

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{par}\NormalTok{(}\AttributeTok{mar =} \FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{, }\DecValTok{2}\NormalTok{, }\DecValTok{2}\NormalTok{, }\FloatTok{0.1}\NormalTok{), }\AttributeTok{las=}\DecValTok{1}\NormalTok{, }\AttributeTok{mgp=}\FunctionTok{c}\NormalTok{(}\FloatTok{2.5}\NormalTok{,}\FloatTok{0.1}\NormalTok{, }\DecValTok{0}\NormalTok{), }\AttributeTok{tcl=}\FloatTok{0.15}\NormalTok{)}
\FunctionTok{boxplot}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Wr.Hnd, }\AttributeTok{main=}\StringTok{"Box plot"}\NormalTok{, }\AttributeTok{ylab=}\StringTok{"cm"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{center}\includegraphics[width=0.6\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-47-1} \end{center}

Box plots can be created for individual variables or for variables by group. The format is

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{boxplot}\NormalTok{(formula, }\AttributeTok{data=}\NormalTok{dataframe)}
\end{Highlighting}
\end{Shaded}

where \texttt{formula} is a formula and \texttt{dataframe} denotes the data frame (or list) providing the data. An example of a formula is \texttt{y\ \textasciitilde{}\ A}, where a separate box plot for numeric variable \texttt{y} is generated for each value of categorical variable \texttt{A}. The formula \texttt{y\ \textasciitilde{}\ A*B} would produce a box plot of numeric variable \texttt{y} for each combination of levels in categorical variables \texttt{A} and \texttt{B}, as the sketch below illustrates. Adding the option \texttt{horizontal=TRUE} reverses the axis orientation.
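To make the \texttt{y\ \textasciitilde{}\ A*B} form concrete, here is a small sketch using two categorical variables that happen to be available in \texttt{survey} (sex and writing hand); it draws one box of heights for each combination of the two factors.

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# one box of Height values for each combination of Sex and W.Hnd}
\FunctionTok{boxplot}\NormalTok{(Height }\SpecialCharTok{\textasciitilde{}}\NormalTok{ Sex }\SpecialCharTok{*}\NormalTok{ W.Hnd, }\AttributeTok{data=}\NormalTok{survey)}
\end{Highlighting}
\end{Shaded}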
The following code revisits the impact of sex on height with parallel box plots.

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{par}\NormalTok{(}\AttributeTok{mar =} \FunctionTok{c}\NormalTok{(}\DecValTok{4}\NormalTok{, }\DecValTok{4}\NormalTok{, }\DecValTok{2}\NormalTok{, }\FloatTok{0.1}\NormalTok{), }\AttributeTok{las=}\DecValTok{1}\NormalTok{, }\AttributeTok{mgp=}\FunctionTok{c}\NormalTok{(}\FloatTok{2.5}\NormalTok{,}\FloatTok{0.1}\NormalTok{, }\DecValTok{0}\NormalTok{), }\AttributeTok{tcl=}\FloatTok{0.15}\NormalTok{)}
\FunctionTok{boxplot}\NormalTok{(Height }\SpecialCharTok{\textasciitilde{}}\NormalTok{ Sex, }\AttributeTok{data=}\NormalTok{survey)}
\FunctionTok{boxplot}\NormalTok{(Height }\SpecialCharTok{\textasciitilde{}}\NormalTok{ Sex, }\AttributeTok{data=}\NormalTok{survey, }\AttributeTok{horizontal=}\ConstantTok{TRUE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{figure}
\includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-49-1} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-49-2} \end{figure}

\hypertarget{scatterplot}{%
\subsubsection{Scatterplot}\label{scatterplot}}

To create a scatterplot, you use the \texttt{plot()} function. A scatterplot creates points (or sometimes bubbles or other symbols) on your chart. Each point corresponds to an observation in your data. You've probably seen or created this type of graphic a million times, so you already know that scatterplots use the Cartesian coordinate system, where one variable is mapped to the x‐axis and a second variable is mapped to the y‐axis. The most common high-level function used to produce plots in R is the \texttt{plot()} function.

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{par}\NormalTok{(}\AttributeTok{mar =} \FunctionTok{c}\NormalTok{(}\DecValTok{3}\NormalTok{, }\DecValTok{3}\NormalTok{, }\DecValTok{2}\NormalTok{, }\FloatTok{0.1}\NormalTok{), }\AttributeTok{las=}\DecValTok{1}\NormalTok{, }\AttributeTok{mgp=}\FunctionTok{c}\NormalTok{(}\FloatTok{1.5}\NormalTok{,}\FloatTok{0.1}\NormalTok{, }\DecValTok{0}\NormalTok{), }\AttributeTok{tcl=}\FloatTok{0.15}\NormalTok{)}
\FunctionTok{plot}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Wr.Hnd)}
\end{Highlighting}
\end{Shaded}

\begin{center}\includegraphics[width=0.6\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-50-1} \end{center}

R has plotted the values of \texttt{Wr.Hnd} (on the y axis) against an index (on the x axis) because we supplied only one variable to plot. The index is just the order of the \texttt{Wr.Hnd} values in the data frame (1 for the first row of the data frame and 237 for the last). The \texttt{Wr.Hnd} variable name has been automatically included as a y axis label and the axes scales have been automatically set.

To plot a scatterplot of one numeric variable against another numeric variable we just need to include both variables as arguments when using the \texttt{plot()} function. For example, to plot \texttt{Wr.Hnd} on the y axis and \texttt{Height} on the x axis:
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{par}\NormalTok{(}\AttributeTok{mar =} \FunctionTok{c}\NormalTok{(}\DecValTok{3}\NormalTok{, }\DecValTok{3}\NormalTok{, }\DecValTok{2}\NormalTok{, }\FloatTok{0.1}\NormalTok{), }\AttributeTok{las=}\DecValTok{1}\NormalTok{, }\AttributeTok{mgp=}\FunctionTok{c}\NormalTok{(}\FloatTok{1.5}\NormalTok{,}\FloatTok{0.1}\NormalTok{, }\DecValTok{0}\NormalTok{), }\AttributeTok{tcl=}\FloatTok{0.15}\NormalTok{)} \FunctionTok{plot}\NormalTok{(}\AttributeTok{x =}\NormalTok{ survey}\SpecialCharTok{$}\NormalTok{Height, }\AttributeTok{y =}\NormalTok{ survey}\SpecialCharTok{$}\NormalTok{Wr.Hnd)} \end{Highlighting} \end{Shaded} \begin{center}\includegraphics[width=0.6\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-51-1} \end{center} There is an equivalent approach for these types of plots which often causes some confusion at first. You can also use the formula notation when using the \texttt{plot()} function. However, in contrast to the previous method the formula method requires you to specify the y axis variable first, then a \texttt{\textasciitilde{}} and then our x axis variable. \begin{Shaded} \begin{Highlighting}[] \FunctionTok{par}\NormalTok{(}\AttributeTok{mar =} \FunctionTok{c}\NormalTok{(}\DecValTok{4}\NormalTok{, }\DecValTok{4}\NormalTok{, }\FloatTok{0.1}\NormalTok{, }\FloatTok{0.1}\NormalTok{))} \FunctionTok{plot}\NormalTok{(Wr.Hnd}\SpecialCharTok{\textasciitilde{}}\NormalTok{Height, }\AttributeTok{data=}\NormalTok{survey)} \FunctionTok{plot}\NormalTok{(Wr.Hnd}\SpecialCharTok{\textasciitilde{}}\NormalTok{Height, }\AttributeTok{data=}\NormalTok{survey, }\AttributeTok{col=}\NormalTok{survey}\SpecialCharTok{$}\NormalTok{Sex, }\AttributeTok{pch=}\DecValTok{16}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{figure} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-52-1} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-52-2} \end{figure} \hypertarget{ggplot2-graphics}{% \subsection{ggplot2 graphics}\label{ggplot2-graphics}} ggplot2 graphics is based on \textbf{ggplot2} package. Because \textbf{ggplot2} isn't part of the standard distribution of R, you have to download the package from CRAN and install it. To install the \textbf{ggplot2} package, use: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{install.packages}\NormalTok{(}\StringTok{"ggplot2"}\NormalTok{)} \end{Highlighting} \end{Shaded} And then to load the package, use: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{library}\NormalTok{(}\StringTok{"ggplot2"}\NormalTok{)} \end{Highlighting} \end{Shaded} The basic concept of a ggplot2 graphic is that you combine different plot elements into layers. Each layer of a ggplot2 graphic contains information about the following: \begin{itemize} \tightlist \item The data that you want to plot: for \texttt{ggplot()}, this must be a data frame. \item A mapping from the data to your plot: this usually is as simple as telling \texttt{ggplot()} what goes on the x‐axis and what goes on the y‐axis. \item A geometric object, or geom in ggplot terminology: the geom defines the overall look of the layer (for example, whether the plot is made up of bars, points, or lines). 
\end{itemize} \hypertarget{bar-plot-1}{% \subsubsection{Bar plot}\label{bar-plot-1}} Please study carefully the following codes and outputs: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{library}\NormalTok{(ggplot2)} \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ gss}\FloatTok{.2016}\NormalTok{, }\AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{age.f)) }\SpecialCharTok{+} \FunctionTok{geom\_bar}\NormalTok{()} \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ gss}\FloatTok{.2016}\NormalTok{, }\AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{grass)) }\SpecialCharTok{+} \FunctionTok{geom\_bar}\NormalTok{()} \end{Highlighting} \end{Shaded} \begin{figure} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-55-1} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-55-2} \end{figure} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ gss}\FloatTok{.2016}\NormalTok{, }\AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{age.f)) }\SpecialCharTok{+} \FunctionTok{geom\_bar}\NormalTok{() }\SpecialCharTok{+} \FunctionTok{scale\_x\_discrete}\NormalTok{(}\AttributeTok{na.translate =}\NormalTok{ F)} \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ gss}\FloatTok{.2016}\NormalTok{, }\AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{grass)) }\SpecialCharTok{+} \FunctionTok{geom\_bar}\NormalTok{() }\SpecialCharTok{+} \FunctionTok{scale\_x\_discrete}\NormalTok{(}\AttributeTok{na.translate =}\NormalTok{ F)} \end{Highlighting} \end{Shaded} \begin{figure} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-56-1} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-56-2} \end{figure} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ gss}\FloatTok{.2016}\NormalTok{, }\AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{age.f, }\AttributeTok{fill=}\NormalTok{age.f)) }\SpecialCharTok{+} \FunctionTok{geom\_bar}\NormalTok{() }\SpecialCharTok{+} \FunctionTok{scale\_x\_discrete}\NormalTok{(}\AttributeTok{na.translate =}\NormalTok{ F)} \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ gss}\FloatTok{.2016}\NormalTok{, }\AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{grass, }\AttributeTok{fill=}\NormalTok{grass)) }\SpecialCharTok{+} \FunctionTok{geom\_bar}\NormalTok{() }\SpecialCharTok{+} \FunctionTok{scale\_x\_discrete}\NormalTok{(}\AttributeTok{na.translate =}\NormalTok{ F)} \end{Highlighting} \end{Shaded} \begin{figure} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-57-1} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-57-2} \end{figure} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ gss}\FloatTok{.2016}\NormalTok{, }\AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{grass, }\AttributeTok{fill=}\NormalTok{grass)) }\SpecialCharTok{+} \FunctionTok{geom\_bar}\NormalTok{() }\SpecialCharTok{+} 
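\NormalTok{\# drop the NA bar, set the fill colours by hand; guide=F hides the legend}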
\FunctionTok{scale\_x\_discrete}\NormalTok{(}\AttributeTok{na.translate =}\NormalTok{ F) }\SpecialCharTok{+} \FunctionTok{scale\_fill\_manual}\NormalTok{(}\AttributeTok{values =} \FunctionTok{c}\NormalTok{(}\StringTok{"\#78A678"}\NormalTok{, }\StringTok{"\#7463AC"}\NormalTok{), }\AttributeTok{guide=}\NormalTok{F)} \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ gss}\FloatTok{.2016}\NormalTok{, }\AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{grass, }\AttributeTok{fill=}\NormalTok{grass)) }\SpecialCharTok{+} \FunctionTok{geom\_bar}\NormalTok{() }\SpecialCharTok{+} \FunctionTok{scale\_x\_discrete}\NormalTok{(}\AttributeTok{na.translate =}\NormalTok{ F) }\SpecialCharTok{+} \FunctionTok{scale\_fill\_manual}\NormalTok{(}\AttributeTok{values =} \FunctionTok{c}\NormalTok{(}\StringTok{"\#78A678"}\NormalTok{, }\StringTok{"\#7463AC"}\NormalTok{), }\AttributeTok{guide=}\NormalTok{F) }\SpecialCharTok{+} \FunctionTok{theme\_minimal}\NormalTok{() }\SpecialCharTok{+} \FunctionTok{labs}\NormalTok{(}\AttributeTok{x=}\StringTok{"Should marijuana be legal?"}\NormalTok{, }\AttributeTok{y=}\StringTok{"Number of responses"}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{figure} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-58-1} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-58-2} \end{figure} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ gss}\FloatTok{.2016}\NormalTok{[}\SpecialCharTok{!}\FunctionTok{is.na}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{grass),], } \AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{age.f, }\AttributeTok{fill=}\NormalTok{grass)) }\SpecialCharTok{+} \FunctionTok{geom\_bar}\NormalTok{() }\SpecialCharTok{+} \FunctionTok{scale\_x\_discrete}\NormalTok{(}\AttributeTok{na.translate =}\NormalTok{ F)} \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ gss}\FloatTok{.2016}\NormalTok{[}\SpecialCharTok{!}\FunctionTok{is.na}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{grass),], } \AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{age.f, }\AttributeTok{fill=}\NormalTok{grass)) }\SpecialCharTok{+} \FunctionTok{geom\_bar}\NormalTok{(}\AttributeTok{position =} \StringTok{"fill"}\NormalTok{) }\SpecialCharTok{+} \FunctionTok{scale\_x\_discrete}\NormalTok{(}\AttributeTok{na.translate =}\NormalTok{ F)} \end{Highlighting} \end{Shaded} \begin{figure} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-59-1} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-59-2} \end{figure} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ gss}\FloatTok{.2016}\NormalTok{[}\SpecialCharTok{!}\FunctionTok{is.na}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{grass),], } \AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{age.f, }\AttributeTok{fill=}\NormalTok{grass)) }\SpecialCharTok{+} \FunctionTok{geom\_bar}\NormalTok{(}\AttributeTok{position =} \StringTok{"dodge"}\NormalTok{) }\SpecialCharTok{+} \FunctionTok{scale\_x\_discrete}\NormalTok{(}\AttributeTok{na.translate =}\NormalTok{ F) }\SpecialCharTok{+} 
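\NormalTok{\# labs() gives the axes and the fill legend readable titles}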
\FunctionTok{labs}\NormalTok{(}\AttributeTok{x=}\StringTok{"Should marijuana be legal?"}\NormalTok{, }\AttributeTok{y=}\StringTok{"Number of responses"}\NormalTok{, }\AttributeTok{fill=}\StringTok{"Legal"}\NormalTok{)} \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ gss}\FloatTok{.2016}\NormalTok{[}\SpecialCharTok{!}\FunctionTok{is.na}\NormalTok{(gss}\FloatTok{.2016}\SpecialCharTok{$}\NormalTok{age.f),], } \AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{grass, }\AttributeTok{fill=}\NormalTok{age.f)) }\SpecialCharTok{+} \FunctionTok{geom\_bar}\NormalTok{(}\AttributeTok{position =} \StringTok{"dodge"}\NormalTok{) }\SpecialCharTok{+} \FunctionTok{scale\_x\_discrete}\NormalTok{(}\AttributeTok{na.translate =}\NormalTok{ F) }\SpecialCharTok{+} \FunctionTok{labs}\NormalTok{(}\AttributeTok{x=}\StringTok{"Should marijuana be legal?"}\NormalTok{, }\AttributeTok{y=}\StringTok{"Number of responses"}\NormalTok{, }\AttributeTok{fill=}\StringTok{"Age"}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{figure} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-60-1} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-60-2} \end{figure} \hypertarget{histogram-1}{% \subsubsection{Histogram}\label{histogram-1}} Please study carefully the following codes and outputs: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ survey, }\AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{Wr.Hnd)) }\SpecialCharTok{+} \FunctionTok{geom\_histogram}\NormalTok{()} \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ survey, }\AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{Wr.Hnd)) }\SpecialCharTok{+} \FunctionTok{geom\_histogram}\NormalTok{(}\AttributeTok{bins =} \DecValTok{10}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{figure} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-62-1} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-62-2} \end{figure} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ survey, }\AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{Wr.Hnd)) }\SpecialCharTok{+} \FunctionTok{geom\_histogram}\NormalTok{(}\AttributeTok{bins =} \DecValTok{10}\NormalTok{, }\AttributeTok{fill=}\StringTok{"lightblue"}\NormalTok{, }\AttributeTok{colour=}\StringTok{"blue"}\NormalTok{)} \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ survey, }\AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{Wr.Hnd)) }\SpecialCharTok{+} \FunctionTok{geom\_histogram}\NormalTok{(}\AttributeTok{binwidth =} \DecValTok{1}\NormalTok{, }\AttributeTok{fill=}\StringTok{"lightblue"}\NormalTok{, }\AttributeTok{colour=}\StringTok{"blue"}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{figure} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-63-1} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-63-2} \end{figure} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ survey, }\AttributeTok{mapping =} 
\FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{Wr.Hnd)) }\SpecialCharTok{+} \FunctionTok{geom\_histogram}\NormalTok{(}\AttributeTok{binwidth =} \DecValTok{1}\NormalTok{, }\AttributeTok{fill=}\StringTok{"lightblue"}\NormalTok{, }\AttributeTok{colour=}\StringTok{"blue"}\NormalTok{) }\SpecialCharTok{+} \FunctionTok{facet\_wrap}\NormalTok{(}\SpecialCharTok{\textasciitilde{}}\NormalTok{Sex)} \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ survey[}\SpecialCharTok{!}\FunctionTok{is.na}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Sex),], }\AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{Wr.Hnd)) }\SpecialCharTok{+} \FunctionTok{geom\_histogram}\NormalTok{(}\AttributeTok{binwidth =} \DecValTok{1}\NormalTok{, }\AttributeTok{fill=}\StringTok{"lightblue"}\NormalTok{, }\AttributeTok{colour=}\StringTok{"blue"}\NormalTok{) }\SpecialCharTok{+} \FunctionTok{facet\_wrap}\NormalTok{(}\SpecialCharTok{\textasciitilde{}}\NormalTok{Sex)} \end{Highlighting} \end{Shaded} \begin{figure} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-64-1} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-64-2} \end{figure} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ survey[}\SpecialCharTok{!}\FunctionTok{is.na}\NormalTok{(survey}\SpecialCharTok{$}\NormalTok{Sex),], }\AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{Wr.Hnd)) }\SpecialCharTok{+} \FunctionTok{geom\_histogram}\NormalTok{(}\AttributeTok{binwidth =} \DecValTok{1}\NormalTok{, }\AttributeTok{fill=}\StringTok{"lightblue"}\NormalTok{, }\AttributeTok{colour=}\StringTok{"blue"}\NormalTok{) }\SpecialCharTok{+} \FunctionTok{facet\_wrap}\NormalTok{(}\SpecialCharTok{\textasciitilde{}}\NormalTok{Sex, }\AttributeTok{ncol=}\DecValTok{1}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{figure} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-65-1} \end{figure} \hypertarget{scatterplot-1}{% \subsubsection{Scatterplot}\label{scatterplot-1}} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ survey, }\AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{Wr.Hnd, }\AttributeTok{y=}\NormalTok{Height)) }\SpecialCharTok{+} \FunctionTok{geom\_point}\NormalTok{()} \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ survey, }\AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{Wr.Hnd, }\AttributeTok{y=}\NormalTok{Height, }\AttributeTok{color=}\NormalTok{Sex)) }\SpecialCharTok{+} \FunctionTok{geom\_point}\NormalTok{()} \end{Highlighting} \end{Shaded} \begin{figure} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-66-1} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-66-2} \end{figure} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ survey, }\AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{Wr.Hnd, }\AttributeTok{y=}\NormalTok{Height, }\AttributeTok{color=}\NormalTok{Sex)) }\SpecialCharTok{+} \FunctionTok{geom\_point}\NormalTok{() }\SpecialCharTok{+} 
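\NormalTok{\# na.translate=F keeps the NA level of Sex out of the colour scale and legend}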
\FunctionTok{scale\_color\_discrete}\NormalTok{(}\AttributeTok{na.translate=}\NormalTok{F)} \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ survey, }\AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{Wr.Hnd, }\AttributeTok{y=}\NormalTok{Height, }\AttributeTok{color=}\NormalTok{Sex)) }\SpecialCharTok{+} \FunctionTok{geom\_point}\NormalTok{() }\SpecialCharTok{+} \FunctionTok{scale\_color\_discrete}\NormalTok{(}\AttributeTok{na.translate=}\NormalTok{F) }\SpecialCharTok{+} \FunctionTok{geom\_smooth}\NormalTok{(}\AttributeTok{se =}\NormalTok{ F, }\AttributeTok{method =}\NormalTok{ lm)} \end{Highlighting} \end{Shaded} \begin{figure} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-67-1} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-67-2} \end{figure} \hypertarget{box-plot-1}{% \subsubsection{Box plot}\label{box-plot-1}} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ survey, }\AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{Sex, }\AttributeTok{y=}\NormalTok{Wr.Hnd)) }\SpecialCharTok{+} \FunctionTok{geom\_boxplot}\NormalTok{()} \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ survey, }\AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{Sex, }\AttributeTok{y=}\NormalTok{Wr.Hnd)) }\SpecialCharTok{+} \FunctionTok{geom\_boxplot}\NormalTok{() }\SpecialCharTok{+} \FunctionTok{scale\_x\_discrete}\NormalTok{(}\AttributeTok{na.translate=}\NormalTok{F)} \end{Highlighting} \end{Shaded} \begin{figure} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-68-1} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-68-2} \end{figure} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ survey, }\AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{Sex, }\AttributeTok{y=}\NormalTok{Wr.Hnd, }\AttributeTok{fill=}\NormalTok{Sex)) }\SpecialCharTok{+} \FunctionTok{geom\_boxplot}\NormalTok{() }\SpecialCharTok{+} \FunctionTok{scale\_x\_discrete}\NormalTok{(}\AttributeTok{na.translate=}\NormalTok{F) } \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ survey, }\AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{Sex, }\AttributeTok{y=}\NormalTok{Wr.Hnd, }\AttributeTok{fill=}\NormalTok{Sex)) }\SpecialCharTok{+} \FunctionTok{geom\_boxplot}\NormalTok{() }\SpecialCharTok{+} \FunctionTok{scale\_x\_discrete}\NormalTok{(}\AttributeTok{na.translate=}\NormalTok{F) }\SpecialCharTok{+} \FunctionTok{scale\_fill\_discrete}\NormalTok{(}\AttributeTok{guide=}\NormalTok{F) } \end{Highlighting} \end{Shaded} \begin{figure} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-69-1} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-69-2} \end{figure} \hypertarget{stripchart}{% \subsubsection{Stripchart}\label{stripchart}} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ survey, }\AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\DecValTok{1}\NormalTok{, }\AttributeTok{y=}\NormalTok{Wr.Hnd)) 
}\SpecialCharTok{+} \FunctionTok{geom\_jitter}\NormalTok{() }\SpecialCharTok{+} \FunctionTok{scale\_x\_discrete}\NormalTok{(}\AttributeTok{na.translate=}\NormalTok{F) } \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ survey, }\AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{Sex, }\AttributeTok{y=}\NormalTok{Wr.Hnd)) }\SpecialCharTok{+} \FunctionTok{geom\_jitter}\NormalTok{() }\SpecialCharTok{+} \FunctionTok{scale\_x\_discrete}\NormalTok{(}\AttributeTok{na.translate=}\NormalTok{F)} \end{Highlighting} \end{Shaded} \begin{figure} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-71-1} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-71-2} \end{figure} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ survey, }\AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{Sex, }\AttributeTok{y=}\NormalTok{Wr.Hnd)) }\SpecialCharTok{+} \FunctionTok{geom\_jitter}\NormalTok{(}\AttributeTok{width =} \FloatTok{0.1}\NormalTok{) }\SpecialCharTok{+} \FunctionTok{scale\_x\_discrete}\NormalTok{(}\AttributeTok{na.translate=}\NormalTok{F) } \FunctionTok{ggplot}\NormalTok{(}\AttributeTok{data =}\NormalTok{ survey, }\AttributeTok{mapping =} \FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{Sex, }\AttributeTok{y=}\NormalTok{Wr.Hnd)) }\SpecialCharTok{+} \FunctionTok{geom\_jitter}\NormalTok{(}\AttributeTok{width =} \FloatTok{0.1}\NormalTok{, }\AttributeTok{alpha=}\FloatTok{0.5}\NormalTok{, }\AttributeTok{color=}\StringTok{"red"}\NormalTok{) }\SpecialCharTok{+} \FunctionTok{scale\_x\_discrete}\NormalTok{(}\AttributeTok{na.translate=}\NormalTok{F) }\SpecialCharTok{+} \FunctionTok{geom\_boxplot}\NormalTok{(}\AttributeTok{alpha=}\FloatTok{0.5}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{figure} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-72-1} \includegraphics[width=0.5\linewidth]{03_Getting_started_with_a_data_analaysis_files/figure-latex/unnamed-chunk-72-2} \end{figure} \hypertarget{appendix-appendix}{% \appendix} \hypertarget{recaps-in-1-minutes-or-less}{% \chapter{Recaps in 1 minutes or less}\label{recaps-in-1-minutes-or-less}} \hypertarget{possibilities-of-using-r}{% \section{Possibilities of using R}\label{possibilities-of-using-r}} \begin{itemize} \item Console: type a command and hit Enter \begin{itemize} \tightlist \item \emph{Base R} Console \item \emph{RGui} Console on Windows \item \emph{RStudio} Console \end{itemize} \item Script: edit a text files and hit Ctrl+R or Ctrl+Enter \begin{itemize} \tightlist \item \emph{RGui} Script Window (Ctrl+R) \item \emph{RStudio} Source Pane (Ctrl+Enter) \end{itemize} \item Point and Click \begin{itemize} \tightlist \item \emph{R Commander} \item \emph{jamovi}, \emph{JASP} etc. 
\end{itemize} \end{itemize} \hypertarget{console-features-1}{% \section{Console features}\label{console-features-1}} \begin{itemize} \tightlist \item history of command: Up/Down arrows \item autocompletion: TAB \item continuation prompt: Esc \end{itemize} \hypertarget{advantages-of-script-editor-in-rstudio}{% \section{Advantages of Script editor in RStudio}\label{advantages-of-script-editor-in-rstudio}} \begin{itemize} \tightlist \item multi-line editor \item full-featured text editor: e.g.~row numbering, syntax highlighting \item autocompletion of filenames, function names, arguments and objects \item cross-platform interface to R \item surrounded by integrated graphical environment (workspace, files, plots, help, etc.) \end{itemize} \hypertarget{useful-keyboard-shortcuts-in-rstudio}{% \section{Useful keyboard shortcuts in RStudio}\label{useful-keyboard-shortcuts-in-rstudio}} \begin{itemize} \tightlist \item Ctrl+Enter: Run commands \item Clipboard Operations (Cut, Copy, Paste Operations): Ctrl+X, Ctrl+C, Ctrl+V \item Ctrl++, Ctrl+-: Zoom in/out \item Ctrl+Shift+C: Comment lines/uncomment lines \item Ctrl+F: Find and replace text within script editor \item Ctrl+S: Save the script file \item Alt+-: Write assignment operator \item Ctrl+Shift+F10: Restart R session \end{itemize} \hypertarget{base-types-in-r}{% \section{Base types in R}\label{base-types-in-r}} \begin{itemize} \tightlist \item character (or string): \texttt{"apple\ juice"} \item integer (whole numbers): \texttt{12L} \item double (real numbers, decimal numbers): \texttt{12}, \texttt{12.4} \item logical (true false type things): \texttt{TRUE}, \texttt{FALSE} \end{itemize} \hypertarget{data-structures-1}{% \section{Data structures}\label{data-structures-1}} \begin{itemize} \tightlist \item \emph{Vector}: one-dimensional, homogeneous \item \emph{Matrix}: two-dimensional, homogeneous \item \emph{Array}: two or more dimensional, homogeneous \item \emph{List}: one-dimensional, heterogeneous \item \emph{Factor}: integer vector with levels, which is a character vector \item \emph{Data frame}: two-dimensional, heterogeneous \end{itemize} \begin{longtable}[]{@{}ccc@{}} \caption{Data structures}\tabularnewline \toprule Dimension & Homogenous & Heterogeneous\tabularnewline \midrule \endfirsthead \toprule Dimension & Homogenous & Heterogeneous\tabularnewline \midrule \endhead 1D & Vector, Factor & List\tabularnewline 2D & Matrix & Data frame\tabularnewline nD & Array &\tabularnewline \bottomrule \end{longtable} \hypertarget{operators-1}{% \section{Operators}\label{operators-1}} \begin{longtable}[]{@{}lll@{}} \caption{R operators in order of precedence from highest to lowest}\tabularnewline \toprule Operator & Description & Example\tabularnewline \midrule \endfirsthead \toprule Operator & Description & Example\tabularnewline \midrule \endhead \texttt{::} & access & \texttt{MASS::survey}\tabularnewline \texttt{\$} & component & \texttt{my.s\$Sex}\tabularnewline \texttt{{[}} \texttt{{[}{[}} & indexing & \texttt{my.s\$Height{[}c(2,\ 45){]}}\tabularnewline \texttt{\^{}} \texttt{**} & exponentiation & \texttt{2\^{}3}\tabularnewline \texttt{-} \texttt{+} & unary minus, unary plus & \texttt{-2}\tabularnewline \texttt{:} & sequence operator & \texttt{1:10}\tabularnewline \texttt{\%any\%} e.g.~\texttt{\%\%} \texttt{\%/\%} \texttt{\%in\%} & special operators & \texttt{12\%\%3}\tabularnewline \texttt{*} \texttt{/} & multiplication, division & \texttt{12*3}\tabularnewline \texttt{+} \texttt{-} & addition, subtraction & \texttt{2.3\ +\ 2}\tabularnewline 
\texttt{\textless{}} \texttt{\textgreater{}} \texttt{\textless{}=} \texttt{\textgreater{}=} \texttt{==} \texttt{!=} & comparisions & \texttt{2\textless{}=3}\tabularnewline \texttt{!} & logical NOT & \texttt{!TRUE}\tabularnewline \texttt{\&} & logical AND & \texttt{TRUE\ \&\ FALSE}\tabularnewline \texttt{\textbar{}} & logical OR & \texttt{TRUE\ \textbar{}\ FALSE}\tabularnewline \texttt{\textless{}-} & assignment & \texttt{col\ \textless{}-\ 12}\tabularnewline \bottomrule \end{longtable} R language provides following types of operators: \begin{itemize} \tightlist \item Arithmetic Operators: \texttt{\^{}} \texttt{**}, \texttt{-} (unary), \texttt{+} (unary), \texttt{\%\%}, \texttt{\%/\%}, \texttt{*}, \texttt{/}, \texttt{-} (binary), \texttt{+} (binary) \item Relational Operators: \texttt{\textless{}}, \texttt{\textgreater{}}, \texttt{\textless{}=}, \texttt{\textgreater{}=}, \texttt{==}, \texttt{!=}, \texttt{\%in\%} \item Logical Operators: \texttt{!}, \texttt{\&}, \texttt{\textbar{}} \item Assignment Operators: \texttt{\textless{}-} \item Miscellaneous Operators: \texttt{::}, \texttt{\$}, \texttt{{[}}, \texttt{{[}{[}}, \texttt{:}, \texttt{?} \end{itemize} \hypertarget{maths-functions}{% \section{Maths functions}\label{maths-functions}} \begin{longtable}[]{@{}lll@{}} \caption{\label{tab:matfuggvenyek}Mathematical functions}\tabularnewline \toprule Function & Description & Example\tabularnewline \midrule \endfirsthead \toprule Function & Description & Example\tabularnewline \midrule \endhead {abs(x)} & Takes the absolute value of x & {abs(-1)}\tabularnewline {sign(x)} & The signs of \texttt{x} & {sign(pi)}\tabularnewline {sqrt(x)} & Returns the square root of x & {sqrt(9+16)}\tabularnewline {exp(x)} & Returns the exponential of x & {exp(1)}\tabularnewline {log(x,base=exp(1))} & Takes the logarithm of x with base y; if base is not specified, returns the natural logarithm & {log(exp(3));log(8,10)}\tabularnewline {log10(x);log2(x)} & Takes the logarithm of x with base 10 or 2 & {log10(1000);log2(256)}\tabularnewline {cos(x);sin(x);tan(x)} & Trigonometric functions & {cos(pi);sin(0);tan(0)}\tabularnewline {round(x,digits=0)} & Rounds a numeric input to a specified number of decimal places & {round(c(1.5,-1.5))}\tabularnewline {floor(x)} & Rounds a numeric input down to the next lower integer & {floor(c(1.5,-1.5))}\tabularnewline {ceiling(x)} & Rounds a numeric input up to the next higher integer & {ceiling(c(1.5,-1.5))}\tabularnewline {trunc(x)} & Truncates (i.e.~cuts off) the decimal places of a numeric input & {trunc(c(1.5,-1.5))}\tabularnewline \bottomrule \end{longtable} \hypertarget{string-functions}{% \section{String functions}\label{string-functions}} \begin{longtable}[]{@{}lll@{}} \toprule \begin{minipage}[b]{(\columnwidth - 2\tabcolsep) * \real{0.32}}\raggedright Function\strut \end{minipage} & \begin{minipage}[b]{(\columnwidth - 2\tabcolsep) * \real{0.41}}\raggedright Description\strut \end{minipage} & \begin{minipage}[b]{(\columnwidth - 2\tabcolsep) * \real{0.27}}\raggedright Example\strut \end{minipage}\tabularnewline \midrule \endhead \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.32}}\raggedright \texttt{paste();paste0(sep="")}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.41}}\raggedright Concatenate strings\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.27}}\raggedright 
\texttt{paste(\textquotesingle{}a\textquotesingle{},\textquotesingle{}b\textquotesingle{},sep=\textquotesingle{}=\textquotesingle{})}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.32}}\raggedright \texttt{nchar(x)}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.41}}\raggedright Count the number of characters\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.27}}\raggedright \texttt{nchar(\textquotesingle{}alma\textquotesingle{})}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.32}}\raggedright \texttt{substr(x,start,stop)}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.41}}\raggedright Substrings of a character vector\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.27}}\raggedright \texttt{substr(\textquotesingle{}alma\textquotesingle{},\ 3,\ 5)}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.32}}\raggedright \texttt{tolower(x)}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.41}}\raggedright Convert to lower-case\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.27}}\raggedright \texttt{tolower(\textquotesingle{}Kiss\ Géza\textquotesingle{})}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.32}}\raggedright \texttt{toupper(x)}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.41}}\raggedright Convert to upper-case\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.27}}\raggedright \texttt{toupper(\textquotesingle{}Kiss\ Géza\textquotesingle{})}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.32}}\raggedright \texttt{chartr(old,new,x)}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.41}}\raggedright Translates characters\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.27}}\raggedright \texttt{chartr(\textquotesingle{}it\textquotesingle{},\textquotesingle{}ál\textquotesingle{},\textquotesingle{}titik\textquotesingle{})}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.32}}\raggedright \texttt{cat(sep="\ ")}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.41}}\raggedright Concatenate and print\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.27}}\raggedright \texttt{cat(\textquotesingle{}alma\textquotesingle{},\textquotesingle{}fa\textbackslash{}n\textquotesingle{},sep=\textquotesingle{}\textquotesingle{})}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.32}}\raggedright \texttt{grep();grepl();regexpr()}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.41}}\raggedright Pattern matching\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.27}}\raggedright \texttt{grepl(pattern=\textquotesingle{}lm\textquotesingle{},x=\textquotesingle{}alma\textquotesingle{})}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.32}}\raggedright \texttt{sub();gsub()}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.41}}\raggedright Pattern matching and 
replacement\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 2\tabcolsep) * \real{0.27}}\raggedright \texttt{gsub(\textquotesingle{}lm\textquotesingle{},repl=\textquotesingle{}nyj\textquotesingle{},x=\textquotesingle{}alma\textquotesingle{})}\strut \end{minipage}\tabularnewline \bottomrule \end{longtable} \hypertarget{base-r-statistical-functions}{% \section{Base R Statistical Functions}\label{base-r-statistical-functions}} \begin{longtable}[]{@{}llll@{}} \caption{\label{tab:statfuggvenyek2}Base R Statistical Functions}\tabularnewline \toprule Function & Description & Example & Value of Example\tabularnewline \midrule \endfirsthead \toprule Function & Description & Example & Value of Example\tabularnewline \midrule \endhead {max(x)} & The largest value of \texttt{x} & {max(1:10)} & {10}\tabularnewline {min(x)} & The smallest value of \texttt{x} & {min(11:20)} & {11}\tabularnewline {sum(x)} & The sum of all the values of \texttt{x} & {sum(1:5)} & {15}\tabularnewline {prod(x)} & The product of all the values of \texttt{x} & {prod(1:5)} & {120}\tabularnewline {mean(x)} & Mean of \texttt{x} & {mean(1:10)} & {5.5}\tabularnewline {median(x)} & Median of \texttt{x} & {median(1:10)} & {5.5}\tabularnewline {range(x)} & The minimum and the maximum & {range(1:10)} & {1 10}\tabularnewline {sd(x)} & Standard deviation of x & {sd(1:10)} & {3.03}\tabularnewline {var(x)} & Variance of \texttt{x} & {var(1:10)} & {9.17}\tabularnewline {cor(x,y)} & Correlation between \texttt{x} and \texttt{y} & {cor(1:10,11:20)} & {1}\tabularnewline \bottomrule \end{longtable} \hypertarget{regular-sequences}{% \section{Regular sequences}\label{regular-sequences}} \begin{longtable}[]{@{}llll@{}} \toprule \begin{minipage}[b]{(\columnwidth - 3\tabcolsep) * \real{0.21}}\raggedright Function\strut \end{minipage} & \begin{minipage}[b]{(\columnwidth - 3\tabcolsep) * \real{0.42}}\raggedright Description\strut \end{minipage} & \begin{minipage}[b]{(\columnwidth - 3\tabcolsep) * \real{0.18}}\raggedright Example\strut \end{minipage} & \begin{minipage}[b]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\raggedright Value of Example\strut \end{minipage}\tabularnewline \midrule \endhead \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.21}}\raggedright \texttt{from:to}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.42}}\raggedright generates a sequence from \texttt{from=} to \texttt{to=} in steps of 1 or -1\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.18}}\raggedright \texttt{1:5}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\raggedright \texttt{1\ 2\ 3\ 4\ 5}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.21}}\raggedright \texttt{seq(from,\ to,\ by,\ length.out)}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.42}}\raggedright generate regular sequences\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.18}}\raggedright \texttt{seq(from=2,\ to=10,\ by=2)}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\raggedright \texttt{2\ \ 4\ \ 6\ \ 8\ 10}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.21}}\raggedright \texttt{rep(x,\ times,\ each)}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.42}}\raggedright replicate elements of vectors\strut \end{minipage} & 
\begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.18}}\raggedright \texttt{rep(x=0,\ times=3)}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\raggedright \texttt{0\ 0\ 0}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.21}}\raggedright \texttt{paste(sep,\ collapse)}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.42}}\raggedright concatenate vectors\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.18}}\raggedright \texttt{paste("No",\ 1:3,\ sep="\_")}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 3\tabcolsep) * \real{0.19}}\raggedright \texttt{"No\_1"\ "No\_2"\ "No\_3"}\strut \end{minipage}\tabularnewline \bottomrule \end{longtable} \hypertarget{subsetting-1}{% \section{Subsetting}\label{subsetting-1}} \begin{longtable}[]{@{}ll@{}} \toprule \begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright Data structure\strut \end{minipage} & \begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.56}}\raggedright Example\strut \end{minipage}\tabularnewline \midrule \endhead \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright \begin{itemize} \tightlist \item Vector \item Factor \item List \item Data frame \end{itemize}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.56}}\raggedright \begin{itemize} \tightlist \item \texttt{x{[}3{]}} \item \texttt{x{[}1:3{]}} \item \texttt{x{[}c(2,\ 3,\ 1){]}} \item \texttt{x{[}-2{]}} \item \texttt{x{[}-c(1,\ 2){]}} \item \texttt{x{[}"Jane"{]}} \item \texttt{x{[}c("Jane",\ "Mark"){]}} \item \texttt{x{[}c(T,\ F,\ T,\ T){]}} \item \texttt{x{[}{[}2{]}{]}} \item \texttt{x{[}{[}"Jane"{]}{]}} \end{itemize}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright \begin{itemize} \tightlist \item Matrix \item Data frame \end{itemize}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.56}}\raggedright \begin{itemize} \tightlist \item \texttt{x{[}1,\ 2{]}} \item \texttt{x{[},\ 2{]}} \item \texttt{x{[}2:4,\ {]}} \item \texttt{x{[}c(2,\ 3,\ 1),\ c("name",\ "sport"){]}} \item \texttt{x{[}c("Jane",\ "Mark"),\ c(T,\ F,\ T){]}} \end{itemize}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright \begin{itemize} \tightlist \item Array (3D) \end{itemize}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.56}}\raggedright \begin{itemize} \tightlist \item \texttt{x{[}1:3,\ c(2,1),\ 2:3{]}} \end{itemize}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright \begin{itemize} \tightlist \item List, Data frame \end{itemize}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.56}}\raggedright \begin{itemize} \tightlist \item \texttt{x\$name} \end{itemize}\strut \end{minipage}\tabularnewline \bottomrule \end{longtable} \hypertarget{packages}{% \section{Packages}\label{packages}} \begin{longtable}[]{@{}ll@{}} \toprule \begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright Operation\strut \end{minipage} & \begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.53}}\raggedright Example\strut \end{minipage}\tabularnewline \midrule \endhead \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright Install a package from CRAN\strut \end{minipage} & 
\begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.53}}\raggedright \texttt{install.packages("package\_name")}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright Load a package\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.53}}\raggedright \texttt{library(package\_name)}\strut \end{minipage}\tabularnewline \bottomrule \end{longtable} \hypertarget{readingwriting-data-files}{% \section{Reading/Writing data files}\label{readingwriting-data-files}} \begin{longtable}[]{@{}ll@{}} \toprule \begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.30}}\raggedright Operation\strut \end{minipage} & \begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.70}}\raggedright Example\strut \end{minipage}\tabularnewline \midrule \endhead \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.30}}\raggedright Import text files\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.70}}\raggedright \texttt{read.table(file,\ sep,\ dec,\ header,\ fileEncoding)}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.30}}\raggedright Import Excel or SPSS files\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.70}}\raggedright \texttt{rio::import(file)}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.30}}\raggedright Export text files\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.70}}\raggedright \texttt{write.table(x,\ file,\ sep,\ dec,\ row.names,\ quote,\ fileEncoding)}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.30}}\raggedright Export Excel or SPSS files\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.70}}\raggedright \texttt{rio::export(x,\ file)}\strut \end{minipage}\tabularnewline \bottomrule \end{longtable} \hypertarget{filter}{% \section{Filter}\label{filter}} \begin{longtable}[]{@{}ll@{}} \toprule \begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright Data structure\strut \end{minipage} & \begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.58}}\raggedright Example\strut \end{minipage}\tabularnewline \midrule \endhead \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright \begin{itemize} \tightlist \item Vector \end{itemize}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.58}}\raggedright \begin{itemize} \tightlist \item \texttt{x{[}x\ \textless{}\ 2{]}} \item \texttt{x{[}x\ ==\ "Jane"{]}} \item \texttt{x{[}x\ ==\ "Jane"\ \textbar{}\ x\ ==\ "Mark"{]}} \end{itemize}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright \begin{itemize} \tightlist \item Data frame \end{itemize}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.58}}\raggedright \begin{itemize} \tightlist \item \texttt{x{[}x\$v1\ \textless{}\ 2,\ {]}} \item \texttt{x{[}x\$v2\ ==\ "Jane",\ {]}} \item \texttt{x{[}x\$v2\ ==\ "Jane"\ \textbar{}\ x\$v2\ ==\ "Mark",\ {]}} \end{itemize}\strut \end{minipage}\tabularnewline \bottomrule \end{longtable} \hypertarget{sort}{% \section{Sort}\label{sort}} \begin{longtable}[]{@{}ll@{}} \toprule \begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright Data structure\strut \end{minipage} & \begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * 
\real{0.56}}\raggedright Example\strut \end{minipage}\tabularnewline \midrule \endhead \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright \begin{itemize} \tightlist \item Vector \end{itemize}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.56}}\raggedright \begin{itemize} \tightlist \item \texttt{sort(x)} \item \texttt{sort(x,\ decreasing=T)} \end{itemize}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright \begin{itemize} \tightlist \item Factor \end{itemize}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.56}}\raggedright \begin{itemize} \tightlist \item \texttt{sort(table(x))} \item \texttt{sort(table(x),\ decreasing=T)} \end{itemize}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright \begin{itemize} \tightlist \item Data frame \end{itemize}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.56}}\raggedright \begin{itemize} \tightlist \item \texttt{x{[}order(x\$name),\ {]}} \item \texttt{x{[}order(x\$name,\ decreasing=T),\ {]}} \end{itemize}\strut \end{minipage}\tabularnewline \bottomrule \end{longtable} \hypertarget{data-type-conversion}{% \section{Data type conversion}\label{data-type-conversion}} \begin{longtable}[]{@{}ll@{}} \toprule \begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright Conversion\strut \end{minipage} & \begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.46}}\raggedright Example\strut \end{minipage}\tabularnewline \midrule \endhead \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright \begin{itemize} \tightlist \item Numeric vector to factor \item Character vector to factor \end{itemize}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.46}}\raggedright \begin{itemize} \tightlist \item \texttt{factor(x)} \end{itemize}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright \begin{itemize} \tightlist \item Character to numeric \end{itemize}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.46}}\raggedright \begin{itemize} \tightlist \item \texttt{as.numeric(x)} \end{itemize}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright \begin{itemize} \tightlist \item Factor to character \end{itemize}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.46}}\raggedright \begin{itemize} \tightlist \item \texttt{as.character(x)} \end{itemize}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright \begin{itemize} \tightlist \item Factor to numeric \end{itemize}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.46}}\raggedright \begin{itemize} \tightlist \item \texttt{as.numeric(as.character(x))} \end{itemize}\strut \end{minipage}\tabularnewline \bottomrule \end{longtable} \hypertarget{transformation-1}{% \section{Transformation}\label{transformation-1}} \begin{longtable}[]{@{}ll@{}} \toprule \begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright Transformation\strut \end{minipage} & \begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.46}}\raggedright Example\strut \end{minipage}\tabularnewline \midrule \endhead \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * 
\real{0.42}}\raggedright \begin{itemize} \tightlist \item Numeric to factor \end{itemize}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.46}}\raggedright \begin{itemize} \tightlist \item \texttt{cut(x,\ breaks,\ labels)} \end{itemize}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright \begin{itemize} \tightlist \item Factor to factor \end{itemize}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.46}}\raggedright \begin{itemize} \tightlist \item \texttt{car::recode(x,\ \textquotesingle{}\textquotesingle{})} \end{itemize}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright \begin{itemize} \tightlist \item Numeric to numeric \end{itemize}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.46}}\raggedright \begin{itemize} \tightlist \item mathematical functions \begin{itemize} \tightlist \item \texttt{round()}, \texttt{log()} \item \texttt{exp()}, \texttt{sin()}, etc. \end{itemize} \item operators \begin{itemize} \tightlist \item \texttt{+},\texttt{-}, \texttt{/}, \texttt{*}, etc. \end{itemize} \end{itemize}\strut \end{minipage}\tabularnewline \bottomrule \end{longtable} \hypertarget{descriptive-statistics-1}{% \section{Descriptive statistics}\label{descriptive-statistics-1}} \begin{longtable}[]{@{}ll@{}} \toprule \begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.35}}\raggedright Descriptive statistics\strut \end{minipage} & \begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.47}}\raggedright Example\strut \end{minipage}\tabularnewline \midrule \endhead \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.35}}\raggedright \begin{itemize} \tightlist \item Measurements \end{itemize}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.47}}\raggedright \begin{itemize} \tightlist \item \texttt{psych::describe()} \item \texttt{psych::describeBy()} \item \texttt{DescTools::Desc()} \end{itemize}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.35}}\raggedright \begin{itemize} \tightlist \item Tables \end{itemize}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.47}}\raggedright \begin{itemize} \tightlist \item \texttt{table(useNA="ifany")} \item \texttt{DescTools::Desc()} \end{itemize}\strut \end{minipage}\tabularnewline \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.35}}\raggedright \begin{itemize} \tightlist \item Plots \end{itemize}\strut \end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.47}}\raggedright \begin{itemize} \tightlist \item Traditional graphics \begin{itemize} \tightlist \item \texttt{hist()} \item \texttt{boxplot()} \item \texttt{stripchart()} \item \texttt{plot()} \item \texttt{barplot()} \end{itemize} \item ggplot2 graphics \texttt{ggplot()\ +} \begin{itemize} \tightlist \item \texttt{geom\_histogram()} \item \texttt{geom\_boxplot()} \item \texttt{geom\_jitter()} \item \texttt{geom\_point()} \item \texttt{geom\_bar()} \end{itemize} \end{itemize}\strut \end{minipage}\tabularnewline \bottomrule \end{longtable} \hypertarget{terminology-1}{% \chapter{Terminology}\label{terminology-1}} \hypertarget{terms-in-statistics}{% \section{Terms in Statistics}\label{terms-in-statistics}} \begin{description} \item[Bar chart] A graph used to display summary statistics such as the \emph{mean} (in the case of a 
\emph{scale variable}) or the \emph{frequency} (in the case of a
\emph{nominal variable}).
\item[Boxplot]
a visual representation of data that shows central tendency (usually the
median) and spread (usually the interquartile range) of a numeric
variable for one or more groups; boxplots are often used to compare the
distribution of a continuous variable across several groups
\item[Case / Observation]
A case is the unit of analysis; one person or other entity. In
psychology, this is normally the data deriving from a single
participant. In some research, the cases will not be people. For
example, we may be interested in the average academic attainment for
pupils from different schools. Here, the cases would be the schools. In
R, a single row of data in a data frame represents a case.
\item[Categorical variable]
variable measured in categories; there are two types of categorical
variables: ordinal variables have categories with a logical order
(e.g., Likert scales), while nominal variables have categories with no
logical order (e.g., religious affiliation)
\item[Data]
A set of values. A data set is typically made up of a number of
\emph{variables}. In quantitative research, data are numeric.
\item[Descriptive statistics]
Procedures that allow you to describe data by summarising, displaying or
illustrating them. Often used as a general term for summary descriptive
statistics: \emph{measures of central tendency} and \emph{measures of
dispersion}. Graphs are descriptive statistics used to illustrate the
data.
\item[Frequency/ies]
The number of times a particular value of a variable occurs.
\item[Histogram]
a visual display of data used to examine the distribution of a numeric
variable
\item[Line graph]
a visual display of data often used to examine the relationship between
two continuous variables or for something measured over time
\item[Missing values]
A data set may be incomplete, for example, if some observations or
measurements failed or if participants didn't respond to some questions.
It is important to distinguish these missing data points from valid
data. Missing values are the values R has reserved for each variable to
indicate that a data point is missing. These missing values can either
be specified by the user (user missing) or automatically set by R
(\texttt{NA}).
\item[Nominal data]
Data collected at a level of measurement that yields nominal data
(nominal just means `named'), also referred to as `categorical data',
where the value does not imply anything other than a label; for example,
1 = male and 2 = female.
\item[Observation / Case]
An observation is the unit of analysis; one person or other entity. In
psychology, this is normally the data deriving from a single
participant. In some research, the cases will not be people. For
example, we may be interested in the average academic attainment for
pupils from different schools. Here, the observations would be the
schools. In R, a single row of data in a data frame represents an
observation.
\item[Participant]
People who take part in an experiment or research study. Previously, the
word `subject' was used, and still is in many statistics books.
\item[Population]
The total set of all possible scores for a particular variable.
\item[Quantitative data]
Is used to describe numeric data measured on any of the four levels of
measurement. Sometimes though, the term `qualitative data' is then used
to describe data measured with nominal scales.
\item[Sample]
A subset of observations from some \emph{population} that is often
analyzed to learn about the population sampled.
\item[Scatterplot]
a graph that shows one dot for each observation in the data set
\item[Summary statistics]
used to provide an overview of the characteristics of a sample; this
typically includes measures of central tendency and spread for numeric
variables and the frequencies and percentages of categorical variables
\item[Statistics]
A general term for procedures for summarising or displaying data
(\emph{descriptive statistics}) and for analysing data
(\emph{inferential statistical tests}).
\item[Variable]
a measured characteristic of some entity (e.g., income, years of
education, sex, height, blood pressure, smoking status, etc.); A
variable in R is represented by a column in a data frame.
\end{description}

\hypertarget{terms-in-r}{%
\section{Terms in R}\label{terms-in-r}}

\begin{description}
\item[Argument]
information input into a function that controls how the function behaves
\item[Assigning]
assigning a value to an object is done by using a left-arrow
(\texttt{\textless{}-}), with the arrow separating the name of the
object on the left from the expression itself on the right:
\texttt{object\_name\ \textless{}-\ expression}
\item[Character]
a basic data type in R that comprises things that cannot be used in
mathematical operations; often, character variables are names,
addresses, zip codes, or other similar values
\item[Comment]
Statements included in code but not analyzed; in R, a comment is denoted
by the hash symbol (\texttt{\#}) and is often used to clarify the code
\item[Constants]
Constants, as the name suggests, are entities whose value cannot be
altered. Basic types of constant are double constants, integer
constants, logical constants and character constants.
\item[csv]
a file extension indicating that the file contains comma separated
values or semicolon separated values
\item[Data frame]
an object type in R that holds data with values in rows and columns with
rows treated as observations and columns treated as variables
\item[Data management]
the procedures used to prepare the data for analysis; data management
often includes recoding variables, ensuring that missing values are
treated properly, checking and fixing data types, and other
data-cleaning procedures
\item[Data types]
in R, these include numeric (double, integer), character, logical; the
data type suggests how a variable was measured and recorded or recoded,
and different analytic strategies are used to manage and analyze
different variable types
\item[Expression]
An expression is an instruction to perform a particular task. An
expression is any sequence of R constants, object names, operators,
function calls, and parentheses. An expression has a type as well as a
value.
\item[Factor]
A categorical variable and its value labels. Value labels may be nothing
more than ``1,'' ``2,''\ldots, if not assigned explicitly. More
formally, a type of object that represents a categorical variable. It
stores its labels in its levels attribute.
\item[Function]
a set of machine-readable instructions to perform a task in R; often,
the task is to conduct some sort of data management or analysis, but
there are also functions that exist just for fun.
\item[Index]
The order number of a variable in a data set or the subscript of a value
in an object. The number of the component in a list or data frame, or of
an element in a vector.
\item[Integer]
a similar data type to numeric, but containing only whole numbers
\item[Length]
The number of observations/cases in a variable, including missing
values, or the number of variables in a data set. For vectors, it is the
number of its elements (including NAs). For lists or data frames, it is
the number of its components.
\item[Levels]
The values that a categorical variable can have. Actually stored as a
part of the factor itself in what appears to be a very short character
variable (even when the values themselves are numbers).
\item[List]
A set of objects of any class. Can contain vectors, data frames,
matrices and even other lists.
\item[Matrix]
A data set that must contain only one type of variable, e.g.~all numeric
or character. More formally, a two-dimensional array; that is, a vector
with a dim attribute of length 2. Information, or data elements, stored
in a rectangular format with rows and columns.
\item[NA]
the R placeholder for missing values, often translated as ``not
available.''
\item[NaN]
A missing value. Stands for Not a Number. Something that is undefined
mathematically such as zero divided by zero.
\item[NULL]
An object you can use to drop variables or values. E.g.
\texttt{mydata\$x\ \textless{}-\ NULL} drops the variable \texttt{x}
from the data set \texttt{mydata}. Assigning it to an object deletes it.
\item[Numeric]
A variable that contains only numbers. This can be double or integer.
\item[Object]
information stored in R; data analysis and data management are then
performed on these stored objects. Includes data frames, vectors,
factors, matrices, arrays, lists and functions.
\item[Operators]
An operator is a symbol that tells the compiler to perform specific
mathematical, logical, or other manipulations. The R language is rich in
built-in operators and provides the following types of operators:
Arithmetic Operators, Relational Operators, Logical Operators,
Assignment Operators, Miscellaneous Operators.
\item[Package]
a collection of functions and datasets for use in R that usually has a
specific purpose, such as conducting partial correlation analyses
(\textbf{ppcor} package)
\item[Precedence of operations]
the order in which mathematical operations should be performed when
solving an equation: parentheses, exponents, multiplication, division,
addition, and subtraction (PEMDAS)
\item[Recycling rules]
If one tries to add two structures with a different number of elements,
then the shortest is recycled to the length of the longest. That is, if
for instance you add \texttt{c(1,\ 2,\ 3)} to a six-element vector then
you will really add \texttt{c(1,\ 2,\ 3,\ 1,\ 2,\ 3)}. If the length of
the longer vector is not a multiple of the shorter one, a warning is
given.
\item[RMarkdown file]
RMarkdown provides an authoring framework for data science. You can use
a single R Markdown file to both 1) save and execute code and 2)
generate high-quality reports that can be shared with an audience.
\item[sav]
the file extension for a data file saved in a format for the Statistical
Package for the Social Sciences (SPSS) statistical software
\item[Script file]
a text file in R similar to something written in the Notepad text editor
on a Windows computer or the TextEdit text editor on a Mac computer; it
is saved with a \texttt{.R} file extension
\item[Vector]
Vectors are one-dimensional and homogeneous data structures. They can
exist on their own in memory or be part of a data frame. More formally,
a set of values that have the same base type.
A vector can be a vector of characters, logicals, integers or doubles.
\item[Working directory] R uses a working directory, where R will look, by default, for files you ask it to load. It is also where, by default, any files you write to disk will go.
\item[Workspace] A temporary work area in which all R computation happens. Data that exists there will vanish if not saved to your hard drive before quitting R. More formally, the area of your computer's main memory where R does all its work. Data must be loaded into it from files, and packages must be loaded into it from the library, before you can use either.
\end{description}

\hypertarget{terms-in-statistics-and-r}{%
\section{Terms in Statistics and R}\label{terms-in-statistics-and-r}}

\begin{longtable}[]{@{}ll@{}}
\caption{Terms in statistics and R}\tabularnewline
\toprule
\begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright
Terms in Statistics\strut
\end{minipage} & \begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright
Terms in R\strut
\end{minipage}\tabularnewline
\midrule
\endfirsthead
\toprule
\begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright
Terms in Statistics\strut
\end{minipage} & \begin{minipage}[b]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright
Terms in R\strut
\end{minipage}\tabularnewline
\midrule
\endhead
\begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright
\begin{itemize}
\tightlist
\item dataset
\item sample
\end{itemize}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright
\begin{itemize}
\tightlist
\item data frame
\end{itemize}\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright
\begin{itemize}
\tightlist
\item observation
\end{itemize}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright
\begin{itemize}
\tightlist
\item rows in a data frame
\end{itemize}\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright
\begin{itemize}
\tightlist
\item variable
\end{itemize}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright
\begin{itemize}
\tightlist
\item columns in a data frame
\end{itemize}\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright
\begin{itemize}
\tightlist
\item categorical variable
\item qualitative variable
  \begin{itemize}
  \tightlist
  \item nominal variable
  \item ordinal variable
  \end{itemize}
\end{itemize}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright
\begin{itemize}
\tightlist
\item factor
\end{itemize}\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright
\begin{itemize}
\tightlist
\item numeric variable
\item quantitative variable
  \begin{itemize}
  \tightlist
  \item continuous variable
  \item discrete variable
  \end{itemize}
\end{itemize}\strut
\end{minipage} & \begin{minipage}[t]{(\columnwidth - 1\tabcolsep) * \real{0.42}}\raggedright
\begin{itemize}
\tightlist
\item numeric vector
  \begin{itemize}
  \tightlist
  \item double vector
  \item integer vector
  \end{itemize}
\end{itemize}\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}

\hypertarget{miscellaneous}{%
\chapter{Miscellaneous}\label{miscellaneous}}

\hypertarget{rules-of-using-r}{%
\section{Rules of using R}\label{rules-of-using-r}}
\begin{itemize}
\tightlist
\item use \emph{RStudio}
\item use \emph{RStudio} in a project-oriented environment
\item use \emph{RMarkdown} files in \emph{RStudio}
\item use as many comments as possible
\end{itemize}

\hypertarget{good-to-know}{%
\section{Good to know}\label{good-to-know}}

\begin{itemize}
\tightlist
\item R is case sensitive: \texttt{Apple} and \texttt{apple} are different objects.
\item Use a semicolon to put two or more commands on a single line: \texttt{a\ \textless{}-\ 2+2;\ a}
\item Force R to print the value of an expression by using parentheses: \texttt{(a\ \textless{}-\ 2+2)}
\end{itemize}

\hypertarget{why-is-my-code-broken}{%
\section{Why is my code broken?}\label{why-is-my-code-broken}}

\begin{itemize}
\tightlist
\item Are all your parentheses in the right places?
\item Do you have commas where you should?
\item How's your capitalization?
\item What about the continuation prompt?
\item Did you load the package you're trying to use?
\item If none of these fix your problem, try googling the error message R gives you. There's usually a good StackOverflow question on whatever you're trying to accomplish.
\end{itemize}

\hypertarget{other-resources}{%
\section{Other resources}\label{other-resources}}

\begin{itemize}
\tightlist
\item \href{https://www.rstudio.com/resources/cheatsheets/}{R Cheatsheets} contain information-dense infographics for many of the packages we've used in this course, and plenty of other useful tools you may need in your own work.
\item \href{https://abarik.github.io/roforrasok/}{My collection (in Hungarian)}
\end{itemize}

\bibliography{book.bib,packages.bib}
\end{document}
% Title: What it takes to reach the goal % Author: Ajahn Piak \chapterAuthor{Ajahn Piak} \chapterNote{An interview with a senior disciple of Ajahn Chah conducted by\linebreak Ajahn Chandako.} \chapter{What it Takes to Reach the Goal} \tocChapterNote{By \theChapterAuthor. An interview with a senior disciple of Ajahn Chah conducted by Ajahn Chandako.} \markright{\theChapterAuthor} During the first few years of his monastic career, a young monk's training is divided between Wat Pah Nanachat and other branch monasteries of Wat Pah Pong. One of the disciples of Ajahn Chah who has helped to train Wat Pah Nanachat monks is Tan Ajahn Piak, abbot of Wat Pah Cittabhāvanā, a branch monastery situated to the north of Bangkok. The following conversation with Tan Chandako took place in 1998. \emph{Tan Ajahn Piak}: The \emph{Kruba Ajahns} rarely say anything directly about Nibbāna because it is beyond a normal person's realm of possible experience. Even if the people listening believe the explanation, it still doesn't actually help them much, and if they don't believe it they may make a lot of bad \emph{kamma} for themselves. So the \emph{Kruba Ajahns} usually refer to it using metaphors or refuse to speak of it at all, only teaching the path to get there. The important thing is to keep going straight without stopping. For example, say you want to go to Fa Kram Village over there; if you follow the path and keep walking you'll get there in a short time. If you stop to take a look at something and then chat with people, then go off with them to see something else, it will take a long time before you reach Fa Kram, if ever. But the reality is that almost everybody gets sidetracked or at least stuck in \emph{samādhi}, thinking that they've arrived already. Even Luang Por Chah was stuck for a while; Tan Ajahn Mahā Boowa for six years; Ajahn Tate for ten years; Ajahn Sot (Wat Pak Nam) for twenty years. \emph{Tan Chandako}: Because to all intents and purposes it appears to be full enlightenment? \emph{Tan Ajahn Piak}: Yes. There seem to be no \emph{kilesas} whatsoever. Everything is clear. Many people don't make it past this stage. Other people practice for five Rains Retreats, ten Rains Retreats, and still feel they haven't made much progress and get discouraged. But one has to keep in mind that it is always only a very few people who have the \emph{pāramī} to reach the goal. Compare it with the US President or the Thai King. Out of an entire nation of millions of people, only one person at a time has the \emph{pāramī} to be in the top position. You have to think in terms of what you are going to do to set yourself above the crowd, creating the causes and conditions for future liberation. Effort in the practice is what makes the difference. There are thousands of monks in Thailand who ordain with the sincere intention of realizing Nibbāna. What sets people apart, why some succeed while others don't, is mainly due to their level of effort, as well as the effort they've put forth in the past. A person has to train himself to the point where it becomes an ingrained character trait to be continuously putting forth effort, whether he's around other people or alone. Some people are very diligent as long as there is a teacher or other monks watching, but as soon as they're alone their effort slackens. When I was a young monk and my body was strong, I'd stay up later than everyone else walking \emph{jongrom} and see the candles in the other \emph{kuṭīs} go out one by one. 
Then I'd get up before the others and watch the candles gradually being lit. It wasn't that I had it easy. The \emph{kilesas} in my heart were always trying to convince me to take a rest: `Everyone else has crashed out. Why shouldn't you do the same?' The two voices in my head would argue: `You're tired. You need a rest. You're too sleepy to practice.' `What are you going to do to overcome sleepiness? Keep going.' Sometimes the \emph{kilesas} would win, but then I'd start again and eventually they weakened. \emph{Tan Chandako}: It's often when \emph{samādhi} or \emph{vipassanā} has been going well that \emph{kilesas} seem to arise the most. At such times it seems I've got more \emph{kilesas} than ever. Is that normal? \emph{Tan Ajahn Piak}: Very normal. The average person has a huge amount of \emph{kilesas}. Just to recognize that one has a lot of \emph{kilesas} is already a big step. Even the \emph{sotāpanna} has many \emph{kilesas} to become free from, much work to be done. Even at that stage it's not as if everything is \emph{sabai}. It's as if there is a vast reservoir of \emph{kilesas} below us which gradually come to the surface, and it's not easy to know how much is remaining. Just when you think you've fully gone beyond a particular \emph{kilesa}, it will arise again. This happens over and over. The only thing to do is to keep using \emph{paññā} to keep pace with the \emph{kilesas}, meet and let go of them as they arise in the present. \emph{Tan Chandako}: Have you ever met or heard of anyone who has attained \emph{magga-phala} by only contemplating and not practising \emph{samādhi}? \emph{Tan Ajahn Piak}: No, if you want a straight answer. \emph{Samādhi} is essential for the mind to have enough power to cut thoroughly through the \emph{kilesas}. However, if one is practising \emph{vipassanā} with the understanding and intention that it will lead to the development of \emph{samādhi} at a later stage, this is a valid way to go about it. The character of almost all meditation monks, both Thais and those born in Western countries, is such that they need to use a lot of \emph{paññā} right from the very beginning in order to gradually make their minds peaceful enough to be able to develop \emph{samādhi}. Only a very small percentage of Thais, and possibly no Westerners, are the type to develop \emph{samādhi} fully before beginning \emph{vipassanā}. \emph{Tan Chandako}: Can it be said how deep and strong \emph{samādhi} must be in order to attain \emph{magga-phala}? \emph{Tan Ajahn Piak}: It must be strong enough to be still and unified as one, without any thinking whatsoever. There will still be awareness -- knowing what one is experiencing. \emph{Tan Chandako}: According to whether one is in a remote location or in a busy monastery, should one's Dhamma practice change or remain the same? \emph{Tan Ajahn Piak}: Dhamma practice takes on a different character if you are in the city or are busy with duties in a monastery. In the forest there are few external distractions and it is easy to make the mind peaceful. If you have many sense contacts and dealings with other people, it is essential to figure out how not to pick up other people's emotional vibes (\emph{arom}). Otherwise what happens is that the people around us feel lighter, while we feel heavier and heavier. It's necessary to be able to completely drop mental engagement as soon as interactions with other people have finished. 
Otherwise all the conversations and emotions of the day are floating around in the \emph{citta} when one goes to sit in meditation. It's easy to say, `Just be mindful' and `Don't pick up other people's baggage', but it is very difficult to do. Luang Por Chah could take on the problems and sufferings of others without picking up any of them himself, because his \emph{citta} was very strong. The people around him didn't know what was happening. They just knew that they felt cool and happy around Luang Por. But this is not a practice for beginners. Most people just get burned out. Practising in the forest is easier, and I recommend that you should try as much as possible not to get involved with too many responsibilities, especially being an abbot. If someone tries to tell you that you are selfish and should be helping others, reflect that this is due in large part to the conditioning from Western society. If the Buddha had thought that way, we never would have had a Buddha. In order to put your mind at rest, reflect on the goodness you've done and rejoice in the \emph{pāramī} that you're creating. Those who try to help others too much before they've helped themselves will never be able to teach or help beyond the superficial. If their teachings mislead others due to their own ignorance, they can make a lot of negative \emph{kamma}. Many of the Wat Pah Pong monks try to emulate Luang Por in his later years, when he would talk with people all day, rather than his early years of difficult practice. But it was precisely those years in the forest that made Luang Por into the great teacher that he was. \emph{Tan Chandako}: Have you ever heard of anyone attaining \emph{magga-phala} by any means other than analyzing the body into its component parts and elements? \emph{Tan Ajahn Piak}: No. At the very least, when the \emph{citta} is clearly known as \emph{anattā}, the knowing mind will return to knowing the body thoroughly as \emph{anattā} as well. \emph{Tan Chandako}: In one of Luang Por Chah's Dhamma talks he says that even for \emph{arahants} there are still \emph{kilesas}, but like a bead of water rolling off a lotus petal: nothing sticks. How do you understand this? \emph{Tan Ajahn Piak}: Luang Por liked to use language in unconventional ways in order to get people's attention and make them think. What he was referring to was the body -- the result of previous \emph{kamma} -- but the \emph{citta} was completely devoid of \emph{kilesas}. Normally people use other terms to refer to the body and the physical \emph{dukkha} of an \emph{arahant}, but Luang Por was quite creative in his use of the convention of language. \emph{Tan Chandako}: I've heard that while still a student, before you'd met Luang Por Chah, you had a vision of him. \emph{Tan Ajahn Piak}: That's right. I'd intended to return, [to New York, to finish a master's degree in business management] but soon after I'd begun to meditate I had a clear vision of a monk whom I didn't recognize, chewing betel nut. I went to see many of the famous \emph{Kruba Ajahns} at that time -- Luang Por Fun, Luang Por Waen -- but when I met Luang Por Chah I recognized him from the vision and figured that he would be my teacher. When I began to consider ordaining instead of completing my studies, my family tried hard to dissuade me, but I found meditation so peaceful that everything else felt like \emph{dukkha}. \dividerRule \section{The Authors} Tan Ajahn Piak still lives in his monastery to the north of Bangkok. 
The fields that once surrounded it are long gone, and the Bangkok suburban sprawl has now engulfed Wat Pah Cittabhāvanā. The 2011 flooding saw the monastery submerged under a couple of metres of water. However, Ajahn Piak still provides a refuge for those seeking the Buddha's path. His reputation as a meditation teacher has grown, and his emphasis on combining the cultivation of \emph{samādhi} with staying up all night brings many people to practice under him. Despite poor health, he has begun travelling and teaching abroad in recent years, most notably in Malaysia, Singapore, Australia and New Zealand.

Tan Chandako carried on training in Thailand under various teachers, and also spent periods of time in Perth, living in Bodhiñāṇa Monastery. He spent a year in Wat Pah Nanachat as Vice-Abbot in 2002, before seeking a place to settle down. A Rains Retreat in the Czech Republic led to his return to Australia and finally to Auckland, New Zealand, where in 2004 he was invited by the ABTA (Auckland Theravāda Buddhist Association) to establish a monastic residence on their recently-acquired property not too far from the city. Thus Vimutti Monastery was born, and an extensive programme of tree-planting and construction has been under way since then. Additional land has been purchased to provide something of a buffer zone. As well as his responsibility for running the monastery, Ajahn Chandako provides regular teaching and retreats both at the monastery and in various other parts of New Zealand. Every year he comes to Thailand and visits his home in the US, where he also conducts retreats.
\documentclass[11pt]{article}
\input{preamble.tex}
\usepackage{hyperref}
\hypersetup{backref, linkcolor=blue, citecolor=black, colorlinks=true, hyperindex=true}
\begin{document}
The source for this document is in the doc subdirectory of the otcetera repo \url{https://github.com/OpenTreeOfLife/otcetera/tree/master/doc}.
\begin{center}
{\bf Summarizing a taxonomy and multiple estimates of phylogenetic trees} \\
{Mark T.~Holder$^{1,2,\ast}$. Feel free to contribute and add your name}
\end{center}
\tableofcontents
\section{Background}
The \otol project is attempting to build a platform for summarizing what is known about phylogenetic relationships across all of Life. Presenting an easy-to-interpret summary of trees that have been ``curated'' is one component of that effort. The project has decided that this summary should include a tree which
\begin{compactenum}
\item can be served and browsed;
\item contains annotations indicating which input trees support a particular grouping;
\item has all of the tips of the taxonomy;
\item displays as many of the groups in the input trees as is feasible;
\item may utilize ranking of trees;
\item (tentative - not sure if everyone is on board with this one) has no unsupported groups;\label{noUnsupportedReq}
\item (tentative) does not ``prefer'' lack of resolution (defined more thoroughly below)
\end{compactenum}
This results in a new form of a supertree problem described below as the ``taxonomy-based tree summary'' problem.
\subsection{Taxonomy-based supertree}
This is a novel name for a special form of the supertree problem. A taxonomy-based supertree has at least one input (the taxonomic tree) which is complete.
\subsubsection{Taxonomy-based summary tree}
Many supertree approaches seek to maximize accuracy according to some notion of distance between the true tree and the estimated tree. The summary that we seek has requirements (e.g. \ref{noUnsupportedReq}) which lead to less resolved trees. Such trees may fail to display all of the well-supported (or even uncontested) rooted triples, but such trees also make it easier for users to see the connection between the summary tree and the input trees. Thus, the phrase ``Taxonomy-based summary tree'' is used here to describe a taxonomy-based supertree which tries to maximize some notion of explaining the set of input trees (rather than a supertree designed to display the highest number of groups in the inputs, or some other criterion).
\subsubsection{Taxonomy-based summary of ranked input trees}
We have been pursuing a strategy that uses a ranking of trees. In cases of conflict, the grouping that is compatible with the higher ranked tree is shown in the tree (if it is not contradicted by another grouping from trees of even higher rank). The taxonomic tree is considered to be the lowest ranked input. Applying a ranking to the input trees is biologically questionable (because the importance of an input \ps should be based on the support for that grouping -- it is rarely the case that tree-wide rank would adequately describe the degree of statistical support for that group). Using tree-based ranks also introduces subjectivity into the summary building process. Nevertheless, tree-rank based summaries represent a reasonable starting point because they are easy to explain and the ranking permits some algorithmic simplifications.
\subsection{The Sum of Weighted Input \PSs Displayed Score}
Let $\mathcal{P}$ be a multiset of \pss.
If each member, $i$, in this set is assigned a weight, $w_i$, then the sum of weighted input \pss displayed score, $S_w$, for a tree $T$ is:
\begin{equation}
S_w(T, \mathcal{P}) = \sum_{i\in \mathcal{P}} w_i \displaysPred{T}{i}
\end{equation}
where {\displaysPred{T}{x}} is an indicator function that evaluates to 1 if tree $T$ displays $x$, and to 0 otherwise.
\subsubsection{\MSWIPSD problem}
Trying to find the set of trees that maximize $S_w$ is one natural goal for a summarization procedure. We can call this the ``\MSWIPSD'' problem, and the set of trees that maximize the score is denoted:
\begin{equation}
\mathcal{S}_{w}(\mathcal{P}) = \argmax_T S_w(T,\mathcal{P})
\end{equation}
Tree-based ranking can be viewed as a means of providing weights to \pss. Each tree is converted to a set of \pss, and the tree's weight is assigned to each \ps in the set. The union of the trees' sets becomes the multiset, $\mathcal{P}$, referred to in the definition of the score above.
\href{https://github.com/OpenTreeOfLife/treemachine/wiki/MaxWeightOfInputTreeEdgesDisplayed}{This link} sketches out a proof that, under an extreme form of tree-based ranking, a greedy tree addition strategy can be guaranteed to find the set of trees that solve the \MSWIPSD problem. If the differences in tree ranks are sufficiently large from one tree to the next, then there is no need to consider skipping a \ps from a high ranking tree even if not considering that grouping would result in all of the \pss from the lower ranking trees being displayed. Unfortunately, that proof only applies to an exact algorithm which returns all possible trees that maximize the score. We do not know of a polynomial-time algorithm for solving that problem.
\subsection{Trees without unsupported groups}\label{unsupportedTheory}
It turns out that our ``no unsupported groups'' rule corresponds to:
\begin{compactenum}
\item Finding a tree, $T$, that maximizes the score by displaying as many high-ranking input \pss as possible.
\item Let \pssInOptimalTree denote the set of input \pss displayed by this tree.
\item Let \tripleSetInOptimal denote a set of rooted triples that is sufficient to encode the information in \pssInOptimalTree.
\item Collapse edges in $T$ to create $T^{\ast}$, which is a minor-minimal tree with respect to \tripleSetInOptimal.
\end{compactenum}
\citet{JanssonLL2012} define ``minor-minimal'' from \citet{Semple2003}:
\begin{quote}
``If $T$ is a phylogenetic tree consistent with $\mathcal{R}$ and it is not possible to obtain a tree consistent with $\mathcal{R}$ by contracting an internal edge of $T$, then $T$ is called minor-minimal with respect to $\mathcal{R}$.''
\end{quote}
A minor-minimal tree will display no unsupported groups. Consider an edge connecting parent $\parent{V}$ to its child $V$ in a complete tree $T$. This edge is supported (or ``the node $V$ is supported'') in the sense of the \MSWIPSD score if collapsing the edge leads to a lower (worse) score. For an edge to be supported in this sense, it must display at least one input \ps, and the tree that would be created by collapsing the edge to a polytomy must {\em not} display that \ps.
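To make the display test concrete, the following is a minimal sketch in Python (for illustration only; the names \texttt{displays} and \texttt{swipsd\_score} are invented here and are not part of otcetera or treemachine). It assumes that each input \ps has been reduced to an (ingroup, exclude group) pair of tip-label sets, that trees are encoded as nested tuples of tip labels, and that a tree displays such a statement iff some node's leaf set contains the entire ingroup and none of the exclude group:
\begin{verbatim}
def clusters(tree):
    """Collect the set of tip labels below every node of a nested-tuple tree."""
    found = []
    def walk(node):
        if isinstance(node, tuple):
            leaves = frozenset()
            for child in node:
                leaves |= walk(child)
        else:
            leaves = frozenset([node])
        found.append(leaves)
        return leaves
    walk(tree)
    return found

def displays(tree, ingroup, exclude):
    """True if some cluster of `tree` contains all of `ingroup` and none of `exclude`."""
    ing, exc = frozenset(ingroup), frozenset(exclude)
    return any(ing <= c and not (exc & c) for c in clusters(tree))

def swipsd_score(tree, weighted_statements):
    """Sum the weights of the (ingroup, exclude, weight) statements displayed by `tree`."""
    return sum(w for ing, exc, w in weighted_statements if displays(tree, ing, exc))

T = ((("A", "B"), "C"), ("D", "E"))
statements = [({"A", "B"}, {"C"}, 2.0),   # displayed by T
              ({"A", "C"}, {"B"}, 1.0)]   # not displayed by T
print(swipsd_score(T, statements))        # 2.0
\end{verbatim}
In these terms, the support test is (roughly) whether collapsing the edge between $\parent{V}$ and $V$ would remove the only cluster that allows some input statement to pass the test.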
Equivalently, we can state the conditions for node $V$ (or its subtending edge) being supported by an input \ps derived from node $x$ of tree $t$ as:
\begin{eqnarray}
\leafLabels{V} \cap \leafLabels{t} & = & \leafLabels{x}\\
\left(\leafLabels{\parent{V}} \cap \leafLabels{t}\right) - \leafLabels{x} & \neq &\emptyset\hskip 5em \mbox{and}\\
\leafLabels{c} \cap \leafLabels{t} & \neq & \leafLabels{x} \hskip 2em \forall c \in \children{V}
\end{eqnarray}
where $\children{V}$ is the set of nodes that are children of $V$. The first condition guarantees that the node $V$ displays the \ps made by $x$. The second condition assures that if the edge leading to $V$ were collapsed, the resulting polytomy would not display the \ps. The final condition assures that none of the children of $V$ also display this \ps from $x$; if any child displayed the \ps, then collapsing the edge leading to $V$ would yield a tree that still displays the \ps derived from $x$.
If we demand that a summary tree contains no unsupported groups (in the previous sense of support), we are constraining the set of summary trees such that, for every internal node in the summary tree, there is at least one \ps in the input set that supports it. This restricts the number of edges in the tree: the sum of the numbers of internal nodes in the inputs becomes an upper bound on the number of internal nodes in the summary tree. However, restricting the summary to only contain supported groups does not decrease the number of \pss from the inputs that are displayed. Furthermore, any maximal scoring solution can be converted to a maximal scoring solution without unsupported groups by collapsing unsupported edges one at a time and checking for the existence of further unsupported groups. If $\mathcal{S}_s$ is the subset of $\mathcal{S}_w$ whose members do not contain any unsupported nodes, then $\mathcal{S}_s$ is never empty, but may be much smaller than $\mathcal{S}_w$ because every resolution of a member of $\mathcal{S}_s$ is a member of $\mathcal{S}_w$.
\subsubsection{Minimally resolved phylogenetic supertrees}
\label{minrs}
\citet{JanssonLL2012} define a minimally resolved supertree in the context of a supertree that is consistent with (displays) every member of a set of rooted triplets. They define a minimally resolved supertree (\textsc{MinRS} tree): for ``a set $\mathcal{R}$ of rooted triples with the leaf label set $L$ $\ldots$[the \textsc{MinRS} tree is] a rooted, unordered tree whose leaves are distinctly labeled by $L$ which has as few internal nodes as possible and which is consistent with every rooted triple in $\mathcal{R}$.'' They present a polynomial time algorithm for finding the \textsc{MinRS} tree for pectinate tree shapes. Their general algorithm finds the \textsc{MinRS} tree in $2^{O(n\log p)}$ time, where $n$ is the number of leaves and $p$ is the largest outdegree of any internal node in the output.
\subsubsection{trees that are minor-minimal with respect to a set of triples}
Page 277 of \citet{JanssonLL2012} points out that the concept of minor-minimal trees from \citet{Semple2003} is not the same as minimally resolved trees. It also points out that the BUILD tree is minor-minimal. \textsc{Note: I should have read \citet{JanssonLL2012} further before writing the next paragraph.} Note that every internal node of a \textsc{MinRS} tree will be ``supported'', but the ``no unsupported nodes'' rule is not the same as the \textsc{MinRS} rule.
The ``no unsupported nodes'' rule is more lenient in the sense of admitting more supertrees. Consider the inputs (in terms of rooted triples): $A,B\mid C$ and $D,E\mid F$. The supertree $(((A,B),C,D,E),F)$ is one of several trees which have no unsupported nodes, but the tree $((A,B,D,E),C,F)$ is the only \textsc{MinRS} tree.
\subsubsection{\textsc{MinRS} behaving badly}
Suppose we have taxa $h$uman, $g$orilla, $d$og, $c$at, $f$ugu, $t$una, $s$hark, and $r$ay. Let us suppose that we have two studies. One focuses on mammals and contributes $\{gh|d,cd|h\}$. The other focuses on fish, and contributes $\{ft|r,sr|f\}$. The \textsc{BUILD} tree is then $((h,g),(c,d),(f,t),(s,r))$. It is possible to merge groups to obtain the \textsc{MinRS} tree $((h,g,f,t),(c,d,s,r))$. This tree has fewer nodes, but seems worse. %to me: Ben
This example illustrates the problem of what to do when triplets fall into multiple non-overlapping groups. \citet{JanssonLL2012} mentions the possibility of merging groups in the \textsc{BUILD} tree to obtain a tree with fewer internal nodes. However, it might be preferable to leave groups unmerged if there is no triplet in the input tree to support the merger. Perhaps it would be possible to indicate visually which sister branches can be merged without contradicting an input tree, since such groups could be merged by the addition of a new triplet to the input set.
\subsection{Evaluating how well a single summary tree summarizes a set of summary trees}\label{treeAdmissibility}
As discussed above, we might try to seek the set of trees that contain no unsupported nodes and that maximize the \SWIPSD score. However, one of the requirements is that we return a single tree.
\subsubsection{Number of fully resolved trees that maximize/fail-to-maximize the \SWIPSD score}
One can interpret an unresolved tree as a set of trees - specifically the set of trees that can be produced by resolving the tree. We may be able to formalize a score for a single summary tree, $T$, by considering the set of trees that can be produced by resolving it, calling this set $\mathcal{R}(T)$. Specifically, if our primary summary is a set of trees $\mathcal{S}$, we may want to evaluate $T$ by the number of trees in $\mathcal{S}$ which are not found in $\mathcal{R}(T)$ and the number (or proportion) of trees in $\mathcal{R}(T)$ which are not in $\mathcal{S}$. Both of these statistics would be small if $T$ is a good summary of $\mathcal{S}$; they would be zero if the set of trees to be summarized is identical to the resolutions of the tree. Note that this pair of sets (the ``false negative'' and ``false positive'' sets) are the sets that are computed when calculating the symmetric difference between sets (and related statistics such as the Robinson-Foulds distance in phylogenetics). Given the difficulty of enumerating either $\mathcal{S}$ or $\mathcal{R}(T)$, we may not be able to easily apply this method of scoring trees often. It would also be difficult to figure out an appropriate weighting of false positives vs false negatives. However, there may be cases in which we compare two very similar trees, $A$ and $B$, and it is obvious that $A$ has a lower value than $B$ for one statistic and an equal-or-lower value for the other. Using an analogy to statistics, we would say that $A$ dominates $B$ and that $B$ is an inadmissible summary.
\subsubsection{The number of unsupported triples implied}
Consider the inputs (in terms of rooted triples): $A,B\mid C$ and $D,E\mid F$.
All valid solutions in $\mathcal{S}$ will display both of these triples (because they are compatible). These are the only 2 ``supported'' triplets. One supertree without unsupported nodes, $(((A,B,D),C,E),F)$, implies 16 rooted triplets (14 unsupported triples). One supertree without unsupported nodes, $(((A,B),C,D,E),F)$, implies 13 rooted triplets (11 unsupported triples). The \textsc{MinRS} tree $((A,B,D,E),C,F)$ displays 12 triplets (10 unsupported triplets). The \textsc{BUILD} tree $((A,B),C,(D,E),F)$, which has no unsupported nodes, displays only 8 triplets (6 unsupported). Thus, the final tree might be preferred on the basis of implying the lowest number of unsupported triplets, even though it has a higher number of internal nodes than the \textsc{MinRS} tree.
\subsection{Relationship between a series of trees and a series of \pss}\label{orderPSsTheory}
In terms of the \MSWIPSD problem, the set of input trees can be converted to a multiset of input \pss without altering the solution because no aspect of the scoring system depends on whether the input \pss which are displayed were derived from the same tree. As mentioned above, using a ranking system that very strongly favors the more highly ranked trees simplifies the search for a solution to the \MSWIPSD problem. Thus a ranked list of trees (highest priority to lowest) can be mapped to a list of sets of \pss. A greedy approach that builds up a solution by adding one \ps at a time could be guaranteed to generate the optimal set of summary trees if the input order is correct. The greedy solver would have to accept as many splits as possible, and avoid rejecting a \ps unnecessarily, but it would be greedy in the sense that it does not have to ``look ahead'' or reconsider a split that it has accepted or rejected. Unfortunately, it is not clear how to convert the ranked list of trees to an ordering of the splits. The ranked list of sets of \pss that can be naturally derived from the ranked list of trees only provides a partial order. I need to dig through my notes, but I think that there are cases for which the order of adding subtrees within a tree affects the output. Checking all possible input orders would be one solution. Currently, otcetera and peyotl-based supertree steps just use a postorder traversal (which is arbitrary with respect to the order of sister groups). \NeedsAlgorithmicWork
\subsection{The interpretation of input trees with tips mapped to non-terminal taxa}
Some of the input phylogenetic estimates may have leaves that are not mapped to terminal taxa. The correct biological interpretation of such labels is not clear. Some possible meanings of a leaf in a tree being mapped to a non-terminal taxon, $A$:
\begin{compactenum}
\item $A$ should be a terminal taxon - the reference taxonomy is incorrect.
\item the taxon $A$ is asserted or assumed by the authors of the study to be monophyletic.\label{itmMonophyleticTip}
\item at least one descendant taxon of $A$ occurs at this point in the tree, but it is not known which descendant.\label{itmUnknownTip}
\item the phylogenetic analysis was conducted using a ``chimeric'' set of character data drawn from multiple members of the taxon $A$.
\item a non-extant lineage was sampled and included in the phylogenetic analysis. That tip is thought to be:
\begin{compactenum}
\item the most recent common ancestor of taxon $A$,
\item an ancestor of taxa that are descendants of $A$ (but we don't know which one), OR
\item an extinct taxon that is a member of $A$.
\end{compactenum}
\end{compactenum}
Presumably case \ref{itmUnknownTip} is the most common case in our corpus. Even if case \ref{itmMonophyleticTip} is the case for some tree+leaf combinations, I presume that we would want the source of such phylogenetic claims to be more transparent. Thus, I assume that we do {\em not} want such tips to be interpreted as providing evidence for monophyly of the non-terminal tip.
\subsubsection{expanding non-terminals to the contained terminals}
If the taxon is monophyletic based on other trees in the corpus, these cases are not too problematic. One could simply transform the tip to a polytomy containing all of the terminal taxa that are descendants of the mapped taxon as children of the polytomy. This input representation would imply that the input tree supported the monophyly of the taxon, but that could be rectified by making note of the fact that the polytomy was an expansion of a tip and later suppressing any annotation that claims that the tree supports monophyly. One could also follow this expansion procedure in the case of contested taxa. However, that would presumably entail more care to ensure that the \ps that corresponds to the polytomy does not contribute to the topological decisions during the tree construction.
\subsubsection{expanding non-terminals to the contained terminals attached to the parent of the leaf}\label{expandNonTermPar}
As described above, expanding the non-terminals to their contained terminals requires some bookkeeping to note that the internal node produced should not generate a \ps that is taken to be an input. If we are calculating ``supported by'' statements about the summary tree as a post-processing step (rather than propagating that information at every step of the pipeline), it is sufficient to transform the input tree with a non-terminal taxon mapping to a tree that does not claim monophyly for the non-terminal taxon. This can be done by creating the polytomy of terminal leaves at the parent node of the node that is mapped to a non-terminal taxon (and pruning the ``barren'' leaf that is the remnant of the tip mapped to the non-terminal taxon).
\textbf{BEN: while this doesn't impose monophyly of the non-terminal taxon, it does seem to impose monophyly of non-terminal taxa + sisters. I'm not sure why this is correct. The ``optimizing assignment'' below seems clearly correct.}
\textbf{BEN: after thinking about this, I suppose we could interpret the clade (A,B) in an input tree as stipulating that the groups A and B are jointly monophyletic (so $A \cup B$ is monophyletic). Not that this is bulletproof, but instead clarifying that we could phrase this as an interpretation rule for input trees. By interpreting input trees in this way, we assign the blame for cases where this is wrong to the input trees that contain such statements.}
\textbf{BEN: when the input trees are in fact created under the interpretation that (A,B) means ``there exists an A and a B'' that form a monophyletic group, then this is naturally still problematic. So the problem doesn't go away.}
\subsubsection{optimizing the assignment to a terminal taxon}
Under the unknown tip case (case \ref{itmUnknownTip} above), we could treat the correct assignment of the non-terminal tip to a terminal tip as an unknown, latent variable to be optimized. In other words, we would try to make the assignments in such a way as to maximize the \SWIPSD score.
This sounds like it would lead to a combinatorial explosion in complexity when we have multiple trees that use the same non-terminal taxon in the corpus. So, as far as I know, we have not seriously considered this.
\subsubsection{pruning non-terminal taxon tips}
We have many tips that are pruned from the input trees because they are not correctly mapped to a taxon in the reference taxonomy. We could adopt the unknown tip (case \ref{itmUnknownTip} above) interpretation, and prune these tips. This seems a bit draconian and wasteful - particularly given that the study curation tool does not warn about non-terminal mapping.
\subsubsection{pruning non-terminal taxon tips if the taxon is not monophyletic}
We might view leaves mapped to non-terminal taxa as hopelessly ambiguous whenever the non-terminal taxon is not monophyletic (based on other trees). Thus, we could prune these cases when other trees reject monophyly. This seems more reasonable than unconditionally pruning them, but more difficult to implement. A higher ranked \ps might contest the monophyly of the taxon, but that \ps might be in conflict with even more highly ranked \pss. There may be some clever trick for determining whether a taxon will be monophyletic in the final tree without performing synthesis iteratively.
\subsubsection{pruning non-terminal taxon tips if the taxon is contested}
This is a proposal that is intermediate between the previous 2 proposals. It is easy to test for whether or not a taxon is contested. However, if there are high ranking \pss that support the monophyly of the taxon, then this procedure may prune tips that are not really ambiguous given the full data.
\subsection{Proposed formalization of the goal}
It would be great if the summary draft tree would be: an admissible summary tree ({\em sensu} section \ref{treeAdmissibility}) of the set of trees that maximize the ranked-tree \SWIPSD score. Unfortunately that is probably infeasible. I think that a reasonable backup would be to produce a summary tree which:
\begin{compactenum}
\item displays every uncontested taxon, and
\item shows an admissible summary of the \MSWIPSD set for each subproblem that is created by tiling the tree into the contested subproblems.
\end{compactenum}
\subsection{Decomposition into uncontested taxon subproblems}
One can efficiently:
\begin{compactenum}
\item determine whether a taxon is contested,
\item resolve any polytomy in an input tree which can be resolved by constraining the set of uncontested taxa, and
\item produce a subproblem for each uncontested taxon. Each subproblem just contains a subset of each input tree that overlaps with this subproblem.
\end{compactenum}
\subsubsection{Constraining uncontested taxa may force the \SWIPSD score to decrease.}
See footnote\footnote{there used to be a conjecture by MTH to the contrary of this section header. A little thought revealed the case that is described here.} Note that, in general, enforcing the presence of an uncontested \ps may decrease the \SWIPSD score. For example, consider the ranked inputs:
\begin{compactenum}
\item \newick{((A,B),C)}
\item \newick{((A,C),D)}
\item \newick{((A,D),B)}
\end{compactenum}
The third tree displays a grouping that is not contested by either of the other 2 trees, but if we force that grouping to be present in the full tree, that full tree cannot display both of the higher ranked \pss.
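The conflict can be made concrete with a minimal sketch of the classic \textsc{BUILD} (Aho et al.) consistency test. This is illustration only; it is not the otcetera or treemachine machinery, and it assumes that each ranked input above has been reduced to a single rooted triple:
\begin{verbatim}
def build(leaves, triples):
    """Return a nested-tuple tree consistent with every triple (a, b, c), read as a,b|c,
    restricted to `leaves`; return None if the triples are jointly incompatible."""
    leaves = set(leaves)
    if len(leaves) == 1:
        return next(iter(leaves))
    if len(leaves) == 2:
        return tuple(sorted(leaves))
    parent = {x: x for x in leaves}          # union-find for the Aho graph
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b, c in triples:
        if a in leaves and b in leaves and c in leaves:
            parent[find(a)] = find(b)        # connect the ingroup pair
    comps = {}
    for x in leaves:
        comps.setdefault(find(x), set()).add(x)
    if len(comps) == 1:
        return None                          # the root cannot be split: incompatible
    kids = [build(comp, triples) for comp in comps.values()]
    return None if any(k is None for k in kids) else tuple(kids)

triples = [("A", "B", "C"),   # from ((A,B),C)
           ("A", "C", "D"),   # from ((A,C),D)
           ("A", "D", "B")]   # from ((A,D),B)
print(build("ABCD", triples[:2]))  # e.g. ((('A', 'B'), 'C'), 'D'): D is forced outside {A,B,C}
print(build("ABCD", triples))      # None: all three cannot be displayed together
\end{verbatim}
With only the first two inputs, every consistent tree must group A, B and C to the exclusion of D, which is exactly the emergent statement discussed next.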
In essence, the set of \pss implied by the first 2 trees is larger than the union of each tree's set of \pss; in this case, a novel \ps: \vvps{A,B,C}{D} must be true of every tree that displays the \pss from the first 2 trees. These examples of ``implied'' or ``emergent'' \pss seem to require certain patterns of overlap and omission of leaves in the 2 statements that are being combined.
Enforcing an uncontested taxon into the final tree can decrease the score in cases like this:
\begin{compactenum}
\item \newick{((A,B1),C)}
\item \newick{((C,B2),A)}
\item taxonomy: \newick{(A,(B1,B2)B,C)}
\end{compactenum}
Neither input contests the monophyly of \texttt{B}, but the first 2 statements cannot both be true if \texttt{B} is monophyletic.
Thus the decomposition into uncontested groups is not a trick to speed up the identification of an optimal summary tree. Rather, it is a way to make the summary transparent and easy to fix: if a biologist sees that a taxon which they know to be non-monophyletic is uncontested, then he/she only needs to upload a tree demonstrating non-monophyly to cause this group to be treated differently in the next construction of the summary tree.
\subsubsection{implementation notes: otc-uncontested-decompose}
This operation is implemented in the {\tt otc-uncontested-decompose} tool. It takes a taxonomic tree and each of the input trees (in ranked order), and a flag that specifies which output directory should hold the subproblems. It uses an embedding approach outlined in algorithm \ref{embedTree}. When that procedure is over, every non-leaf node in the taxonomic tree has data structures that store information about every edge in an input tree that connects a child which aligns to this node or one of its descendants to its parent. If the input tree has more structure about the taxon, then the pairing of paths will be in the LoopPaths lookup table, and if the pairing just passes through a node it will be listed in the ExitPaths lookup table.
We can detect whether or not a node is contested by counting the number of nodes deeper in the taxonomy that serve as ancestral nodes in the ExitPaths for a particular input tree. If there is only one such node, then the input tree does not contest the taxon. If there are no such nodes, then the input tree's root maps to this taxon. If there are multiple nodes, then the input tree contests this taxon. \ProofWriteupNeeded
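For intuition, the same question can be phrased directly in terms of leaf sets, without the embedding bookkeeping: a taxon is contested by an input tree if some cluster of that tree overlaps the taxon's tip set but neither one contains the other. The sketch below is an illustration of that leaf-set formulation only (the function names are invented here); it is not the embedding-based test used by {\tt otc-uncontested-decompose}:
\begin{verbatim}
def induced_clusters(node, relevant):
    """Leaf sets of the internal nodes of a nested-tuple tree, restricted to `relevant`."""
    found = []
    def walk(n):
        if isinstance(n, tuple):
            leaves = frozenset()
            for child in n:
                leaves |= walk(child)
            found.append(leaves & relevant)
            return leaves
        return frozenset([n])
    walk(node)
    return found

def is_contested(taxon_tips, input_tree, input_tree_tips):
    """True if `input_tree` conflicts with treating `taxon_tips` as a clade."""
    relevant = frozenset(input_tree_tips)
    restricted = frozenset(taxon_tips) & relevant
    if len(restricted) < 2:
        return False                     # too little overlap for the tree to say anything
    return any((c & restricted) and (c - restricted) and (restricted - c)
               for c in induced_clusters(input_tree, relevant))

# In the B1/B2 example above, neither input alone contests the taxon B = {B1, B2} ...
print(is_contested({"B1", "B2"}, (("A", "B1"), "C"), {"A", "B1", "C"}))    # False
# ... but a tree that separates B1 from B2 would contest it:
print(is_contested({"B1", "B2"}, (("B1", "C"), "B2"), {"B1", "B2", "C"}))  # True
\end{verbatim}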
The traversal to decompose the tree will alter the taxonomic tree, so the taxonomic tree is cloned and embedded into the original taxonomic tree. This assures that none of the taxonomic information is lost. After producing the embedding for all of the trees, the taxonomy is traversed in post-order fashion. If a node is contested, then the branch that leads to its parent is collapsed and all of its path pairings are moved deeper in the tree. This may cause them to go from the category of ExitPaths to LoopPaths (if the parent of the taxon is also the ancestral taxonomic node of the path pairing, then it will become a loop of that parental taxon). All of the LoopPaths of the collapsed taxon become LoopPaths of the parent. If the taxon is not contested, then all of the trees that are embedded in the node are written out (in their ranked order) as subproblems. The path pairings that exit the node are then assigned the OTT ID of the uncontested taxon. When the subproblem trees are written out, any edge of an input tree that ends at an uncontested taxon is treated as a terminal edge. The OTT ID associated with the path pairing (the ID of the uncontested taxon) is what is written as the leaf label for the tree. In this way the slices of the input trees are tiled into different subproblems and the tips of the subproblems refer either to a true tip in the taxonomy, or to an uncontested taxon. This will enable grafting of the solved subproblems back together by simple ID matching.
\subsection{Subproblem simplifications}\label{simplificationTheory}
Consider the case of having a series of \pss to add in ranked order (where the rank-based weights are extreme enough to allow a greedy addition strategy). This elides the problem of ordering statements discussed in section \ref{orderPSsTheory}. Here we assume that an order has been established. There are some simplification steps which should be applicable in a stack-based approach to produce a smaller subproblem. By stack-based we mean:
\begin{compactenum}
\item apply simplification 1 to reduce the problem size.
\item apply simplification 2 to reduce the problem size.
\item[$\ldots$]
\item[$n$] apply simplification $n$ to reduce the problem size.
\item[$n + 1$] solve the reduced subproblem
\item[$n + 2$] ``undo'' simplification $n$ to augment the solution
\item[$n + 3$] ``undo'' simplification $n-1$ to augment the solution
\item[$\ldots$]
\item[$2n + 1$] ``undo'' simplification $1$ to augment the solution
\item[$2n +2$] assure that all mentioned tips are present in the solution.
\end{compactenum}
These simplifications should not result in a worse \SWIPSD score for the set of trees. However, we lack any guarantees about how they interact with heuristic solutions of the reduced subproblem. Furthermore, we lack guarantees about how the simplifications affect a strategy that always returns an optimal summary tree, but which does not guarantee that it will find the full set of optimal summary trees. The simplifications can be applied iteratively until no more simplifications are possible. They are designed assuming a greedy solver that is given a list of \pss and must decide for each whether to add it to the solution or reject it as incompatible with the current solution. The input set of \ps subproblems is simple in that all of the labelled tips are treated as terminal taxa for the purpose of the summary. The inputs do contain trivial statements to assure that all leaves are included (e.g. the taxonomic tree is often a polytomy of all tips).
\subsubsection{Prune tips that only occur in trivial \ps or exclude groups}
No \ps has any support for these tips attaching anywhere above the root of the tree. So a tree that attaches them all at the root of the subproblem will be among the optimal solutions. \simplification (1) find the set of tips that only occur in trivial \ps or in the exclude groups of \pss; (2) record these taxon labels; (3) record a copy of all \pss that are affected by pruning of these IDs and a mapping of the \ps that will result from the pruning; (4) prune all such tips; \undoActions (1) restore the original statements
\subsubsection{Remove any trivial \ps}
\simplification (1) record any trivial split \undoActions (1) restore the original statements
\subsubsection{Remove any redundant \ps}
\simplification if a \ps occurs twice in the list (1) record the second and subsequent positions.
(2) remove the lower ranked \ps \undoActions (1) restore the original statements
\subsubsection{Conditional addition of any ``dominated'' \pss}
Consider a pair of \pss: $a=\vvps{a_i}{a_e}$ and $b=\vvps{b_i}{b_e}$. We say that $a$ is dominated by $b$ iff: $a_i \subset b_i$ and $a_e\subset b_e$. If $a$ is dominated by $b$ and $b$ is higher ranked than $a$, then we can note that $a$ need not be attempted if $b$ is accepted into the solution. $a$ contains less information, so adding it will not alter the solution. If $b$ is rejected, then it is possible that $a$ will be add-able, however. \simplification if a \ps $a$ is dominated by a higher ranked \ps $b$: (1) record the pair; (2) skip attempting $a$ whenever $b$ has been accepted into the solution. \undoActions (1) if $b$ was rejected, attempt the deferred \ps $a$ as usual
\subsubsection{Prune ``dominated'' tips}
Note that all of the compatibility/conflict decisions rely on tests for whether or not a set of labels is empty -- the presence of multiple tips rather than 1 tip in a required or prohibited set will not affect a decision about whether or not any \ps will be accepted into the solution or rejected. Consider a pair of tip labels $a$ and $b$ and a set of \pss $\mathcal{P}$. We say that $a$ is dominated by $b$ iff, $\forall p = (\vvps{i}{e}) \in \mathcal{P}$ one of the following applies: $a\notin \leafLabels{p}$ or $(a \in i \mbox{ and } b \in i)$ or $(a \in e \mbox{ and } b \in e)$. In other words, if $b$ is on the same ``side'' as $a$ for every \ps that $a$ occurs in, then $a$ is dominated by $b$. \simplification If there exists a tip label $a$ that is dominated by $b$: (1) copy every \ps that $a$ occurs in and record how it will map to a \ps with $a$ pruned; (2) prune $a$ from all of the \pss; (3) set up bookkeeping to note whether any affected \ps is accepted or rejected for the solution. \undoActions (1) for every affected \ps that was accepted, add the stored \ps. This should place $a$ correctly on the solution.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Pipeline}
\stepInput reference taxonomy and a ranked list of trees (study ID + tree ID pairs).
\stepOutput a summary tree and annotations about which input trees support each branch.
\comment{the step numbering here does {\em not} agree with the sub dir numbering in the \otc supertree subdir. That dir structure needs to be updated.}
\subsection{Prune the reference taxonomy to remove some flags}
\stepExplanation Not all taxa in OTT are reliable enough to belong in the summary tree. The reference taxonomy producing software flags taxa in several ways (\href{https://github.com/OpenTreeOfLife/reference-taxonomy/wiki/taxon-flags}{which are listed here} \TODO{we still need real documentation of the flagging system}).
\stepInput ``raw'' reference taxonomy with flags produced by \url{https://github.com/OpenTreeOfLife/reference-taxonomy}
\stepOutput $\taxonomy$, the complete taxonomy for synthesis with some taxa pruned. This will determine the leaf label set of the final summary tree.
\currImpl has been performed by treemachine. Note that there appear to be a couple of issues with that impl. See \href{https://github.com/OpenTreeOfLife/treemachine/commit/48211803f137ad0b7c096c28d1c10d32f671194f}{this comment} and \href{https://github.com/OpenTreeOfLife/treemachine/issues/168}{issue 168}. (2) Documentation needed on which flags are pruned and why. (3) We should serve this tree somewhere as it is a crucial input for the rest of the pipeline.
\currURL \TODO{Temp url} \url{http://phylo.bio.ku.edu/ot/taxonomy.tre} holds the tree (obtained from either Joseph Brown or Ruchi Chaudhary via email) which MTH has been using as \taxonomy.
\subsection{Snapshot input studies}
\stepExplanation For a make-based system it would be useful to copy the incoming \nexson files to a snapshot location if they differ from the version of that study that is already found in that staging location.
\stepInput local copy of \texttt{phylesystem} git repo, list of trees (study+tree ID + optional git SHA) to be used
\stepOutput (1) copies of the \nexson files from the specified SHA. (2) record of tree identifiers
\currImpl None - similar operation done by \gcmdr.
\implTODO Flat file implementation needed
\currURL None
\subsection{Snapshot of input trees}
\stepExplanation The study files may contain multiple trees; for a make-based system it would be good to have a timestamped file for each tree.
\stepInput snapshot of \nexson from previous step; list of tree identifiers.
\stepOutput (1) one \nexson for each tree. The current naming convention of studyID\_treeID.nexson could be used - there is no need to support multiple git-SHAs per tree.
\currImpl None - similar operation done by \gcmdr.
\implTODO Flat file implementation needed
\currURL None
\subsection{Pruning of input trees}
\stepExplanation To improve the chance of having a correct rooting, we prune the trees to just the ingroup. We also prune the trees down such that they contain no more than 1 exemplar of any terminal taxon and there are no cases of the taxon for one tip containing the taxon mapped to another tip.
\stepInput snapshot of \nexson trees from previous step.
\stepOutput (1) \phyloInputs, the input set of trees, represented as one newick tree for each input tree with internal node labels that correspond to the node ID in the \nexson of the MRCA node. (2) a record of the pruning edits performed.
\currImpl None - similar operation done by \gcmdr.
\implTODO (1) Flat file implementation needed. (2) record of edits needed. (3) identifiers for the internal nodes would be nice for reporting the provenance of edges in the summary tree. (4) We should serve these trees somewhere as they are crucial inputs for the rest of the pipeline.
\currURL \TODO{Temp url} \url{http://phylo.bio.ku.edu/ot/pruned-input-trees.tar.gz} is an archive of a set of these trees - without the node identifiers and with nodes that have out-degree=1 (obtained from either Joseph Brown or Ruchi Chaudhary via email) which MTH has been using as \phyloInputs.
\subsection{Expand tips mapped to non-terminal taxa}\label{expandedPhyloStep}
\stepExplanation As explained in section \ref{expandNonTermPar}, expanding tips that are mapped to non-terminal taxa to the full set of their terminal descendants and attaching these tips to the parent of the taxon should generate a tree that correctly represents what the input tree says (without erroneously claiming that the tree supports monophyly). A clever implementation would note whether a descendant terminal taxon occurs in other trees in the $\phyloInputs$ corpus. If there are multiple terminal descendant taxa in the expansion that only occur in the taxonomy, then it should be fine to let the expansions just contain 1 of these tips, $x$. This would mean that the others are pruned in the next step, but will be placed in the correct spot in the final summary tree because they should attach at the same parent node as the single exemplar, $x$.
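A minimal sketch of the basic expansion (for illustration only: the function name and taxon names below are hypothetical, trees are encoded as nested tuples, and the single-exemplar optimization just described is ignored):
\begin{verbatim}
def expand_nonterminal_tips(node, terminal_descendants):
    """Replace each tip that names a non-terminal taxon: its terminal descendants are
    attached to the tip's parent and the tip itself is dropped, so the expansion does
    not assert monophyly of the higher taxon."""
    if not isinstance(node, tuple):
        return node
    new_children = []
    for child in node:
        if not isinstance(child, tuple) and child in terminal_descendants:
            new_children.extend(terminal_descendants[child])   # attach to the parent
        else:
            new_children.append(expand_nonterminal_tips(child, terminal_descendants))
    return tuple(new_children)

terminal_descendants = {"Bovidae": ["Bos_taurus", "Ovis_aries"]}   # hypothetical mapping
tree = (("Homo_sapiens", "Bovidae"), "Gallus_gallus")
print(expand_nonterminal_tips(tree, terminal_descendants))
# (('Homo_sapiens', 'Bos_taurus', 'Ovis_aries'), 'Gallus_gallus')
\end{verbatim}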
Failing to take this optimization will only mean that the pruned taxonomy is too large. \stepInput \taxonomy and \phyloInputs \stepOutput \expandedPhylo -- the set of phylogenetic inputs expanded such that no leaf is mapped to a non-terminal taxon. \currImpl None \implTODO \TODO{write this} \currURL We should probably post this set of trees, as many tools don't deal with tips that are mapped to non-terminal taxa. So these trees may be the most accessible set of inputs for most interested parties. \subsection{Prune taxonomy down to tips represented in \expandedPhylo}\label{prunedTaxonomyStep} \stepExplanation This is just an optimization step. Each terminal taxon that is only found in \taxonomy, can be placed on the final summary tree by creating a tree for the overlapping taxonomic inputs and then grafting on the ``taxonomy only'' lineages. This pruning makes the inputs for the subsequent steps smaller \stepInput \taxonomy and \expandedPhylo \stepOutput \prunedTaxonomy the pruned taxonomy \currImpl \otcprune can perform this \implTODO \currURL may want to post this somewhere. \subsection{Decompose the inputs into subproblems of uncontested taxa}\label{decomposeStep} \stepExplanation The decision to force uncontested taxa in the final summary means that we can separate the problems into non-overlapping subproblems. \stepInput \prunedTaxonomy and \expandedPhylo \stepOutput subproblems. Currently expressed (1) as one newick tree file per subproblem with the name \texttt{SUBPROBLEMID.tre}, and (2) a file called \texttt{SUBPROBLEMID-tree-names.txt} with a treefile name on each line or ``TAXONOMY'' indicating the source of each tree. the \texttt{SUBPROBLEMID} is `ott' followed by the OTT ID. \currImpl \otcdecompose \implTODO \currURL \url{http://phylo.bio.ku.edu/ot/export-sub-temp.tar.gz} has a snapshot, but those subproblems were not produced with the non-terminal tips expanded to terminals so there is some wonkiness - such tips are pruned if the taxon is contested, but their ID sets still affect the embedding of the deeper nodes in the tree. This needs to be rerun after step \ref{expandedPhyloStep} is completed. \subsection{Simplify subproblems}\label{simplifyStep} \stepExplanation As (to be) described in section \ref{simplificationTheory} there are several operations that can be performed that will reduce the size of the subproblems but which should not compromise our ability to obtain the same subproblem solution. Many subproblems are trivially solvable, so the tool that does this will also be a crude solver. \stepInput The set of subproblems, a simplified-problems directory, and a solutions directory \stepOutput When possible, subproblem solutions and simplifications will be written to the 2 output directories. \currImpl an implementation is started, but far from complete \href{https://github.com/OpenTreeOfLife/peyotl/blob/supertree/scripts/supertree/simplify_subproblems.py}{in the supertree branch of peyotl} \implTODO \TODO {finish} \currURL \subsection{Solve subproblems} \stepExplanation Attempt to find an admissible summary tree for the set of summaries that are the \MSWIPSD set. We probably want (1) a brute force implementation that we can use for small subproblems so we do not have to worry about errors from finding local optima, and (2) one or more heuristics. \stepInput a ``raw'' subproblem from step \ref{decomposeStep} or a simplified subproblem from step \ref{simplifyStep}. 
\stepOutput a tree for each subproblem - stored in a solutions dir under the name \texttt{SUBPROBLEMID.tre} \currImpl treemachine may provide one solver. \implTODO exact impl and, perhaps we need another approximate solver \currURL \subsection{Collapse unsupported nodes} \stepExplanation If the solver does not guarantee that no unsupported nodes will be introduced, then we can collapse them at this point. As noted in section \ref{unsupportedTheory}, this should be done iteratively rather than by identifying all unsupported edges and collapsing all of them. The latter approach would collapse too many edges. \stepInput the solutions directory holding all of the subproblem solution trees. \stepOutput a supported solutions directory holding all of the subproblem solution trees which contain no unsupported nodes. \currImpl None \implTODO \TODO{write this} \subsection{Assemble pruned summary tree from subproblem solutions}\label{assemblyStep} \stepExplanation Because the problems do not overlap, and the file names match the tip labels (when a tip of one subproblem is actually an uncontested non-terminal taxon in \prunedTaxonomy), this is a simple grafting procedure.\\ Note that each subproblem is supported by the taxonomy (at a minimum), so this step cannot introduce unsupported groups. \stepInput the solutions directory holding all of the subproblem solution trees. \stepOutput \prunedSummary -- the summary tree pruned down to the leaf set of \prunedTaxonomy. \currImpl None \implTODO \TODO{write this} \subsection{Graft the pruned taxonomy-only taxa back onto the tree} \stepExplanation ``phylo-referencing'' style logic can be used to place the taxa that were pruned in \ref{prunedTaxonomyStep} \stepInput \prunedSummary and \taxonomy \stepOutput \summaryTree -- the final summary tree \currImpl None \implTODO \TODO{write this} \subsection{Create annotations for nodes in $\summaryTree$}\label{annotationsStep} \stepExplanation At minimum, we would want statements of which input nodes support which nodes in $\summaryTree$. But we could also noted nodes displayed, nodes in conflict, and whether or not the node was constrained because it was a contested taxon. \stepInput \summaryTree and \expandedPhylo \stepOutput some as yet undefined format for expressing these annotations. \currImpl None \implTODO \TODO{write this} \subsection{Serve \summaryTree and the annotations} \stepExplanation we may be able to compile the annotations into a set of static files to be served up to the current tree browser. Or we may wish have a full database-driven web service \stepInput \summaryTree and annotations from \ref{annotationsStep} \stepOutput a web services API comparable to the \href{https://github.com/OpenTreeOfLife/opentree/wiki/Open-Tree-of-Life-APIs#tree-of-life}{tree-of-life part of the API}.// some of the services in that API would definitely require a db rather than just flat-files (e.g the MRCA and induced\_subtree) \currImpl None unless we load the tree into treemachine and add methods for serving up the annotations that are not coming from the graph-of-life \implTODO \TODO{write this} \newpage \section{Algorithms} \begin{algorithm} \caption{EmbedPhyloIntoTaxonomicScaffold}\label{embedTree} \begin{algorithmic} \REQUIRE the taxonomic tree $\taxonomy$. 
\REQUIRE an input tree, $T$, with a unique identifier $\mbox{id}(T)$
\FOR{each node $n_i$ in $\nodes{T}$}
\STATE{$z(n_i) \leftarrow \mbox{AlignNodes}(\taxonomy, n_i)$}
\ENDFOR
\FOR{each node $n_i$ in $\nodes{T}$}
\IF{$n_i \neq \treeRoot{T}$}
\STATE $y_i \leftarrow \parent{n_i}$
\STATE $\mbox{EmbedEdge}(\taxonomy, {y_i}, z(y_i), n_i, z(n_i), \mbox{id}(T))$
\ENDIF
\ENDFOR
\end{algorithmic}\end{algorithm}
\begin{algorithm}
\caption{AlignNodes}\label{alignNodes}
\begin{algorithmic}
\REQUIRE the taxonomic tree $\taxonomy$.
\REQUIRE a node from the input tree, $n$.
\IF{isLeaf($n$)}
\RETURN the node in $\taxonomy$ that is mapped to the same taxonomic identifier that $n$ is mapped to.
\ELSE
\RETURN the node in $\taxonomy$ that is the least inclusive taxon that is an ancestor of all of the taxonomic labels in $\leafLabels{n}$.
\ENDIF
\end{algorithmic}\end{algorithm}
\begin{algorithm}
\caption{EmbedEdge}\label{embedEdge}
\begin{algorithmic}
\REQUIRE the taxonomic tree $\taxonomy$.
\REQUIRE a node from the input tree, $n$, and its pair node in $\taxonomy$, $z(n)$
\REQUIRE the parent node of $n$, called $y$, and its pair node in $\taxonomy$, $z(y)$
\REQUIRE an identifier, $t$, that uniquely identifies the tree that contains $n$ and $y$.
\REQUIRE Each non-leaf node in $\taxonomy$ has 2 lookup tables: \textsc{LoopPaths} and \textsc{ExitPaths}
\STATE $p \leftarrow \left[n, z(n), y, z(y)\right]$ \COMMENT{$p$ is called the ``path pairing'' information}
\IF{$z(n) = z(y)$}
\STATE $z(y).\textsc{LoopPaths}[t] \leftarrow p$
\ELSE
\STATE $c\leftarrow z(n)$
\WHILE{$c \neq z(y)$}
\STATE $c.\textsc{ExitPaths}[t] \leftarrow p$
\STATE $c \leftarrow \parent{c}$
\ENDWHILE
\ENDIF
\end{algorithmic}
\end{algorithm}
\section{Subproblem solver approaches}\label{subproblemSolver}
MTH had some email conversation with David Bryant -- some of the ideas here came from that conversation.
If we had a complete ordering of splits, we could use a variant of \textsc{BUILD} \citep{AhoSSU1981} to generate a set of consistent splits, $\mathcal{C}$. The procedure for doing that is outlined in algorithm \ref{consistentSplitsFromRankedList}. Note that the original BUILD has scaling $O(MN)$ where $N$ is the number of leaves in the leaf set and $M$ is the number of input splits. \citet{JanssonLL2012} discuss how \citet{HenzingerKW1999} provide a DP approach that reduces the runtime to $\min\{O(MN^{1/2}), O(M + N^2 \log N)\}$, and \citet{HolmLT2001} improve this to $\min\{O(N + M\log^2 N), O(M + N^2 \log N)\}$.
\begin{algorithm}
\caption{ConsistentSplitsFromRankedList}\label{consistentSplitsFromRankedList}
\begin{algorithmic}
\REQUIRE An ordered list of $M$ splits, $\mathcal{R} = [R_1, R_2, R_3, \ldots, R_M]$
\STATE $\mathcal{C} = [R_1]$
\FOR{each split $i$ in $[2, 3 \ldots M]$}
\STATE $\mathcal{T} \leftarrow \mathcal{C} + R_i$ \COMMENT{where `+' means concatenating 2 lists}
\IF{\textsc{BUILD}$(\mathcal{T})$ does not return null}
\STATE $\mathcal{C} \leftarrow \mathcal{T}$
\ENDIF
\ENDFOR
\RETURN $\mathcal{C}$
\end{algorithmic}
\end{algorithm}
If the number of splits in $\mathcal{C}$ is not too large, then we could use the (exponential) algorithm of \citet{JanssonLL2012} to find a \textsc{MinRS} tree as a solution to the subproblem. \textsc{Note: it looks like \citet{ByrkaGJ2010} have a method for finding a set of consistent splits}
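As an illustration, the following Python sketch mirrors algorithm~\ref{consistentSplitsFromRankedList}. It is only a sketch: each split is represented as a cluster (a set of leaf labels) drawn from one shared leaf set, so a pairwise nested-or-disjoint test can stand in for the \textsc{BUILD} compatibility check; for statements with partial leaf sets the \texttt{compatible} function would have to be replaced by a real \textsc{BUILD} variant, and all of the names used here are illustrative only.
\begin{verbatim}
# Greedy accumulation of a consistent split set, mirroring
# ConsistentSplitsFromRankedList.  Each "split" is a frozenset of leaf
# labels (a rooted cluster); all clusters are assumed to come from one
# shared leaf set, so pairwise nesting/disjointness is equivalent to
# joint compatibility.  A real implementation would call a BUILD-style
# test instead; `compatible` is just a stand-in for that check.
from typing import Iterable, List, FrozenSet

Cluster = FrozenSet[str]

def pair_ok(a: Cluster, b: Cluster) -> bool:
    # Two clusters on the same leaf set are compatible iff they are
    # disjoint or one contains the other.
    return a.isdisjoint(b) or a <= b or b <= a

def compatible(clusters: List[Cluster]) -> bool:
    # Stand-in for BUILD: check every pair of clusters.
    return all(pair_ok(a, b)
               for i, a in enumerate(clusters)
               for b in clusters[i + 1:])

def consistent_splits_from_ranked_list(ranked: Iterable[Cluster]) -> List[Cluster]:
    accepted: List[Cluster] = []
    for split in ranked:            # ranked from most to least trusted
        candidate = accepted + [split]
        if compatible(candidate):   # keep the split only if it still fits
            accepted = candidate
    return accepted

if __name__ == "__main__":
    ranked = [frozenset("ABC"), frozenset("AB"), frozenset("BC")]
    # "BC" is rejected because it overlaps "AB" without nesting.
    print(consistent_splits_from_ranked_list(ranked))
\end{verbatim}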
\subsection{Partial rankings}
(this section has some crude thoughts and no good solution.)
We have a complete ordering of trees, but this only generates a partial ordering on splits. If two splits are both first encountered (during traversal through all groups in all trees) in the same tree, then the ordering of the splits is undetermined. A clumsy way to deal with this would be to use branch and bound: add splits in a greedy fashion for a postorder traversal, and then add splits in a preorder traversal. Whichever order yields the larger number of splits accepted, treat that as a bound. Then start investigating constraints that force the inclusion of some of the excluded splits. The set of splits in a tree which are incompatible with a previously excluded split can be discarded as a preprocessing step.
\section{Subproblem simplifications}
\subsection{Using the intersection leaf set}
Shortcut: prune the next tree to the intersection of its leaf set and the leaf set of the current batch of consistent phylogenetic statements. Any statement in the pruned tree that is inconsistent need not be considered. (Since the conflict detection uses a smaller graph, this might be faster than using the full leaf set.)
This is for the context of building up a set of consistent phylogenetic statements by adding phylogenetic statements from a new tree. Let $T_i$ be the new tree. Let $\mathcal{A}_{i-1}$ denote a set of phylogenetic statements (from the consistent trees 1 to $i-1$). Let $T_i^{\ast}$ and $\mathcal{A}_{i-1}^{\ast}$ denote $T_i$ and $\mathcal{A}_{i-1}$ pruned down by removing any leaves that are not in $\leafLabels{\mathcal{A}_{i-1}} \cap \leafLabels{T_i}$ and (in the case of $T_i$) any nodes that have an out-degree $<2$ as a result of this pruning.
\begin{theorem}
If a phylogenetic statement from $T_i^{\ast}$ is not consistent with $\mathcal{A}_{i-1}^{\ast}$, then the corresponding statement from $T_i$ will not be consistent with $\mathcal{A}_{i-1}$.
\end{theorem}
Proof: By the definition of compatibility, adding new leaves cannot make a tree that is incompatible with a statement in $\mathcal{A}_{i-1}^{\ast}$ compatible with the fuller version of that statement.
\subsection{Testing consistency by pruning off new leaves}
Shortcut: prune off any ``new'' leaves from the next tree and then test for consistency. (Since the conflict detection uses a smaller graph, this might be faster than using the full leaf set.)
Let $T_i^{\dag}$ denote $T_i$ pruned down by removing any leaves that are not in $\leafLabels{\mathcal{A}_{i-1}}$ and any nodes that have an out-degree $<2$ as a result of this pruning. Each internal node in $T_i$ that is not a common ancestor of all of the labels in $\leafLabels{T_i^{\dag}}$ can be placed into one of 3 categories:
\begin{compactenum}
\item $\mathcal{N}^{\ddag}(T_i)$ is the set of nodes that have more than one child subtree that contains a leaf in $\leafLabels{T_i^{\dag}}$.
\item $\mathcal{N}^{\dag}(T_i)$ is the set of nodes that have exactly one child subtree that contains a leaf in $\leafLabels{T_i^{\dag}}$. For every member of $\mathcal{N}^{\dag}(T_i)$, called $n$, let $D^{\dag}(n)$ denote the first descendant node that is either a leaf of $T_i$ or in $\mathcal{N}^{\ddag}(T_i)$.
\item $\mathcal{N}^{\circ}(T_i)$ is the set of nodes that are not the ancestor of any label in $\leafLabels{T_i^{\dag}}$.
\end{compactenum}
\begin{theorem}
A phylogenetic statement derived from a member of $\mathcal{N}^{\ddag}(T_i)$ is consistent with $\mathcal{A}_{i-1}$ if and only if the corresponding member of $\mathcal{N}^{\ddag}(T_i^{\dag})$ is consistent with $\mathcal{A}_{i-1}$.
\end{theorem}
Proof: To be written.
\begin{theorem}
Every phylogenetic statement derived from a member of $\mathcal{N}^{\circ}(T_i)$ is consistent with $\mathcal{A}_{i-1}$.
\end{theorem}
Proof: To be written. True because these statements only concern previously unmentioned taxa.
\begin{theorem}
If $D^{\dag}(n)$ is a leaf, then the phylogenetic statement that corresponds to $n$ is consistent with $\mathcal{A}_{i-1}$.
\end{theorem}
Proof: To be written. True because we are only adding sister groups to a previously trivial group.
\begin{theorem}
If $D^{\dag}(n)$ is an internal node, then the phylogenetic statement that corresponds to $n$ will conflict with $\mathcal{A}_{i-1}$ if and only if the phylogenetic statement that corresponds to $D^{\dag}(n)$ conflicts with $\mathcal{A}_{i-1}$.
\end{theorem}
Proof: To be written. If $D^{\dag}(n)$ is consistent with $\mathcal{A}_{i-1}$ then there must be at least one tree that has an edge that separates all of the descendants of $D^{\dag}(n)$ from all of the (mentioned) ancestors of $n$. Attaching the ``new'' subtree along this edge produces a tree that proves that the statement corresponding to $n$ is also consistent with $\mathcal{A}_{i-1}$. In the case of $D^{\dag}(n)$ conflicting with $\mathcal{A}_{i-1}$, we know that an edge separating a subset of the descendants from the mentioned ancestors cannot be found. Adding more leaves will not alter that.
%\begin{theorem}
%One can identify the most resolved collapsed form of $T_i$ which is
% consistent with $\mathcal{A}_{i-1}$ by pruning both $\mathcal{A}_{i-1}$ and
% $T_i$ down to the intersection of the leaf labels set: $\leafLabels{\mathcal{A}_{i-1}} \cap \leafLabels{T_i}$
%\end{theorem}
\section{Acknowledgements}
Thanks to David Bryant for suggestions on the subproblem solver and for pointing me to the work of J.~Jansson in the context of sections \ref{subproblemSolver} and \ref{minrs}.
\input{glossary.tex}
\bibliography{otcetera}
\end{document}
{ "alphanum_fraction": 0.755910937, "avg_line_length": 57.9936908517, "ext": "tex", "hexsha": "fbdeba17b93cbcb2d06575e9efe26f361a38764b", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2015-12-02T13:24:53.000Z", "max_forks_repo_forks_event_min_datetime": "2015-07-03T19:21:20.000Z", "max_forks_repo_head_hexsha": "2b5ff724094f768df9bc37b9f0ffb319abd03a20", "max_forks_repo_licenses": [ "BSD-2-Clause", "MIT" ], "max_forks_repo_name": "OpenTreeOfLife/otcetera", "max_forks_repo_path": "doc/summarizing-taxonomy-plus-trees.tex", "max_issues_count": 70, "max_issues_repo_head_hexsha": "89ad4d7b1ae62ef0f299b2ab823d4044e855a921", "max_issues_repo_issues_event_max_datetime": "2022-03-21T19:06:18.000Z", "max_issues_repo_issues_event_min_datetime": "2015-03-19T08:19:40.000Z", "max_issues_repo_licenses": [ "BSD-2-Clause", "MIT" ], "max_issues_repo_name": "mtholder/otcetera", "max_issues_repo_path": "doc/summarizing-taxonomy-plus-trees.tex", "max_line_length": 348, "max_stars_count": 4, "max_stars_repo_head_hexsha": "2b5ff724094f768df9bc37b9f0ffb319abd03a20", "max_stars_repo_licenses": [ "BSD-2-Clause", "MIT" ], "max_stars_repo_name": "OpenTreeOfLife/otcetera", "max_stars_repo_path": "doc/summarizing-taxonomy-plus-trees.tex", "max_stars_repo_stars_event_max_datetime": "2020-11-30T07:43:07.000Z", "max_stars_repo_stars_event_min_datetime": "2015-04-29T09:23:12.000Z", "num_tokens": 13955, "size": 55152 }
% This is "sig-alternate.tex" V2.1 April 2013 % This file should be compiled with V2.5 of "sig-alternate.cls" May 2012 % % This example file demonstrates the use of the 'sig-alternate.cls' % V2.5 LaTeX2e document class file. It is for those submitting % articles to ACM Conference Proceedings WHO DO NOT WISH TO % STRICTLY ADHERE TO THE SIGS (PUBS-BOARD-ENDORSED) STYLE. % The 'sig-alternate.cls' file will produce a similar-looking, % albeit, 'tighter' paper resulting in, invariably, fewer pages. % % ---------------------------------------------------------------------------------------------------------------- % This .tex file (and associated .cls V2.5) produces: % 1) The Permission Statement % 2) The Conference (location) Info information % 3) The Copyright Line with ACM data % 4) NO page numbers % % as against the acm_proc_article-sp.cls file which % DOES NOT produce 1) thru' 3) above. % % Using 'sig-alternate.cls' you have control, however, from within % the source .tex file, over both the CopyrightYear % (defaulted to 200X) and the ACM Copyright Data % (defaulted to X-XXXXX-XX-X/XX/XX). % e.g. % \CopyrightYear{2007} will cause 2007 to appear in the copyright line. % \crdata{0-12345-67-8/90/12} will cause 0-12345-67-8/90/12 to appear in the copyright line. % % --------------------------------------------------------------------------------------------------------------- % This .tex source is an example which *does* use % the .bib file (from which the .bbl file % is produced). % REMEMBER HOWEVER: After having produced the .bbl file, % and prior to final submission, you *NEED* to 'insert' % your .bbl file into your source .tex file so as to provide % ONE 'self-contained' source file. % % ================= IF YOU HAVE QUESTIONS ======================= % Questions regarding the SIGS styles, SIGS policies and % procedures, Conferences etc. should be sent to % Adrienne Griscti ([email protected]) % % Technical questions _only_ to % Gerald Murray ([email protected]) % =============================================================== % % For tracking purposes - this is V2.0 - May 2012 \documentclass{sig-alternate-05-2015} \begin{document} % Copyright \setcopyright{acmcopyright} %\setcopyright{rightsretained} %\setcopyright{usgov} %\setcopyright{usgovmixed} %\setcopyright{cagov} %\setcopyright{cagovmixed} %%% % DOI %%% \doi{10.475/123_4} %%% % ISBN %%% \isbn{123-4567-24-567/08/06} %Conference \conferenceinfo{HPCSYSPROS '16}{November 14, 2016, Salt Lake City, UT, USA} %%% \acmPrice{\$15.00} % % --- Author Metadata here --- %%% \conferenceinfo{WOODSTOCK}{'97 El Paso, Texas USA} %\CopyrightYear{2007} % Allows default copyright year (20XX) to be over-ridden - IF NEED BE. %\crdata{0-12345-67-8/90/01} % Allows default copyright data (0-89791-88-6/97/05) to be over-ridden - IF NEED BE. % --- End of Author Metadata --- \title{Cluster Computing with OpenHPC} %%% \title{Alternate {\ttlit ACM} SIG Proceedings Paper in LaTeX %%% Format\titlenote{(Produces the permission block, and %%% copyright information). For use with %%% SIG-ALTERNATE.CLS. Supported by ACM.}} %%% \subtitle{[Extended Abstract] %%% \titlenote{A full version of this paper is available as %%% \textit{Author's Guide to Preparing ACM SIG Proceedings Using %%% \LaTeX$2_\epsilon$\ and BibTeX} at %%% \texttt{www.acm.org/eaddress.htm}}} \input{authors} \maketitle \input{abstract} % % The code below should be generated by the tool at % http://dl.acm.org/ccs.cfm % Please copy and paste the code instead of the example below. 
\begin{CCSXML} <ccs2012> <concept> <concept_id>10003456.10003457.10003490.10003503</concept_id> <concept_desc>Social and professional topics~Software management</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10003456.10003457.10003490.10003507</concept_id> <concept_desc>Social and professional topics~System management</concept_desc> <concept_significance>300</concept_significance> </concept> <concept> <concept_id>10011007.10011006.10011072</concept_id> <concept_desc>Software and its engineering~Software libraries and repositories</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10011007.10011006.10011066</concept_id> <concept_desc>Software and its engineering~Development frameworks and environments</concept_desc> <concept_significance>300</concept_significance> </concept> <concept> <concept_id>10010520.10010521.10010537</concept_id> <concept_desc>Computer systems organization~Distributed architectures</concept_desc> <concept_significance>300</concept_significance> </concept> </ccs2012> \end{CCSXML} \ccsdesc[500]{Social and professional topics~Software management} \ccsdesc[300]{Social and professional topics~System management} \ccsdesc[500]{Software and its engineering~Software libraries and repositories} \ccsdesc[300]{Software and its engineering~Development frameworks and environments} \ccsdesc[300]{Computer systems organization~Distributed architectures} % % End generated code % % % Use this command to print the description % \printccsdesc %%% \keywords{ACM proceedings; \LaTeX; text tagging} \section{Introduction} \input{intro} \section{Community building blocks for HPC systems} \subsection{Motivation} Many HPC sites spend considerable effort aggregating a large suite of open-source components to provide a capable HPC environment for their users. This is frequently motivated by the necessity to build and deploy HPC focused packages that are either absent or outdated in popular Linux distributions. Further, local packaging or customization typically tries to give software versioning access to users (e.g. via environment modules or similar equivalent). With this background motivation in mind, combined with a desire to minimize duplication and share best practices across sites, the OpenHPC community project was formed with the following mission and vision principles: \\ \noindent {\bf Mission:} to provide a reference collection of open-source HPC software components and best practices, lowering barriers to deployment, advancement, and use of modern HPC methods and tools. \noindent {\bf Vision:} OpenHPC components and best practices will enable and accelerate innovation and discoveries by broadening access to state-of-the-art, open-source HPC methods and tools in a consistent environment, supported by a collaborative, worldwide community of HPC users, developers, researchers, administrators, and vendors. \\ %%% OpenHPC %%% is focused on lowering the barrier to entry for HPC by providing a collection %%% of pre-packaged and validated binary components that can be used to install and %%% manage HPC clusters throughout their life cycle. We also provide multiple %%% system configuration recipes that leverage community reference designs and best %%% practices. 
The remaining sections provide a further overview of the community project by highlighting related work (\S\ref{sec:related_work}), the technical governance structure (\S\ref{sec:governance}), repository enablement (\S\ref{sec:repo_enable}), packaging conventions~(\S\ref{sec:packaging}), underlying build infrastructure~(\S\ref{sec:build_infra}), and integration testing~(\S\ref{sec:integ_testing}). \newpage \input{related_work} \subsection{Governance \& Community} \label{sec:governance} Under the auspices of the Linux Foundation, OpenHPC has established a two pronged governance structure consisting of a governing board and a technical steering committee (TSC). The governing board is responsible for budgetary oversight, intellectual property policies, marketing, and long-term road map guidance. The TSC drives the technical aspects of the project including stack architecture, software component selection, builds and releases, and day-to-day project maintenance. Individual roles within the TSC are highlighted in Figure~\ref{fig:tsc_governance}. These include common roles like maintainers and testing coordinators, but also include unique HPC roles designed to ensure influence, and capture points of view, from two key constituents. In particular, the {\em component development representative(s)} are included to represent the upstream development communities for software projects that might be integrated with the OpenHPC packaging collection. In contrast, the {\em end-user/site representative(s)} are downstream recipients of OpenHPC integration efforts and serve the interest of administrators and users of HPC systems that might leverage OpenHPC collateral. At present, there are nearly 20 community volunteers serving on the TSC~\cite{TSC_url} with representation from academia, industry, and government R\&D laboratories. %%Public mailing lists for community interaction and support are available at %%http://groups.io/openhpc, and there is a set of openHPC Slack channels as well %%(http://openhpc.slack.com). We also welcome bug reports and patches at %%http://github.com/openhpc. \begin{figure} \includegraphics[width=1.0\linewidth]{figures/governance} \caption{Identified roles within the OpenHPC Technical Steering Committee (TSC).} \label{fig:tsc_governance} \end{figure} \subsection{Installation/Repository Overview} \label{sec:repo_enable} As mentioned previously, OpenHPC endeavors to adopt a repository-based delivery model similar to the underlying OS distributions commonly used as the basis for HPC Linux clusters. At present, OpenHPC is providing builds targeted against two supported OS distributions: CentOS7 and SLES12. The underlying package managers for these two distributions are {\bf \texttt yum} and {\bf \texttt zypper}, respectively, and OpenHPC provides public repositories that are compatible with these RPM-based package managers. The installation procedure outlined in current OpenHPC recipes targets bare-metal systems and assumes that one of the supported base operating systems is first installed on a chosen {\em master} host. This is typically done leveraging bootable media from ISO images provided by the base OS and once installed, OpenHPC recipes highlight steps to install additional software and perform configurations to use the {\em master} host to provision the remaining cluster. \newpage An overview of the physical infrastructure expected for use with current OpenHPC recipes is shown in Figure~\ref{fig:cluster_arch} and highlights the high-level networking configuration. 
The {\em master} host requires at least two Ethernet interfaces with {\em eth0} connected to the local data center network and {\em eth1} used to provision and manage the cluster backend (these interface names are examples and may be different depending on local settings and OS conventions). Two logical IP interfaces are expected to each compute node: the first is the standard Ethernet interface that will be used for provisioning and resource management. The second is used to connect to each host's baseboard management controller (BMC) and is used for power management and remote console access. Physical connectivity for these two logical IP networks is often accommodated via separate cabling and switching infrastructure; however, an alternate configuration can also be accommodated via the use of a shared NIC. %, which runs a packet filter to divert %management packets between the host and BMC. For power management, we assume that the compute node BMCs are available via IPMI from the chosen master host. For file systems, the current recipe(s) document setting up the chosen master host as an NFS file system that is made available to the compute nodes. Installation information is also discussed to optionally include a Lustre~\cite{Lustre_url} file system mount. \begin{figure}[h] \includegraphics[width=0.95\linewidth]{figures/ohpc-arch-small.pdf} \caption{Overview of physical cluster infrastructure expected with OpenHPC installation recipes.} \label{fig:cluster_arch} \end{figure} \noindent{\bf Community Repo:} In cases where external network connectivity is available on the {\em master} host, OpenHPC provides an \texttt{ohpc-release} package that includes GPG keys for package signing and repository enablement. This package can be downloaded from the OpenHPC GitHub community site directly (\url{https://github.com/openhpc/ohpc}). Note that additional repositories may be required to resolve package dependencies and, in the case of CentOS, access to the EPEL~\cite{epel_url} repo is currently required. The most recent release branch for OpenHPC is version~1.1 and the output in Figure~\ref{fig:repolist} highlights the typical repository setup after installation of the \texttt{openhpc-release-1.1} RPM in a CentOS environment. Following typical OS distro conventions, two OpenHPC repositories are enabled by default: a { \bf base} repo corresponding to the original 1.1 release and an {\bf updates} repo that provides rolling fixes and enhancements against the 1.1 tree. \begin{figure}[h] \begin{lstlisting}[language=bash,keywords={}] (*\#*) yum repolist repo id repo name OpenHPC OpenHPC-1.1 - Base OpenHPC-updates OpenHPC-1.1 - Updates base CentOS-7 - Base epel Extra Packages for Enterprise... \end{lstlisting} \vspace*{-0.3cm} \caption{Typical package repository configuration after enabling OpenHPC (CentOS example).} \label{fig:repolist} \end{figure} \subsection{Packaging} \label{sec:packaging} %Being building-block oriented in nature, To highlight several aspects of the current packaging conventions, we next present several installation examples. Note that this discussion does not endeavor to replicate an entire install procedure, and interested readers are invited to consult the latest installation recipe(s) that are available on the community GitHub site (or via the installable \texttt{docs-ohpc} RPM) for more detailed instructions. 
Once the OpenHPC repository is enabled locally, a range of packages are available, and a typical install on the {\em master} host begins with the installation of desired system administration services. In the example that follows, the Warewulf provisioning system~\cite{warewulf_url} and the SLURM resource manager are installed using available convenience groups:
\begin{figure}[h]
\begin{lstlisting}[language=bash,keywords={}]
[sms](*\#*) yum -y groupinstall ohpc-warewulf
[sms](*\#*) yum -y groupinstall ohpc-slurm-server
\end{lstlisting}
\vspace*{-0.3cm}
\caption{Example installations using convenience groups.}
\label{fig:grouplinstall}
\end{figure}
Convenience groups like the examples above are prefixed with the ``ohpc-'' tag and install a collection of related packages. As an example, the \texttt{ohpc-warewulf} group expands to include all the packages needed to enable a Warewulf provisioning server. Similarly, the \texttt{ohpc-slurm-server} group includes the packages needed to stand up a SLURM control daemon for resource management~\cite{Jette02slurm:simple} across the cluster. Although not shown here, a related \texttt{ohpc-slurm-client} group is also available to allow for installation of a smaller set of packages needed to enable a SLURM client (typically installed in compute node images). Note that individual packages provided via OpenHPC have their names appended with the ``ohpc'' suffix. The motivation for this convention was to allow for easy wild-carding queries with package managers, and to also provide the ability to install OpenHPC-packaged versions of software alongside alternate distro versions of the same packages (if available). Finally, while the examples here continue to use the \texttt{yum} package manager, equivalent commands can be substituted using \texttt{zypper} when using SLES. \\
\noindent {\bf Development Libraries}: In addition to providing tools primarily targeted at system administrators, OpenHPC also provides pre-packaged builds for a number of popular open-source tools and libraries used by HPC applications and developers. For example, OpenHPC includes a variety of builds for FFTW~\cite{FFTW05} and HDF5~\cite{hdf5_url} (including serial and parallel I/O support), and the GNU Scientific Library (GSL). A number of other development tools and libraries are included, and the installation recipe(s) contain a detailed package manifest highlighting what is available for a given release. Note also that the list is expected to evolve and expand over time as additional software components are integrated within future releases.
General purpose HPC systems often rely on multiple compiler and MPI family toolchains~\cite{tacc_sc_best_practices:2011} and OpenHPC supports this strategy via the adoption of a hierarchical build configuration that is cognizant of the need to deliver unique builds of a given software package for each desired compiler/MPI permutation.
% Additional discussion on the build procedure will be highlighted in OBS section.
%%% Again, multiple builds of
%%% each package are available in the OpenHPC repository to support multiple
%%% compiler and MPI family combinations where appropriate.
The general naming convention for builds that have these toolchain dependencies
%provided by OpenHPC
is to append the compiler and MPI family name that the library was built against directly into the package name.
For example, libraries that do not require MPI as part of the build process adopt the following RPM naming scheme: \\ \noindent \texttt{package-<comp\_fam>-ohpc-<ver>-<rel>.rpm} \\ \noindent where \texttt{<comp\_fam>} maps to the underlying compiler family and \texttt{<ver>} and \texttt{<rel>} correspond to the individual software version and build release number, respectively. Expanding on this convention, packages that also require MPI as part of the build additionally include the MPI family (\texttt{<mpi\_fam}>) name as follows: \\ \noindent \texttt{package-<comp\_fam>-<mpi\_fam>-ohpc-<ver>-<rel>.rpm} \\ \noindent Given the large number of installation permutations possible for software supporting multiple compiler/MPI toolchains, combined with the fact that HPC sites also tend to make multiple versions of a particular component available to their users, there is a clear need to support a flexible development environment for end users. A popular historical choice in this space over the years has been the use of Environment Modules~\cite{furlani_1996} to expose a \texttt{modules} command within a user's shell environment allowing them to load/unload desired software packages via management of key environment variables (e.g. \texttt{{PATH}} and \texttt{{LD\_LIBRARY\_PATH}}). Several implementations of the modules system have evolved and OpenHPC leverages a recent variant named {\em Lmod}~\cite{tacc_sc_best_practices:2011,lmod_url} which is Lua-based, and has embedded support for managing the hierarchical software matrix adopted in OpenHPC. In addition to providing a pre-packaged build of {\em Lmod}, development libraries and tools integrated within OpenHPC include the installation of companion module files. %for use with %{\em Lmod}. Consequently, once a desired package is installed, end users can then access and query the software through the underlying modules system. The packaging process includes a consistent set of environment variables for users to access a particular package's path for available header files and dynamic libraries. As an example, consider the following installation of the PETSc~\cite{PETSc_url} scientific toolkit built using the GNU compiler and MVAPICH2~\cite{mvapich2} MPI toolchain. \begin{figure}[h] \begin{lstlisting}[language=bash,keywords={}] [sms](*\#*) yum install petsc-gnu-mvapich2-ohpc \end{lstlisting} \vspace*{-0.3cm} \caption{Installation of PETSc for a particular compiler/MPI combination.} \label{fig:petscinstall} \end{figure} \noindent Next, assume an end user has a simple C code example they wish to build against the installed PETSc version. This can be accomplished as follows by leveraging the environment variables enabled through loading of the provided module file: \begin{figure}[h] \begin{lstlisting}[language=bash,keywords={},literate={-}{-}1] joeuser (*\$*) module load petsc joeuser (*\$*) mpicc -I$PETSC_INC petsc_hello.c \ -L$PETSC_LIB -lpetsc \end{lstlisting} \vspace*{-0.3cm} \caption{Example compilation using variables provided by PETSc module file.} \label{fig:petsccompile} \end{figure} \newpage Owing to the hierarchical capabilities of {\em Lmod}, if multiple PETSc permutations were installed (e.g. for different MPI toolchains), the end user would also be able to swap toolchains and the underlying modules system will automatically update the user's environment accordingly to be consistent with the currently loaded MPI family. 
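To make the naming convention described above concrete, the short Python sketch below assembles RPM names for the two cases just discussed. It is purely illustrative: the helper name and the example version and release numbers are ours and are not taken from the OpenHPC repositories.
\begin{lstlisting}[language=Python,keywords={}]
# Illustrative helper for the naming scheme described above:
#   package-<comp_fam>[-<mpi_fam>]-ohpc-<ver>-<rel>.rpm
# The function name and the example versions are hypothetical.
def ohpc_rpm_name(package, comp_fam, ver, rel, mpi_fam=None):
    fam = comp_fam if mpi_fam is None else comp_fam + "-" + mpi_fam
    return "%s-%s-ohpc-%s-%s.rpm" % (package, fam, ver, rel)

# Serial library built only against a compiler family:
print(ohpc_rpm_name("gsl", "gnu", "1.16", "1"))
# -> gsl-gnu-ohpc-1.16-1.rpm

# Library that also requires an MPI family in its build:
print(ohpc_rpm_name("petsc", "gnu", "3.6.1", "1", mpi_fam="mvapich2"))
# -> petsc-gnu-mvapich2-ohpc-3.6.1-1.rpm
\end{lstlisting}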
%\subsection{Conventions} %\subsubsection{Architecture} %%We have assembled a variety of common ingredients required to deploy and manage %%an HPC Linux cluster including provisioning tools, resource management, I/O %%libraries, development tools, and a variety of scientific libraries. The %%delivery mechanism is via standard package managers (i.e., there are public %%OpenHPC repositories for both yum and zypper). OpenHPC currently supports CentOS %%7 and SUSE's SLE 12. A single RPM spec file generates packages for both base %%operating systems. This multiple target model also extends to compiler %%toolchains and MPI runtime libraries. For some components that means a spec file %%can generate 12 different versions of a package. This complexity is masked by %%yum/zypper convenience groups, macros within the build system, and hierarchical %%Lmod environment modules for users. %--example showing convenience group installation, package name convention, module %load-- % - repo layout description \subsection{Build Infrastructure} \label{sec:build_infra} To provide the public package repositories highlighted previously in \S\ref{sec:repo_enable}, OpenHPC utilizes a set of standalone resources running the Open Build Service (OBS)~\cite{OBS_url}. OBS is an open-source distribution development platform written in Perl that provides a transparent infrastructure for the development of Linux distributions and is the underlying build system for openSUSE. The public OBS instance for OpenHPC is available at \url{https://build.openhpc.community}. While OpenHPC does not, by itself, provide a complete Linux distribution, it does have in common many of the same packaging requirements and targets a delivery mechanism that adopts Linux sysadmin familiarity. OBS aids in this process by driving simultaneous builds for multiple OS distributions (e.g. CentOS and SLES), multiple target architectures (e.g. x86\_64 and aarch64), and by performing dependency analysis among components, triggering downstream builds as necessary based on upstream changes. Each build is carried out in a chroot or KVM environment for repeatability, and OBS manages publication of the resulting builds into package repositories compatible with {\texttt yum} and {\texttt zypper}. Both binary and source RPMs are made available as part of this process. The primary inputs for OBS are the instructions necessary to build a particular package, typically housed in an RPM .spec file. These .spec files are version controlled in the community GitHub repository and are templated in a way to have a single input drive multiple compiler/MPI family combinations. To illustrate this approach, the code in Figure~\ref{fig:metis_spec} highlights a small portion from the .spec file used to build METIS~\cite{Karypis:1998}, a popular domain decomposition library. The primary item of note is the the use of a \texttt{compiler\_family} macro which defaults to the gnu compiler family if not specified otherwise. This variable is then used to decide on underlying build and installation requirements chosen to match the desired runtime. 
\begin{figure}[h] % \begin{lstlisting}[language=bash,keywords={},basicstyle=\scriptsize\ttfamily,keepspaces] \begin{lstlisting}[language=bash,keywords={},basicstyle=\fontsize{7.8}{10}\ttfamily,keepspaces] %{!?compiler_family: %define compiler_family gnu} (*\#*) Compiler dependencies BuildRequires: lmod%{PROJ_DELIM} %if %{compiler_family} == gnu BuildRequires: gnu-compilers%{PROJ_DELIM} Requires: gnu-compilers%{PROJ_DELIM} %endif %if %{compiler_family} == intel BuildRequires: gcc-c++ intel-compilers-devel%{PROJ_DELIM} Requires: gcc-c++ intel-compilers-devel%{PROJ_DELIM} %endif \end{lstlisting} \vspace*{-0.3cm} \caption{Snippet from METIS .spec file highlighting compiler hierarchy template used during the build process.} \label{fig:metis_spec} \end{figure} To link the underlying source and build infrastructure together, OpenHPC's public OBS instance is integrated with the associated GitHub repository. A benefit of this integration is that whenever commits are made on key git branches, OBS automatically triggers corresponding package rebuilds. OBS also analyzes inter-package dependencies and downstream packages are rebuilt as well with updated packages published after all builds are completed. For builds that require MPI linkage, a companion .spec template is used which adds an \texttt{mpi\_family} variable that defaults to the OpenMPI~\cite{gabriel04:openmpi} stack unless specified otherwise. To maintain the concept of having a single maintainer commit drive multiple builds, our OBS configuration leverages the ability to {\em link} related software packages together. To illustrate this process, the text in Figure~\ref{fig:obs_link} contains the underlying OBS package configuration syntax for the PETSc toolkit built against MVAPICH2. The top line indicates that the MVAPICH2-based build is simply a link to the parent (default) package configuration that is OpenMPI based. What follows after that are stanzas that tell OBS to apply patches to the resulting .spec file prior to doing a build. In this case, the patches are trivial and simply redefine the \texttt{compiler\_family} and \texttt{mpi\_family} variables at the top of the .spec file. This linkage provides a convenient mechanism to extend the hierarchical runtime family approach to the underlying build system. \begin{figure}[h] \begin{lstlisting}[language=bash,keywords={},basicstyle=\fontsize{7.8}{10}\ttfamily,keepspaces] (*\#*) cat petsc-gnu-mvapich2/_link <link project='OpenHPC:1.1' package='petsc-gnu-openmpi'> <patches> <topadd>%define compiler_family gnu</topadd> <topadd>%define mpi_family mvapich2</topadd> </patches> </link> \end{lstlisting} \vspace*{-0.3cm} \caption{Underlying OBS package config highlighting linkage between builds using different runtime hierarchies.} \label{fig:obs_link} \end{figure} \subsection{Integration Testing} \label{sec:integ_testing} To facilitate validation of the OpenHPC distribution as a whole, we have devised a standalone integration test infrastructure. In order to exercise the entire scope of the distribution, we first provision a cluster from bare-metal using installation scripts provided as part of the OpenHPC documentation. Once the cluster is up and running, we launch a suite of tests targeting the functionality of each component. These tests are generally pulled from component source distributions and aim to insure development toolchains are functioning correctly and to ensure jobs perform under the resource manager. 
The intent is not to replicate a particular component's own validation tests, but rather to ensure all of OpenHPC is functionally integrated. The testing framework is publicly available in the OpenHPC GitHub repository. A Jenkins continuous integration server~\cite{jenkins_url} manages a set of physical servers in our test infrastructure. Jenkins periodically kickstarts a cluster master node using out-of-the-box base OS repositories, and this master is then customized according to the OpenHPC install guide. The \LaTeX\ source for the install guide contains markup that is used to generate a \texttt{{bash}} script containing each command necessary to provision and configure the cluster and install OpenHPC components. Jenkins executes this script, then launches the component test suite. The component test suite relies on a custom autotools-based framework. Individual runs of the test suite are customizable using familiar autoconf syntax, and \texttt{{make check}} does what one might expect. The framework also allows us to build and test multiple binaries of a particular component for each permutation of compiler toolchain and MPI runtime if applicable. We utilize the Bash Automated Testing System (BATS)~\cite{bats_url} framework to run tests on the cluster and report results back to Jenkins. An example test driver shell script for the PETSc toolkit is highlighted in Figure~\ref{fig:test_loop}. Recall that this package requires MPI linkage and the script highlights the fact that multiple tests are performed for each supported compiler and MPI toolchain. \begin{figure}[t] \begin{lstlisting}[language=bash,keywords={},keepspaces] (*\#*)!/bin/bash status=0 cd libs/petsc || exit 1 export BATS_JUNIT_CLASS=PETSc (*\#*) bootstrap the local autotools project ./bootstrap || exit 1 for compiler in $COMPILER_FAMILIES ; do for mpi in $MPI_FAMILIES ; do echo "--------------------------------------" echo "Libraries: PETSc tests: $compiler-$mpi" echo "--------------------------------------" module purge || exit 1 module load prun || exit 1 module load $compiler || exit 1 module load $mpi || exit 1 module load petsc || exit 1 ./configure || exit 1 make clean || exit 1 make -k check || status=1 save_logs_mpi_family tests $compiler $mpi make distclean done done exit ${status} \end{lstlisting} \vspace*{-0.3cm} \caption{Example test driver script for PETSc.} \label{fig:test_loop} \end{figure} As the test suite has grown over time to accommodate a growing set of integrated components, the current test harness has both {\em short} and {\em long} configuration options. The short mode enables only a subset of tests in order to keep the total runtime to approximately 10 minutes or less for more frequent execution in our CI environment. For the most recent OpenHPC release, the long mode with all relevant tests enabled requires approximate 90 minutes to complete approximately 1,900 individually logged tests. \section{Conclusions \& Future work} This paper has presented an overview of OpenHPC, a collaborative Linux Foundation project with organizational participation from academia, research labs, and industry. The building-block nature of the OpenHPC repository was highlighted along with some basic packaging conventions and an overview of the underlying build and test infrastructure. Future work by the OpenHPC Technical Steering Committee (TSC) is focused on formalizing and publishing a component selection process by which the community can request inclusion of additional software. 
Currently, OpenHPC provides simple configuration recipes for HPC clusters, but future efforts will focus on providing automation for more advanced configuration and tuning to address scalability, power management, and high availability concerns. We also hope to expand community cooperation between complementary efforts by %continue driving standardization across the HPC landscape, developing package dependency conventions with EasyBuild and Spack. %Finally, we %are working to deploy public continuous integration infrastructure, and %enable corresponding builds and releases for the ARM architecture (aarch64). %ACKNOWLEDGMENTS are optional \section{Acknowledgments} We would like to thank the Linux Foundation and associated members of the OpenHPC collaborative project for supporting this community effort. We are particularly grateful to the additional members of the Technical Steering Committee including Pavan Balaji, Todd Gamblin, Craig Gardner, Balazs Gerofi, Jennifer Green, Douglas Jacobsen, Chulho Kim, Thomas Moschny, Craig Stewart, and Scott Suchyta. We are also grateful to donations from Intel, Cavium, and Dell who have provided hardware to help support integration testing efforts, and the Texas Advanced Computing Center for hosting OpenHPC infrastructure. \bibliographystyle{abbrv} \bibliography{hpcsyspros} \end{document}
{ "alphanum_fraction": 0.7810317778, "avg_line_length": 47.3173216885, "ext": "tex", "hexsha": "e253b9f5a4efb8a33cd0c61e413af62b3e320548", "lang": "TeX", "max_forks_count": 224, "max_forks_repo_forks_event_max_datetime": "2022-03-30T00:57:48.000Z", "max_forks_repo_forks_event_min_datetime": "2015-11-12T21:17:03.000Z", "max_forks_repo_head_hexsha": "70dc728926a835ba049ddd3f4627ef08db7c95a0", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "utdsimmons/ohpc", "max_forks_repo_path": "docs/papers/HPCSYSPROS/hpcsyspros.tex", "max_issues_count": 1096, "max_issues_repo_head_hexsha": "70dc728926a835ba049ddd3f4627ef08db7c95a0", "max_issues_repo_issues_event_max_datetime": "2022-03-31T21:48:41.000Z", "max_issues_repo_issues_event_min_datetime": "2015-11-12T09:08:22.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "utdsimmons/ohpc", "max_issues_repo_path": "docs/papers/HPCSYSPROS/hpcsyspros.tex", "max_line_length": 114, "max_stars_count": 692, "max_stars_repo_head_hexsha": "70dc728926a835ba049ddd3f4627ef08db7c95a0", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "utdsimmons/ohpc", "max_stars_repo_path": "docs/papers/HPCSYSPROS/hpcsyspros.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-30T03:45:59.000Z", "max_stars_repo_stars_event_min_datetime": "2015-11-12T13:56:43.000Z", "num_tokens": 7631, "size": 32507 }
\documentclass[UKenglish]{beamer} \usetheme[NoLogo]{MathDept} \usepackage[utf8]{inputenx} % For æ, ø, å \usepackage{babel} % Automatic translations \usepackage{csquotes} % Quotation marks \usepackage{microtype} % Improved typography \usepackage{amssymb} % Mathematical symbols \usepackage{mathtools} % Mathematical symbols \usepackage[absolute, overlay]{textpos} % Arbitrary placement \setlength{\TPHorizModule}{\paperwidth} % Textpos units \setlength{\TPVertModule}{\paperheight} % Textpos units \usepackage{tikz} \usetikzlibrary{overlay-beamer-styles} % Overlay effects for TikZ \author{Martin Helsø} \title{Beamer example} \subtitle{Usage of the theme \texttt{MathDept}} \begin{document} \section{Overview} % Use % % \begin{frame}[allowframebreaks]{Title} % % if the TOC does not fit one frame. \begin{frame}{Table of contents} \tableofcontents[currentsection] \end{frame} \section{Mathematics} \subsection{Theorem} \begin{frame}{Mathematics} \begin{theorem}[Fermat's little theorem] For a prime~\(p\) and \(a \in \mathbb{Z}\) it holds that \(a^p \equiv a \pmod{p}\). \end{theorem} \begin{proof} The invertible elements in a field form a group under multiplication. In particular, the elements \begin{equation*} 1, 2, \ldots, p - 1 \in \mathbb{Z}_p \end{equation*} form a group under multiplication modulo~\(p\). This is a group of order \(p - 1\). For \(a \in \mathbb{Z}_p\) and \(a \neq 0\) we thus get \(a^{p-1} = 1 \in \mathbb{Z}_p\). The claim follows. \end{proof} \end{frame} \subsection{Example} \begin{frame}{Mathematics} \begin{example} The function \(\phi \colon \mathbb{R} \to \mathbb{R}\) given by \(\phi(x) = 2x\) is continuous at the point \(x = \alpha\), because if \(\epsilon > 0\) and \(x \in \mathbb{R}\) is such that \(\lvert x - \alpha \rvert < \delta = \frac{\epsilon}{2}\), then \begin{equation*} \lvert \phi(x) - \phi(\alpha)\rvert = 2\lvert x - \alpha \rvert < 2\delta = \epsilon. \end{equation*} \end{example} \end{frame} \section{Highlighting} \SectionPage \begin{frame}{Highlighting} Some times it is useful to \alert{highlight} certain words in the text. \begin{alertblock}{Important message} If a lot of text should be \alert{highlighted}, it is a good idea to put it in a box. \end{alertblock} It is easy to match the \structure{colour theme}. \end{frame} \section{Lists} \begin{frame}{Lists} \begin{itemize} \item Bullet lists are marked with a grey box. \end{itemize} \begin{enumerate} \item \label{enum:item} Numbered lists are marked with a white number inside a grey box. \end{enumerate} \begin{description} \item[Description] highlights important words with grey text. \end{description} Items in numbered lists like \enumref{enum:item} can be referenced with a grey box. \begin{example} \begin{itemize} \item Lists change colour after the environment. \end{itemize} \end{example} \end{frame} \section{Effects} \begin{frame}{Effects} \begin{columns}[onlytextwidth] \begin{column}{0.49\textwidth} \begin{enumerate}[<+-|alert@+>] \item Effects that control \item when text is displayed \item are specified with <> and a list of slides. \end{enumerate} \begin{theorem}<2> This theorem is only visible on slide number 2. \end{theorem} \end{column} \begin{column}{0.49\textwidth} Use \textbf<2->{textblock} for arbitrary placement of objects. \pause \medskip It creates a box with the specified width (here in a percentage of the slide's width) and upper left corner at the specified coordinate (x, y) (here x is a percentage of width and y a percentage of height). 
\end{column} \end{columns} \begin{textblock}{0.3}(0.45, 0.55) \includegraphics<1, 3>[width = \textwidth]{MathDept-images/MathDept-apollon} \end{textblock} \end{frame} \section{References} \begin{frame}[allowframebreaks]{References} \begin{thebibliography}{} % Article is the default. \setbeamertemplate{bibliography item}[book] \bibitem{Hartshorne1977} Hartshorne, R. \newblock \emph{Algebraic Geometry}. \newblock Springer-Verlag, 1977. \setbeamertemplate{bibliography item}[article] \bibitem{Helso2020} Helsø, M. \newblock \enquote{Rational quartic symmetroids}. \newblock \emph{Adv. Geom.}, 20(1):71--89, 2020. \setbeamertemplate{bibliography item}[online] \bibitem{HR2018} Helsø, M.\ and Ranestad, K. \newblock \emph{Rational quartic spectrahedra}, 2018. \newblock \url{https://arxiv.org/abs/1810.11235} \setbeamertemplate{bibliography item}[triangle] \bibitem{AM1969} Atiyah, M.\ and Macdonald, I. \newblock \emph{Introduction to commutative algebra}. \newblock Addison-Wesley Publishing Co., Reading, Mass.-London-Don Mills, Ont., 1969 \setbeamertemplate{bibliography item}[text] \bibitem{Artin1966} Artin, M. \newblock \enquote{On isolated rational singularities of surfaces}. \newblock \emph{Amer. J. Math.}, 80(1):129--136, 1966. \end{thebibliography} \end{frame} \end{document}
{ "alphanum_fraction": 0.6206296747, "avg_line_length": 27.1179245283, "ext": "tex", "hexsha": "2f9c53e8fe315a749276988c9ba2607f381168d9", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-10-03T21:06:54.000Z", "max_forks_repo_forks_event_min_datetime": "2019-10-03T21:06:54.000Z", "max_forks_repo_head_hexsha": "7043e89392e8f0da00be8d22c9bd263bd157022e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "martinhelso/MathDept", "max_forks_repo_path": "main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7043e89392e8f0da00be8d22c9bd263bd157022e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "martinhelso/MathDept", "max_issues_repo_path": "main.tex", "max_line_length": 133, "max_stars_count": 7, "max_stars_repo_head_hexsha": "7043e89392e8f0da00be8d22c9bd263bd157022e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "martinhelso/MathDept", "max_stars_repo_path": "main.tex", "max_stars_repo_stars_event_max_datetime": "2020-11-15T19:46:24.000Z", "max_stars_repo_stars_event_min_datetime": "2019-08-19T17:01:10.000Z", "num_tokens": 1656, "size": 5749 }
\chapter{Data Format Compatibility} \label{compatible} Many of the tools automatically detect the input file format and perform the same operation for a number of different formats, for example viewing of HDF 4 and NOAA 1b files. The following table shows the file format compatibility of the various tools: \\ \\ \begin{tabular}{|l|p{5cm}|p{5cm}|} \hline TOOL & INPUT & OUTPUT \\ \hline cdat & HDF 4, NetCDF 3, NetCDF 4, NOAA 1b & HDF 4, NetCDF 3, NetCDF 4, Binary, Text, ArcGIS, PNG, GIF, JPEG, PDF, GeoTIFF \\ \hline cwangles & HDF 4, NetCDF 3 & -- \\ \hline cwautonav & HDF 4, NetCDF 3 & -- \\ \hline cwcomposite & HDF 4, NetCDF 3, NetCDF 4 & HDF 4 \\ \hline cwcoverage & HDF 4, NetCDF 3, NetCDF 4, NOAA 1b & PNG \\ \hline cwdownload & -- & -- \\ \hline cwexport & HDF 4, NetCDF 3, NetCDF 4, NOAA 1b & Binary, Text, ArcGIS, NetCDF 3, NetCDF 4 \\ \hline cwgraphics & HDF 4, NetCDF 3, NetCDF 4, NOAA 1b & HDF 4 \\ \hline cwimport & HDF 4, NetCDF 3, NetCDF 4, NOAA 1b & HDF 4 \\ \hline cwinfo & HDF 4, NetCDF 3, NetCDF 4, NOAA 1b & -- \\ \hline cwmaster & HDF 4, NetCDF 3, NetCDF 4 & HDF 4 \\ \hline cwmath & HDF 4, NetCDF 3, NetCDF 4, NOAA 1b & HDF 4 \\ \hline cwnavigate & HDF 4, NetCDF 3 & -- \\ \hline cwregister & HDF 4, NetCDF 3, NetCDF 4, NOAA 1b & HDF 4 \\ \hline \end{tabular} \begin{tabular}{|l|p{5cm}|p{5cm}|} \hline TOOL & INPUT & OUTPUT \\ \hline cwrender & HDF 4, NetCDF 3, NetCDF 4, NOAA 1b & PNG, GIF, JPEG, PDF, GeoTIFF \\ \hline cwsample & HDF 4, NetCDF 3, NetCDF 4, NOAA 1b & Text \\ \hline cwstats & HDF 4, NetCDF 3, NetCDF 4, NOAA 1b & -- \\ \hline cwstatus & -- & -- \\ \hline \end{tabular} \\ \\ \\ Note that the metadata and/or versions supported by the physical file formats are as follows: \\ \\ \begin{tabular}{|l|p{10cm}|} \hline PHYSICAL FORMAT & METADATA / VERSION \\ \hline HDF 4 & CoastWatch 3.4, TeraScan (only {\tt rectangular}, {\tt polarstereo}, {\tt mercator}, {\tt emercator}, and {\tt sensor\ scan} projections), ACSPO \\ \hline NetCDF 3 & CoastWatch 3.4, CF 1.4 \\ \hline NetCDF 4 & CoastWatch 3.4, CF 1.4, ACSPO \\ \hline NOAA 1b & AVHRR, AMSU-A, AMSU-B, HIRS 4, and MHS sensors \\ & AVHRR LAC or GAC in 8/10/16-bit sensor word sizes \\ & File format versions 1 through 5, with or without archive header \\ \hline ArcGIS & 32-bit IEEE float binary grid with accompanying header file \\ \hline GeoTIFF & TIFF spec 6.0, GeoTIFF spec 1.8.2 \\ & Uncompressed, Deflate (PKZIP-style), and PackBits compression \\ & 8-bit or 24-bit \\ & Map projections supported: \\ & Alaska Conformal \\ & Albers Conical Equal Area \\ & Azimuthal Equidistant \\ & Equirectangular \\ & Gnomonic \\ & Lambert Azimuthal Equal Area \\ & Lambert Conformal Conic \\ & Mercator \\ & Miller Cylindrical \\ & Orthographic \\ & Polar Stereographic \\ & Polyconic \\ & Robinson \\ & Sinusoidal \\ & Stereographic \\ & Transverse Mercator \\ & Universal Transverse Mercator \\ & Van der Grinten \\ \hline GIF & Version 89a with LZW compression and optional world file \\ \hline JPEG & JFIF standard 1.02 with optional world file \\ \hline PDF & Version 1.4 with LZW image compression \\ \hline PNG & 8-bit, 24-bit with LZW compression and optional world file \\ \hline \end{tabular}
{ "alphanum_fraction": 0.6498657918, "avg_line_length": 28.6581196581, "ext": "tex", "hexsha": "70ebe727d9c1d16d779c0b4be02c5a0d1699d282", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b555d096edd284818fa2aa15a73f703693cd23d9", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "phollemans/cwutils", "max_forks_repo_path": "doc/users_guide/compatible.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "b555d096edd284818fa2aa15a73f703693cd23d9", "max_issues_repo_issues_event_max_datetime": "2019-09-27T04:21:19.000Z", "max_issues_repo_issues_event_min_datetime": "2019-07-16T01:45:28.000Z", "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "phollemans/cwutils", "max_issues_repo_path": "doc/users_guide/compatible.tex", "max_line_length": 87, "max_stars_count": 1, "max_stars_repo_head_hexsha": "b555d096edd284818fa2aa15a73f703693cd23d9", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "phollemans/cwutils", "max_stars_repo_path": "doc/users_guide/compatible.tex", "max_stars_repo_stars_event_max_datetime": "2019-09-09T01:38:45.000Z", "max_stars_repo_stars_event_min_datetime": "2019-09-09T01:38:45.000Z", "num_tokens": 1204, "size": 3353 }
\chapter{City Improvements}
{ "alphanum_fraction": 0.7931034483, "avg_line_length": 9.6666666667, "ext": "tex", "hexsha": "8ef81dcc7d5bbbce9a98077930a69907a9b3348a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "39f19d18b363cefb151bbbf050c6b672ee544117", "max_forks_repo_licenses": [ "DOC" ], "max_forks_repo_name": "xiaolanchong/call_to_power2", "max_forks_repo_path": "doc/user/manual/include/app_cityimprovements.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "39f19d18b363cefb151bbbf050c6b672ee544117", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "DOC" ], "max_issues_repo_name": "xiaolanchong/call_to_power2", "max_issues_repo_path": "doc/user/manual/include/app_cityimprovements.tex", "max_line_length": 27, "max_stars_count": null, "max_stars_repo_head_hexsha": "39f19d18b363cefb151bbbf050c6b672ee544117", "max_stars_repo_licenses": [ "DOC" ], "max_stars_repo_name": "xiaolanchong/call_to_power2", "max_stars_repo_path": "doc/user/manual/include/app_cityimprovements.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7, "size": 29 }
\chapter{Kernel FDA with Multi-layer Kernels}
\label{chap_kfda}
In this chapter we study the discriminating power of multi-layer kernels with kernel Fisher Discriminant Analysis (KFDA\nomenclature{KFDA}{Kernel Fisher Discriminant Analysis}). The analysis in this chapter was done on binary classification problems. This chapter is organized as follows: section \ref{chap4_kfda} gives a brief introduction to kernel Fisher Discriminant Analysis, section \ref{chap4_experiment} contains the results of an empirical study on the \textit{rectangles-image} and \textit{convex} datasets (both are binary classification problems studied extensively in the deep learning literature), and section \ref{chap4_conc} gives the conclusion.

\section{Kernel Fisher Discriminant Analysis}
\label{chap4_kfda}
The working principle of discriminant analysis is to find a set of features that discriminates the classes very well (\cite{kfda}). Fisher Discriminant Analysis (FDA) was originally proposed for learning a set of discriminating features in the input space. Kernel FDA is a non-linear generalization of FDA, in which the discriminating features are learned in feature space.

Let $X_1 = \{x_1^1, \ldots, x_{n_1}^1 \}$ and $X_2 = \{x_1^2, \ldots, x_{n_2}^2 \}$ be data samples from two classes (class 1 and class 2), and let their union, denoted as $X = X_1 \cup X_2$, be the training set. KFDA finds the direction $f$ which maximizes the cost function
\begin{equation}
\mathcal{J}(f) = \frac{f^TS_B^{\phi}f}{f^TS_W^{\phi}f}
\label{4_jw}
\end{equation}
where $f \in \mathcal{F}$ and $S_B^{\phi}$ and $S_W^{\phi}$ are the between- and within-class scatter matrices respectively
\[ S_B^{\phi} = (m_1^{\phi} - m_2^{\phi})(m_1^{\phi} - m_2^{\phi})^T \]
\[ S_W^{\phi} = \sum_{i=1,2}\sum_{x \in X_i} (\phi(x)-m_i^{\phi})(\phi(x)-m_i^{\phi})^T \]
where $m_i^{\phi} = \frac{1}{n_i} \sum_{j=1}^{n_i} \phi(x_j^i)$. Intuitively, maximizing $\mathcal{J}(f)$ is equivalent to finding a direction $f$ which maximizes the separation of the two classes while minimizing the within-class variance (\cite{kfda}).

We need to transform the formulation in \ref{4_jw} in terms of the kernel function $k(x, y) = \phi(x) \cdot \phi(y)$ in order to use kernels. According to RKHS\nomenclature{RKHS}{Reproducing Kernel Hilbert Space} theory, any solution to the Tikhonov regularization $f \in \mathcal{F}$ must lie in the span of the feature maps ($\phi(\cdot)$) corresponding to the training examples. Thus it can be represented as
\begin{equation}
f = \sum_{i=1}^n \alpha_i \phi(x_i)
\label{4_wrkhs}
\end{equation}
Combining \ref{4_wrkhs} and the definition of $m_i^{\phi}$ we have
\[ f^Tm_i^{\phi} = \frac{1}{n_i} \sum_{j=1}^n \sum_{k=1}^{n_i} \alpha_j k(x_j, x_k^i) = \alpha^T M_i \]
where $(M_i)_j = \frac{1}{n_i} \sum_{k=1}^{n_i} k(x_j, x_k^i)$. Define $M = (M_1-M_2)(M_1-M_2)^T$. Then we have
\begin{equation}
f^T S_B^{\phi} f = \alpha^T M \alpha
\label{4_wsbw}
\end{equation}
Using similar transformations we have
\begin{equation}
f^T S_W^{\phi} f = \alpha^T N \alpha
\label{4_wsww}
\end{equation}
where $N = \sum_{i=1,2} K_i(I - \bm{1}_{n_i})K_i^T $, $K_i$ is an $n \times n_i$ matrix with entries $(K_i)_{nm} = k(x_n, x_m^i)$ (this is the kernel matrix for class $i$), $I$ is the identity matrix and $\bm{1}_{n_i}$ is the matrix with all entries $\frac{1}{n_i}$. The derivation of these compact forms $M$ and $N$ is shown in Appendix \ref{derivation2}. Combining (\ref{4_wsbw}) and (\ref{4_wsww}) we get an objective function in terms of $\alpha$.
\[ \mathcal{J}(\alpha) = \frac{\alpha^T M \alpha}{\alpha^T N \alpha} \]
This problem can be solved by finding the leading eigenvectors of $N^{-1}M$. The projection of a new pattern $x$ onto $f$ is given by
\[ f \cdot \phi(x) = \sum_{i=1}^n \alpha_i k(x_i, x) \]
The estimation of $N \in \mathbb{R}^{n \times n}$ from a sample of size $n$ poses an ill-posed problem (since the sample size is not high enough to get an exact covariance structure in $\mathbb{R}^{n \times n}$). This problem is solved by replacing $N$ with $N_{\mu}$ as
\[ N_{\mu} = N + \mu I \]
where $\mu$ is a large positive constant and $I$ is the identity matrix. This has two possible benefits:
\begin{itemize}
\item It makes the problem numerically more stable since, for large $\mu$, $N_{\mu}$ becomes positive definite.
\item It decreases the bias in sample-based estimation of eigenvalues.
\end{itemize}

\section{Experiments}
\label{chap4_experiment}
An empirical study was conducted on two binary classification datasets, namely the \textit{rectangles-image} dataset and the \textit{convex} dataset. A short description of the \textit{rectangles-image} dataset is given in \autoref{chap_mkm}.

\subsection{Convex Dataset}
The \textit{convex} dataset consists of a single convex region in an image. The dataset was constructed by taking the intersection of a number of half-planes whose location and orientation were chosen uniformly at random. The classification task was to identify whether the shape enclosed in the image is convex or not. This dataset consists of 12000 training and 50000 testing samples of size 28$\times$28. Figure \ref{shape} shows some sample images from the \textit{rectangles-image} and \textit{convex} datasets.

In the experiments, KFDA with multi-layer arc-cosine kernels was used for feature extraction and a kNN classifier was used for classification. Table \ref{kfda_results} shows the results of the empirical study.
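To make the procedure concrete, the following is a minimal sketch in Python/NumPy of the regularised KFDA solution described above. The kernel shown here (an RBF kernel), the value of $\mu$ and all function names are illustrative placeholders of my own; the experiments in this chapter use multi-layer arc-cosine kernels, which are not reproduced in this sketch.
\begin{verbatim}
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Placeholder kernel; the experiments in this chapter use arc-cosine kernels.
    return np.exp(-gamma * np.sum((x - y) ** 2))

def kfda_direction(X1, X2, kernel=rbf_kernel, mu=1e-3):
    # Returns alpha such that f = sum_i alpha_i phi(x_i), following the
    # M and N construction above, with the regularisation N_mu = N + mu*I.
    X = np.vstack([X1, X2])
    n, n1, n2 = len(X), len(X1), len(X2)

    # Kernel matrices K_i with entries k(x_n, x_m^i)
    K1 = np.array([[kernel(x, y) for y in X1] for x in X])   # n x n1
    K2 = np.array([[kernel(x, y) for y in X2] for x in X])   # n x n2

    # M_i vectors and the between-class matrix M
    M1, M2 = K1.mean(axis=1), K2.mean(axis=1)
    M = np.outer(M1 - M2, M1 - M2)

    # Within-class matrix N = sum_i K_i (I - 1_{n_i}) K_i^T, regularised
    N = sum(Ki @ (np.eye(ni) - np.full((ni, ni), 1.0 / ni)) @ Ki.T
            for Ki, ni in ((K1, n1), (K2, n2)))
    N_mu = N + mu * np.eye(n)

    # Leading eigenvector of N_mu^{-1} M maximises J(alpha)
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(N_mu, M))
    alpha = np.real(eigvecs[:, np.argmax(np.real(eigvals))])

    # Projection of a new pattern x onto the learned direction f
    def project(x):
        return sum(a * kernel(xi, x) for a, xi in zip(alpha, X))
    return alpha, project
\end{verbatim}
In the experiments below, the projections produced by such a direction are the features handed to the kNN classifier.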
\begin{figure*}
\centering
\captionsetup{justification=centering,margin=0.1cm}
\includegraphics[scale=0.6]{figures/shapes}
\caption{Sample images from the \textit{rectangles-image} (first row) and \textit{convex} (second row) datasets.}
\label{shape}
\end{figure*}

\renewcommand{\arraystretch}{2.3}
\begin{table*}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\textbf{Dataset}} & \multicolumn{7}{ |c| }{\textbf{Loss in Percentage}} \\
\cline{2-8}
&$\textrm{SVM}_{\textrm{RBF}}$ & $\textrm{SVM}_{\textrm{Poly}}$ & NNet & DBN-3 & SAA-3 & DBN-1 & \textbf{KFDA}\\
\hline
\textit{rect-image} & 24.04 & 24.05 & 33.20 & 23.69 & 24.05 & 22.50 & \textbf{21.96}\\
\hline
\textit{convex} & 19.13 & 19.82 & 32.25 & 19.92 & \textbf{18.41} & 18.63 & 19.02\\
\hline
\end{tabular}
\caption{Experimental results of KFDA with multi-layer kernels.}
\label{kfda_results}
\end{table*}
\renewcommand{\arraystretch}{1}

\renewcommand{\arraystretch}{2}
\begin{table}
\centering
\begin{tabular}{|c|c|}
\hline
\textbf{Kernel Parameters} & \textbf{Loss in Percentage}\\
\hline
0 & 23.12\\
\hline
0,3 & 22.54\\
\hline
0,3,3 & 22.39\\
\hline
0,3,3,3 & 22.15\\
\hline
0,3,3,3,3 & 21.96\\
\hline
0,3,3,3,3,3 & 22.01\\
\hline
\end{tabular}
\caption{Change in classifier performance while increasing the number of layers for the \textit{rectangles-image} dataset}
\label{chap4_tab1}
\end{table}
\renewcommand{\arraystretch}{1}

\renewcommand{\arraystretch}{2}
\begin{table}
\centering
\begin{tabular}{|c|c|}
\hline
\textbf{Kernel Parameters} & \textbf{Loss in Percentage}\\
\hline
1 & 21.94\\
\hline
1 $\times$ 3 & 21.68\\
\hline
1 $\times$ 6 & 21.46\\
\hline
1 $\times$ 9 & 19.78\\
\hline
1 $\times$ 12 & 19.52\\
\hline
1 $\times$ 15 & 19.38\\
\hline
1 $\times$ 18 & 19.30\\
\hline
1 $\times$ 21 & 19.02\\
\hline
\end{tabular}
\caption{Change in classifier performance while increasing the number of layers for the \textit{convex} dataset}
\label{chap4_tab2}
\end{table}
\renewcommand{\arraystretch}{1}

For the \textit{rectangles-image} dataset, the best result was obtained for a five-layer KFDA with the kernel degree values in each layer given by [0,3,3,3,3]. For the \textit{convex} dataset, the best result was obtained from a model having 21 layers with the degree parameter equal to 1 in each layer. The variation in classifier performance as the number of layers is increased is shown in tables \ref{chap4_tab1} and \ref{chap4_tab2} for the \textit{rectangles-image} and \textit{convex} datasets respectively. In table \ref{chap4_tab2}, 1 $\times$ $n$ indicates that an arc-cosine kernel of $n$ layers is used with the kernel parameter equal to `1' in each layer.

\section{Conclusion}
\label{chap4_conc}
In this chapter we experimented with KFDA using multi-layer arc-cosine kernels. The results obtained are very promising. On the \textit{rectangles-image} dataset, the classifier performed even better than a DBN-based model. On the \textit{convex} dataset, its performance was better than all shallow models and was comparable with that of deep models. One of the striking observations from these results is that better performance is obtained when using either a highly non-linear arc-cosine kernel (degree $>$ 1) or a multi-layer arc-cosine kernel with a very large number of layers (above 10).
{ "alphanum_fraction": 0.7228600659, "avg_line_length": 59.843537415, "ext": "tex", "hexsha": "199dd3fe38b750d311aa626bce52badd8def2269", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "cc10673e695cbc0531f6268d729760705890a116", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "akhilpm/Masters-Project", "max_forks_repo_path": "Thesis/chapter4.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "cc10673e695cbc0531f6268d729760705890a116", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "akhilpm/Masters-Project", "max_issues_repo_path": "Thesis/chapter4.tex", "max_line_length": 660, "max_stars_count": null, "max_stars_repo_head_hexsha": "cc10673e695cbc0531f6268d729760705890a116", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "akhilpm/Masters-Project", "max_stars_repo_path": "Thesis/chapter4.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2877, "size": 8797 }
\newpage
\section{Results and evaluation of methods}\label{section:statistics_results}
In this section, different stratified sampling schemes are compared on synthetic data, primarily Beta-distributed data, and also on a more specific Bernoulli-uniform data set. In the next subsection \ref{sec:shapley} we consider the effectiveness of these different sampling methods specifically for approximating the Shapley Value in various cooperative games. The results and analysis of these methods are discussed in the following discussion section \ref{sec:discussion}.\footnote{Source code for all the experiments in this paper is available at:\\ \href{https://github.com/Markopolo141/Stratified\_Empirical\_Bernstein\_Sampling}{https://github.com/Markopolo141/Stratified\_Empirical\_Bernstein\_Sampling}}

\subsection{Benchmark algorithms}

We outline a range of benchmark algorithms used to evaluate the performance of the various methods on synthetic data sets. Then Section~\ref{ssec:SyntheticDists} describes two synthetic data sets and reports on the resulting distribution of errors under our benchmark algorithms.

In the numerical evaluations for synthetic data, we compare the following sampling methods:
\begin{itemize}
\item \textsc{SEBM} (Stratified empirical Bernstein method, without replacement): our SEBM method (per Algorithm \ref{alg2}) of iteratively choosing samples from strata to minimise the SEBB, given in Equation~\eqref{big_equation}. An initial sample of two data points from each stratum is used to initialise the sample variances, with additional samples made to maximally minimise the inequality at each step. All samples are drawn \textit{without} replacement.
\item \textsc{SEBM-W} (Stratified empirical Bernstein method with replacement): as above, with the exception that all samples are drawn \textit{with} replacement, and consequently the inequality does not utilise the martingale inequality given in Lemma~\ref{martingale0}.
\item \textsc{Simple} (Simple random sampling, without replacement): simple random sampling from the population irrespective of strata, \textit{without} replacement.
\item \textsc{Simple-W} (Simple random sampling with replacement): simple random sampling from the population irrespective of strata, \textit{with} replacement.
\item \textsc{Ney} (Neyman sampling, without replacement): the method of choosing samples \textit{without} replacement from strata in proportion to the strata variance (via Theorem \ref{thm:neyman_selection}).
\item \textsc{Ney-W} (Neyman sampling with replacement): the method of choosing samples \textit{with} replacement in proportion to the strata variance (via Theorem \ref{thm:neyman_selection}).
\item \textsc{SEBM*} (Stratified empirical Bernstein method with variance assistance): the method of iteratively choosing samples \textit{without} replacement from strata to minimise Equation~\eqref{eq1}, utilising the martingale Lemma~\ref{martingale0}.
\item \textsc{SEBM*-W} (Stratified empirical Bernstein method with variance assistance, with replacement): the method of iteratively choosing samples \textit{with} replacement from strata to minimise Equation~\eqref{eq1}.
\item \textsc{SECM} (Stratified empirical Chernoff method): the method of iteratively choosing samples from strata \textit{without} replacement to minimise the SECB, given in Equation~\eqref{another_big_equation}.
An initial sample of two data points from each stratum is used to initialise the sample variances, with additional samples made to maximally minimise the inequality at each step. All samples are drawn \textit{without} replacement.
\item \textsc{Hoeffding} (Unionised EBBs with Hoeffding's inequality): the method of sampling \textit{with} replacement to minimise the probability bound of Theorem \ref{triangle_theorem2} applied with Hoeffding's inequality (Theorem \ref{Hoeffdings_inequality_proper}).
\item \textsc{Audibert} (Unionised EBBs with Audibert et al.'s EBB): the method of sampling \textit{with} replacement to minimise the probability bound of Theorem \ref{triangle_theorem2} applied with Audibert et al.'s EBB (Theorem \ref{AudibertsEBB}).
\item \textsc{Maurer} (Unionised EBBs with Maurer \& Pontil's EBB): the method of sampling \textit{with} replacement to minimise the probability bound of Theorem \ref{triangle_theorem2} applied with Maurer \& Pontil's EBB (Theorem \ref{MandPsEBB}).
\item \textsc{EEBB} (Unionised EBBs with our Engineered EBB): the method of sampling \textit{with} replacement to minimise the probability bound of Theorem \ref{triangle_theorem2} applied with our fitted EBB (Equation \ref{eq:prob_bound}).
\item \textsc{Random} (stratified sampling with random samples from strata): the process of stratified sampling \textit{with} replacement, choosing random numbers of samples from each of the strata.
\end{itemize}

Note that three of the methods (\textsc{Ney}, \textsc{Ney-W} and \textsc{SEBM*}) are built upon the assumption of known strata variances, which are supplied to them, so that they may serve as a comparison with the performance that would be possible for methods with access to such information. Additionally, we note that for all other methods (where appropriate) we sampled to minimise a 50\% confidence interval (i.e., constants $p=0.5$ and $t=0.5$).

The differences between these methods provide comparisons of different algorithmic factors, such as the dynamics of sampling: with and without replacement; with stratification and without; between our method and Neyman sampling; and with and without perfect knowledge of stratum variances. For these methods, we consider the effectiveness of sampling Beta-distributed data and a case of uniform-and-Bernoulli data.

\subsection{Synthetic data}
\label{ssec:SyntheticDists}

The most immediate way to compare the effectiveness of our method(s) is to generate sets of synthetic data, and then numerically examine the distribution of errors generated by the different methods of choosing samples. In this section, we describe two types of synthetic data sets used in this evaluation, namely:
\begin{enumerate}
\item Beta-distributed stratum data, which are intended to reflect possible real-world data, and
\item a particular form of uniform and Bernoulli distributed stratum data, where our sampling method (SEBM) was expected to perform poorly.
\end{enumerate}

\subsubsection{Beta-distributed data}\label{sec:beta_distributed_data}

The first pool of synthetic data sets is intended to be representative of potential real-world data. These sets have between 5 and 21 strata, with the number of strata drawn with uniform probability, and each stratum sub-population has a size ranging from 10 to 201, also drawn uniformly.
The data values in each stratum are drawn from Beta distributions, with the classic probability density function
$$\phi(x)_{\{\alpha,\beta\}} =\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} x^{\alpha-1}(1-x)^{\beta-1} $$
where the $\alpha$ and $\beta$ parameters are drawn uniformly between 0 and 4 for each stratum, and $\Gamma$ is the gamma function.

\input{figs/big_table1.tex}
\input{figs/big_table2.tex}

Figures \ref{Table1} and \ref{Table111} compare the distribution of absolute errors achieved by each of the sampling methods over 5000 rounds of these data sets. Each panel presents the results that the methods achieve for a given budget of samples, expressed as a multiple of the number of strata (noting that data sets where the sampling budget exceeded the volume of data were excluded).

From the plots in Figures \ref{Table1} and \ref{Table111}, we can see that our sampling technique (SEBM and SEBM-W) performs comparably to, and sometimes better than, Neyman sampling (\textsc{Ney} and \textsc{Ney-W}), despite not having access to knowledge of stratum variances. Also, there is a notable similarity between SEBM* and SEBM. As expected, sampling without replacement always performs better than sampling with replacement for the same method, and this difference is magnified as the number of samples grows large in comparison to the population size. Finally, simple random sampling almost always performs worst, because it fails to take advantage of any variance information. These results and their interpretation are discussed and detailed in Section~\ref{sec:discussion} along with results from the other test cases discussed below.

\subsubsection{A uniform and Bernoulli distribution}
\label{sec:dataset2}

We also wish to examine data distributions in which our sampling method (SEBM) performs poorly, particularly compared to Neyman sampling (\textsc{Ney}). For this purpose, a data set with two strata is generated, with each stratum containing $1000$ points. The data in the first stratum is uniform continuous data in the range $[0,1]$, while the data in the second is Bernoulli distributed, with all zeros except for a specified small number $a$ of data points with value 1. For this problem, we conduct stratified random sampling with a budget of $300$ samples, comparing the SEBM*, SEBM and \textsc{Ney} methods.

The average error returned by the methods across 20,000 realisations of this problem, plotted against the number of successes $a$, is shown as a graph in Figure \ref{biggraph3}. This figure demonstrates that SEBM and SEBM* perform poorly when $a$ is small. This under-performance is not simply a result of the SEBM method oversampling in the process of learning the stratum variances (which was the intended demonstration), since the under-performance was present in SEBM* as well. The reasons for this under-performance are discussed in more detail in Section~\ref{subsection:main_discussion}. Before this discussion, however, we also consider the approximation of the Shapley Value as an example application of our stratified sampling method.

\input{figs/bernoulli_table.tex}
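As an illustration of the kind of experiment reported above, the following is a minimal Python/NumPy sketch of the two-stratum uniform-and-Bernoulli set-up: it generates the data set, spends a budget of 300 samples either equally across the strata or in proportion to $N_h\sigma_h$ (the classical Neyman allocation rule; this is my reading of the allocation described above as proportional to the strata variance, and may differ in detail from Theorem \ref{thm:neyman_selection}), and compares the absolute errors of the resulting stratified estimates of the population mean. The SEBM/SECM bounds themselves are not reproduced here, and all function names are illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def make_strata(a, n=1000):
    # Two strata of n points each: uniform on [0,1], and Bernoulli with a ones.
    s1 = rng.uniform(0.0, 1.0, n)
    s2 = np.zeros(n)
    s2[:a] = 1.0
    return [s1, s2]

def stratified_estimate(strata, n_samples):
    # Population-weighted combination of per-stratum sample means
    # (samples drawn without replacement).
    sizes = np.array([len(s) for s in strata], dtype=float)
    means = [rng.choice(s, size=k, replace=False).mean()
             for s, k in zip(strata, n_samples)]
    return float(np.dot(sizes / sizes.sum(), means))

def neyman_allocation(strata, budget):
    # Allocate the budget proportionally to N_h * sigma_h (classical Neyman rule).
    w = np.array([len(s) * s.std() for s in strata])
    return np.maximum(2, np.round(budget * w / w.sum()).astype(int))

strata = make_strata(a=5)
truth = np.concatenate(strata).mean()
budget = 300

err_equal = abs(stratified_estimate(strata, [budget // 2, budget // 2]) - truth)
err_neyman = abs(stratified_estimate(strata, neyman_allocation(strata, budget)) - truth)
print(err_equal, err_neyman)
\end{verbatim}
Averaging such errors over many realisations and over a range of $a$ values reproduces the qualitative shape of the comparison reported in Figure \ref{biggraph3}, although the figure itself was generated with the full set of methods described above.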
{ "alphanum_fraction": 0.802930622, "avg_line_length": 89.5714285714, "ext": "tex", "hexsha": "bb57692b20ccd2fbc87d93cc29343ac7295573c8", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "df7cffff8127641b0fed0309adf38cfc9372e618", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Markopolo141/Thesis_code", "max_forks_repo_path": "Thesis/chapters/statistics_results.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "df7cffff8127641b0fed0309adf38cfc9372e618", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Markopolo141/Thesis_code", "max_issues_repo_path": "Thesis/chapters/statistics_results.tex", "max_line_length": 296, "max_stars_count": null, "max_stars_repo_head_hexsha": "df7cffff8127641b0fed0309adf38cfc9372e618", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Markopolo141/Thesis_code", "max_stars_repo_path": "Thesis/chapters/statistics_results.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2338, "size": 10032 }
\documentclass[a4paper,12pt]{article}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{a4}
\usepackage[T1]{fontenc}
\usepackage[cyr]{aeguill}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{authblk}
\usepackage{listings}
\usepackage{subcaption}
\usepackage{float}
\usepackage[a4paper]{geometry} % wider margins
\geometry{hmargin=2.5cm,vmargin=1.65cm}
\usepackage[sectionbib]{chapterbib}

\title{TS114 Signal processing \\ MICA Project}
\author{ABIED Imad \\ Email: [email protected]
\and AHALLI Mohamed \\ Email: [email protected]
\and BIGI Mohamed \\ Email: [email protected]}
\date{11/05/2021}
\sloppy

\begin{document}
\maketitle
\begin{center} ENSEIRB-MATMECA \end{center}
\tableofcontents
\newpage

\section{Introduction}
The goal of this project was to analyse signals obtained by measuring the electrical activity of the heart. Such a signal is called an electrocardiogram, ECG for short. An ECG follows the pattern described in figure 1. Detecting this pattern consists of identifying the five characteristic points P, Q, R, S and T in each R-R interval of a real ECG signal, as shown in figure 2. The algorithm for this task is presented in the technical part below.

\begin{figure}[htbp]
\centerline{\includegraphics[scale = 0.5]{figure_1.png}}
\caption{ECG pattern}
\centerline{\includegraphics[scale = 0.3]{figure_2.png}}
\caption{Real ECG signal}
\end{figure}

Furthermore, cardiac pathologies are defined from the PQRST mathematical properties. Therefore, the identification of cardiac pathologies can be made automatic. Algorithms for automatic identification are discussed in the technical part of this report.

PQRST mathematical properties can change over time for different reasons. For example, the heart rate increases after physical activity. This is why the spectrogram was used during the project. The spectrogram was employed to understand how ECG characteristics change in the course of time, enabling us to choose the right time interval over which to apply the desired algorithms. For that reason the first section of the technical part is dedicated to spectrograms.

\section{Data visualization (spectrogram)}
Spectrograms are used to locally represent the spectrum of a signal. As the spectrum is a statistical measurement, it is necessary to define the term ``locally'' by a window of a certain length, noted N. The larger the window, the greater the frequency precision obtained, at the price of losing time accuracy. Conversely, the shorter the window, the greater the time accuracy obtained, at the cost of losing frequency precision. This is because a larger window contains more data with which to establish the spectrum, which is a statistical measure. Nevertheless, the spectrum computed with a large window is not ``very local''.

\begin{figure}[H]
\begin{subfigure}{\textwidth}
\centerline{\includegraphics[scale = 0.6]{figure_3_large_window.png}}
\caption{Spectrogram with a large window.}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centerline{\includegraphics[scale = 0.2]{figure_4_short_window.png}}
\caption{Spectrogram with a short window.}
\end{subfigure}
\end{figure}

This duality between frequency precision and time accuracy can be shown by the example ./src/duality\_time\_frequence.m. In this script the spectrogram of the same signal is plotted using a large window in figure 3 and a shorter one in figure 4. In figure 3, the yellow line is thin compared to the one in figure 4, making it easy to read the frequency value. On the other hand, it is very difficult to read the transition at 0.25s compared with figure 4.
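To illustrate this trade-off outside of MATLAB, the short Python sketch below (using SciPy and Matplotlib) computes two spectrograms of the same synthetic two-tone signal, once with a long window and once with a short one. The test signal, sampling frequency and window lengths are arbitrary choices for illustration and do not correspond to the script ./src/duality\_time\_frequence.m.
\begin{verbatim}
import numpy as np
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

fs = 1000                                  # sampling frequency in Hz (arbitrary)
t = np.arange(0, 0.5, 1 / fs)
# 50 Hz before 0.25 s, 120 Hz afterwards: a frequency transition at 0.25 s
x = np.where(t < 0.25, np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 120 * t))

for nperseg in (256, 32):                  # long window vs short window
    f, tt, Sxx = spectrogram(x, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
    plt.figure()
    plt.pcolormesh(tt, f, Sxx, shading='auto')
    plt.title('Spectrogram with window length N = %d' % nperseg)
    plt.xlabel('Time [s]')
    plt.ylabel('Frequency [Hz]')
plt.show()
\end{verbatim}
With the long window the two tones appear as thin, well-resolved horizontal lines, but the transition time is smeared; with the short window the transition at 0.25 s is sharp while the frequency lines become thick.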
\section{QRS complex Detection}
The QRS complex is detected using the Pan and Tompkins algorithm. The first step of this algorithm eliminates the interference of the T and P waves with the QRS complex. A well designed filter is proposed by Pan and Tompkins for this task. Its transfer function is given by:
\begin{equation}
H(z) = \frac{(1-z^{-6})^{2}}{(1-z^{-1})^{2}}\times\frac{(-1 + 32z^{-16} -32z^{-17} +z^{-32})}{(1-z^{-1})}
\end{equation}
Observing the fact that the R wave is very sharp, the derivative of the ECG signal takes large absolute values at the QRS complex. For this step, Pan and Tompkins suggest a filter with a transfer function equal to:
\begin{equation}
H(z) = \frac{1}{8T_s}\times(-z^{-2} - 2z^{-1} +2z +z^{2})
\end{equation}
As a consequence, if a well chosen window is used for moving window integration, the result will be a signal which takes large values over every QRS complex. In order to enhance the QRS complex domain, a thresholding operation is applied with a threshold equal to the mean of the ECG signal after integration. At this point, it is certain that the computed maximum for every QRS domain corresponds to the unique peak called R. Once the R peak is detected, Q and S are detected by searching consecutively for the first minimum on the left and the first minimum on the right. All these steps are applied to an ECG signal; the results at every step are plotted in figure 5.

\begin{figure}[htbp]
\centerline{\includegraphics[scale = 0.4]{figure_5.png}}
\caption{ECG signal at each step.}
\end{figure}

\begin{tabular}{ |p{5cm}|p{5cm}|p{5cm}| }
\hline
\multicolumn{3}{|c|}{Analysis of the filters used} \\
\hline
band-pass filter & high-pass filter & five-point differentiation filter\\
\hline
Nature : band-pass & Nature : high-pass & Nature : differentiation\\
Type : Infinite Impulse Response & Type : Infinite Impulse Response & Type : Finite Impulse Response \\
Causal : Yes & Causal : Yes & Causal : No \\
Group delay : 5 samples & Group delay : 16.49 samples & Group delay : 0 \\
Linear phase : Yes & Linear phase : Yes & Linear phase : Yes\\
\hline
\end{tabular}

\newpage
\section{P and T wave detection}
\subsection{About P and T waves}
Generally, P and T waves in an ECG (electrocardiogram) signal are lower in amplitude compared to the QRS complex, and they are contaminated with noise from various sources. These factors make the detection of P and T waves within an ECG a challenging task. Unlike a P wave, T waves are slightly asymmetrical: the peak of the wave is a little closer to its end than to its beginning.

\subsection{P and T wave detection method}
The T wave is taken to be the highest peak between the first R peak and 0.7 times the R-R interval, while the P wave is the highest peak in the remainder of the interval, as shown in the figures below.

\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.5]{T_70.png}}
\caption{Highest peak on 70\% of the R-R interval.}
\end{figure}
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.5]{P_30_30.png}}
\caption{Highest peak on the remaining 30\% of the R-R interval.}
\end{figure}

In order to detect P and T waves, two filters are used:
\begin{equation}
G_1(z) = 1 -z^{-6}
\end{equation}
and
\begin{equation}
G_2(z) = \frac{1-z^{-8}}{1-z^{-1}}
\end{equation}
The first filter $G_1(z)$ is a differentiator; it allows the detection of maxima, minima and null values. This is achieved by determining where the signal, after applying the differentiator $G_1(z)$, crosses the level 0.
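Before walking through the full search procedure in the next paragraph, the sketch below gives a minimal Python/SciPy version of this filtering and zero-crossing step. It assumes the ECG samples are already available in an array named ecg; the array names and helper functions are ours for illustration and are not taken from the project's MATLAB code.
\begin{verbatim}
import numpy as np
from scipy.signal import lfilter

# G1(z) = 1 - z^{-6}: a differentiator-like FIR filter
b1 = [1, 0, 0, 0, 0, 0, -1]
# G2(z) = (1 - z^{-8}) / (1 - z^{-1}): a moving sum over 8 samples
b2 = [1, 0, 0, 0, 0, 0, 0, 0, -1]
a2 = [1, -1]

def pt_candidates(ecg):
    # Indices where the filtered signal crosses zero; these correspond to
    # maxima, minima or null values of the original ECG.
    y = lfilter(b2, a2, lfilter(b1, [1.0], ecg))
    return np.where(np.diff(np.sign(y)) != 0)[0]

def highest_peak(ecg, candidates, lo, hi):
    # Candidate inside [lo, hi) with the largest ECG value, i.e. the P or T
    # peak for that search window (filter delay is ignored in this sketch).
    window = [i for i in candidates if lo <= i < hi]
    return max(window, key=lambda i: ecg[i]) if window else None
\end{verbatim}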
The algorithm functions as follows: the R-R interval is divided into two parts, a part containing 70\% of the R-R interval, in which the T wave is located, and another part containing the remaining 30\% of the R-R interval, in which the P wave is located. In each part, the locations at which the signal (after passing through both filters $G_1(z)$ and $G_2(z)$) crossed the level 0 were determined. As mentioned before, these locations correspond to either a maximum, a minimum, or a null value in the original ECG signal. Therefore, the locations that provide a maximum value are where the P and T waves are located. The figures below illustrate the locations of the P and T waves in the original signal, as well as their locations after each filter.

\begin{figure}[H]
\centerline{\includegraphics[scale=0.5]{ori_sig.png}}
\caption{Original ECG with locations of P and T waves}
\end{figure}
\begin{figure}[H]
\centerline{\includegraphics[scale=0.5]{sig_G1.png}}
\caption{ECG after undergoing the first filter}
\end{figure}
\begin{figure}[H]
\centerline{\includegraphics[scale=0.5]{sig_G2.png}}
\caption{ECG after undergoing the second filter}
\end{figure}

\section{Tachycardia/Bradycardia}
\subsection{Bradycardia}
Bradycardia is defined as a heart rate (HR) of less than 50 or 60 bpm (beats per minute), compared to a normal heart rate of 60 to 100 bpm. A slow heart rate is in general a sign of good health and fitness. However, a heart rate that is too slow (\textit{i.e.} bradycardia) is a sign of a problem with the heart's electrical system. It means that the heart's natural pacemaker isn't working right or that the electrical pathways of the heart are disrupted. The heart beat can be so slow that the heart can't pump enough blood for the body, which could be life-threatening if left untreated.

\subsection{Tachycardia}
Tachycardia is defined as a heart rate (HR) of over 100 bpm (beats per minute). With atrial or supraventricular tachycardia, electrical signals in the heart's upper chambers fire abnormally. This interferes with the heart's natural pacemaker, and causes abnormal heart rates. This rapid heartbeat keeps the heart's chambers from filling completely between contractions, which compromises blood flow to the rest of the body.

\subsection{Detecting cardiac rhythm anomalies: Tachycardia and Bradycardia}
An algorithm that calculates the heart rate of a patient makes it possible to detect cardiac rhythm anomalies. The R waves mark the moments at which the heart beats. Therefore, the duration between consecutive R peaks helps determine the heart rate. The value calculated is:
\begin{equation}
\overline{\Delta} = \frac{1}{N}\times\sum_{n=0}^{N-1}\Delta_n
\end{equation}
$\overline{\Delta}$ is the mean duration of the R-R intervals. Its value was converted to bpm (beats per minute) using the sampling frequency $F_s$ and a factor of 60.

\section{Other pathologies}
\subsection{Ectopic beats}
An ectopic heartbeat is when the heart either skips a beat or adds an extra beat. They are also called premature heartbeats. Ectopic heartbeats are usually not a cause for concern, and they may occur for no known reason. Despite the skipped or added beat, the heart functions normally.

\subsection{Detecting ectopic beats}
Detecting premature heartbeats comes down to detecting irregularities in the R-R intervals, since, as mentioned earlier (\textit{section 5.3}), R peaks characterize the moments at which the heart beats.
The algorithm devised calculates the lengths of the different R-R intervals and takes their maximum value; since an ectopic beat is placed either too close to or too far from a regular beat, either a maximum or a minimum value would have been appropriate. By calculating the same value for normal patients, a threshold $\epsilon$ was determined as $\epsilon = 1$. This value (\textit{\textbf{max}}) was calculated for different patients and compared with $\epsilon$. The results were consistent: \textit{\textbf{max}} was greater than $\epsilon$ in patients with ectopic beats, and less than or equal to it in the other cases.

\section{Fibrillation}
\subsection{Atrial fibrillation}
Atrial fibrillation is characterized by a strong heart rhythm irregularity, to the point where the rhythm can be considered white noise. The main property of white noise is that its autocorrelation function is null except at lag zero. That is why the autocorrelation function of the R-R intervals of the studied ECG is estimated with the formula:
\begin{equation}
\hat{\gamma_k} = \frac{1}{N-k-1}\times\sum_{n=0}^{N-k-1} (\Delta_{n+k} - \overline{\Delta})(\Delta_{n} - \overline{\Delta})
\end{equation}

\begin{figure}
\centerline{\includegraphics[scale=0.4]{figure_6_edge_effect.png}}
\caption{Edge effect.}
\end{figure}

Figure 11 is an example of an estimated autocorrelation function. At the right of the figure, large random values are observed because, in this area, the autocorrelation is estimated using a small number of samples, so the estimate is not representative enough. For that reason, the studied domain of the autocorrelation function does not include the values at the right edge. The heart rhythm is considered white noise if 40\% of its autocorrelation value at lag zero is still a maximum of the autocorrelation function over the studied domain.

\subsection{Ventricular fibrillation}
Ventricular fibrillation, or V-fib, is considered the most serious cardiac rhythm disturbance. Disordered electrical activity causes the heart's lower chambers (ventricles) to quiver, or fibrillate, instead of contracting (or beating) normally. This prevents the heart from pumping blood, leading to collapse and cardiac arrest.

\subsection{Detecting ventricular fibrillation}
Two main properties are used to detect ventricular fibrillation: the similarity between the patient's ECG (electrocardiogram) and a pure sine wave, and a rapid heart rate between 240 and 600 bpm (beats per minute). The similarity between a ventricular fibrillation ECG and the pure sine function was detected by resorting to the Fourier transform. Plotting the two Fourier transforms illustrates the similarity between them, as shown below.

\textbf{N.B: FT = Fourier transform}

\newpage
\begin{figure}
\centerline{\includegraphics[scale=0.6]{Fourrier_VF.png}}
\caption{FT of the ECG}
\end{figure}
\begin{figure}
\centerline{\includegraphics[scale=0.6]{sin_fou.png}}
\caption{FT of the sine}
\end{figure}
\newpage

The FTs are similar, with two important peaks at symmetric values of frequency. In order to detect the two peaks, a threshold was applied to the FT in order to count the number of values greater than $0.8\times n$, where $n$ is the maximum value of the FT.
An ECG with ventricular fibrillation generally has a low number of values above $0.8\times n$ compared to other ECGs. Combining this with the condition of a heart rate between 240 and 600 bpm, a case of ventricular fibrillation can be detected.

\section{Conclusion}
This report has discussed the implementation of algorithms used to detect the PQRST waves and to automatically identify heart pathologies. Besides that, it has introduced the utility of spectrograms as well as their limitations. The results obtained were satisfactory. However, we wished to develop a graphical user interface to add further value to our work, but we could not do so because we did not have enough time.

\newpage
\begin{thebibliography}{9}
\bibitem{ptwaves} P and T waves: \texttt{https://pubmed.ncbi.nlm.nih.gov/29484531/}
\bibitem{brady} Bradycardia: \texttt{https://www.uofmhealth.org/health-library/aa107571}
\bibitem{tachy} Tachycardia: \texttt{https://pubmed.ncbi.nlm.nih.gov/29484531/}
\bibitem{ectopic} Ectopic heartbeat: \texttt{https://www.medicalnewstoday.com/articles/323202\#what-is-an-ectopic-heartbeat}
\bibitem{vfib} Ventricular fibrillation: \texttt{https://www.heart.org/en/health-topics/arrhythmia/about-arrhythmia/ventricular-fibrillation}
\end{thebibliography}
\end{document}
{ "alphanum_fraction": 0.7722732985, "avg_line_length": 51.2508474576, "ext": "tex", "hexsha": "7017a890a53923ad8b0b4d7f843a8bca1267d6fa", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "892f46ac33c5b2abb495c49768003642d6bf982d", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ImadABID/MICA_Project", "max_forks_repo_path": "documents/Report_LaTeX/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "892f46ac33c5b2abb495c49768003642d6bf982d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ImadABID/MICA_Project", "max_issues_repo_path": "documents/Report_LaTeX/main.tex", "max_line_length": 621, "max_stars_count": null, "max_stars_repo_head_hexsha": "892f46ac33c5b2abb495c49768003642d6bf982d", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ImadABID/MICA_Project", "max_stars_repo_path": "documents/Report_LaTeX/main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3836, "size": 15119 }
\documentclass[letterpaper]{article} \usepackage{textcomp} \usepackage[firstinits=true, isbn=false, url=false, doi=false, style=ieee, defernumbers=true, sorting=ydnt]{biblatex} \renewbibmacro*{bbx:savehash}{} \addbibresource{myref.bib} \nocite{*} %\usepackage[style=numeric,sorting=ydnt]{biblatex} \usepackage{hyperref} \usepackage{geometry} \usepackage{tabularx} % Comment the following lines to use the default Computer Modern font % instead of the Palatino font provided by the mathpazo package. % Remove the 'osf' bit if you don't like the old style figures. \usepackage[T1]{fontenc} \usepackage[sc,osf]{mathpazo} \usepackage{etaremune} \newcommand{\ypl}[1]{\textcolor{red}{[YPL: #1]}} % Set your name here \def\name{Yoke Peng Leong} % Replace this with a link to your CV if you like, or set it empty % (as in \def\footerlink{}) to remove the link in the footer: \def\footerlink{} % The following metadata will show up in the PDF properties \hypersetup{ colorlinks = true, urlcolor = blue, pdfauthor = {\name}, pdfkeywords = {robotics,control,estimation}, pdftitle = {\name: Curriculum Vitae}, pdfsubject = {Curriculum Vitae}, pdfpagemode = UseNone } \geometry{ body={7.5in, 9.5in}, left=0.5in, top=0.75in } % Customize page headers \pagestyle{myheadings} \markright{\name} \thispagestyle{empty} % Custom section fonts \usepackage{sectsty} % \sectionfont{\rmfamily\mdseries\Large} \subsectionfont{\rmfamily\mdseries\itshape\large} \usepackage{titlesec} % Allows creating custom \section's\usepackage{url} \titleformat{\section}{\rmfamily\mdseries\Large}{}{0em}{}[\vspace{-8px} \hrulefill] % Section formatting % Other possible font commands include: % \ttfamily for teletype, % \sffamily for sans serif, % \bfseries for bold, % \scshape for small caps, % \normalsize, \large, \Large, \LARGE sizes. % Don't indent paragraphs. \setlength\parindent{0em} % Make lists without bullets %\renewenvironment{itemize}{ % \begin{list}{}{ % \setlength{\leftmargin}{1.5em} % } %}{ % \end{list} %} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} % Place name at left {\huge \name} \hrulefill \vspace{10px} % Alternatively, print name centered and bold: %\centerline{\huge \bf \name} \begin{minipage}{0.45\linewidth} %\href{http://www.unc.edu/}{California Institute of Technology} \\ California Institute of Technology\\ 1200 E. California Blvd.\\ MC 305-16 \\ Pasadena, CA 91125 \end{minipage} \begin{minipage}{0.45\linewidth} \begin{tabular}{ll} Phone: & (847) 644-2416 \\ %Fax: & (919) 962-5678 \\ Email: & \href{mailto:[email protected]}{[email protected]} \\ Website: & \href{http://www.cds.caltech.edu/~yleong/}{\tt http://www.cds.caltech.edu/$\sim$yleong/} \end{tabular} \end{minipage} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section*{Education} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}} l l r} 2012 - Present &\textbf{California Institute of Technology} & Pasadena, CA\\ & Ph.D. in Control and Dynamical Systems & \\ & \textit{Adviser: Dr. John Doyle and Dr. Joel Burdick} & \\ \\ 2011 - 2012 &\textbf{Northwestern University} & Evanston, IL\\ & M.S. in Mechanical Engineering (Specialization: Robotics \& Control) & \\ & \textit{Thesis: Surface Feature Detection Based on Proprioception of a Robotic}&\\ & \textit{Finger during Haptic Exploration} & \\ \\ 2008 - 2012 &\textbf{Northwestern University}& Evanston, IL\\ & B.S. 
in Mechanical Engineering (Concentration: Mechatronics) & \\ &Minor: Economics & \\ & \textit{Summa cum Laude} & \end{tabular*} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section*{Honors and Awards} \begin{itemize}\itemsep1pt \parskip0pt \parsep0pt \item Caltech Computing and Mathematical Sciences (CMS) Fellowship, 2012-2013 \item Tau Beta Pi Engineering Honors Society Fellow, 2012-2013 \item Malaysian Public Service Department Scholarships for Undergraduate Education Abroad at USA, 2007-2012 \end{itemize} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section*{Current Research Projects} \begin{tabularx}{\textwidth}{@{\extracolsep{\fill}} X} \textbf{Nonlinear Optimal Control}\\ \textit{Adviser: Joel Burdick, John Doyle} \\ {- Synthesized control Lyapunov functions of stochastic nonlinear systems using Sum of Squares method} \\ {--> Constructed a novel general approach to compute suboptimal controller with guarantees on performance and approximation errors} \\ {- Solved high dimensional (> 6D) linear Hamilton Jacobi Bellman equation, a PDE, using tensor decomposition and alternating least squares in MATLAB} \\ {--> Increased the speed (hours to minutes) and stability of the alternating least squares algorithm} \\ \\ \textbf{Control Engineering in Neuroscience}\\ \textit{Adviser: Joel Burdick, John Doyle} \\ {- Designed and conducted human subject experiments to study human sensorimotor control feedback based on robust control theory} \\ {- Processed and analyzed motion capture data using Bash and MATLAB to confirm theoretical predictions} \\ {--> Discovered important trends (predicted by theoretical analysis and confirmed with experiments) that were neglected in previous studies} \end{tabularx} \newpage %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section*{Research Experience} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}} l l r} May 2014 - July 2014 &\textbf{IMDEA Software Institute} & Madrid Institute for Advanced Studies\\ & \textit{Research Intern (Adviser: Dr. Pavithra Prabhakar)} & \\ & \multicolumn{2}{p{0.8\textwidth}}{- Synthesized optimal control strategy for hybrid dynamical systems using an abstraction-refinement procedure that preserves the transition cost} \\ & \multicolumn{2}{p{0.8\textwidth}}{--> Developed a tool in Python for synthesizing the controller} \\ \\ July 2012 - Aug 2012 &\textbf{Underwater Robotics Research Group} & Universiti Sains Malaysia\\ & \textit{Research Assistant (Adviser: Dr. Mohd Rizal Arshad)} & \\ & \multicolumn{2}{p{0.8\textwidth}}{- Modeled underwater acoustics wave propagation for jellyfish detection} \\ & \multicolumn{2}{p{0.8\textwidth}}{- Developed a model to estimate backscattering wave strength of a jellyfish } \\ \\ Dec 2010 - Jun 2012 &\textbf{Murphey Lab} & Northwestern University\\ & \textit{Undergraduate Researcher (Adviser: Dr. 
Todd Murphey)} & \\ & \multicolumn{2}{p{0.8\textwidth}}{- Created a 3D dynamic model of 3-joint finger tapping and sliding in Mathematica} \\ & \multicolumn{2}{p{0.8\textwidth}}{- Extended the hybrid system switching time optimization to systems with mixed dynamics and impulses} \\ & \multicolumn{2}{p{0.8\textwidth}}{--> Constructed a new smoothing algorithm to detect and localize surface feature from noisy proprioceptive measurements of a robotic finger using the impulsive hybrid system optimization technique} \end{tabular*} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section*{Publications} \printbibliography[heading=subbibliography,title={Journal Articles},type=article] \printbibliography[heading=subbibliography,title={Refereed Conference Papers},type=inproceedings] \printbibliography[heading=subbibliography,title={Posters/Abstracts},type=misc] \printbibliography[heading=subbibliography,title={Master's Thesis},type=thesis] \newpage %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section*{Teaching Experience} \subsection*{Teaching Assistant} \begin{itemize} \item ME 115 Introduction to Kinematic and Robotics (Spring 2015) \item CNS 186 Vision: From Computational Theory to Neuronal Mechanisms (Winter 2015) \item ACM 104 Linear Algebra (Fall 2014) \end{itemize} \subsection*{Guest Lecturer} %\subsubsection*{California Institute of Technology} \begin{itemize} \item CDS 240 Nonlinear Dynamical Systems (April 22, 2016) \item CDS 212 Introduction to Modern Control (May 14, 2015) \end{itemize} \subsection*{Students Advised} \begin{itemize} \item Elis Stefansson (KTH Institute of Technology, Caltech Summer Undergraduate Research Fellowship, 2015) \end{itemize} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\section*{Talks/Seminars/Lectures} % %\vspace{-15px} %\hrulefill %\vspace{10px} % %\begin{itemize} %\item Talk on ``The Role of Vision in Sensorimotor Control of Human Stick Balancing'' at University of California, Los Angeles (October 26, 2015) %\end{itemize} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section*{Work Experience} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}} l l r} May 2016 - Sept 2016 &\textbf{Datascope Analytics} & Chicago, IL\\ & \textit{Data Science Intern} & \\ & \multicolumn{2}{p{0.8\textwidth}}{- Developed a survey analysis website application that can automatically generate useful data relationships using Django and AngularJS} \\ & \multicolumn{2}{p{0.8\textwidth}}{- Facilitated group discussions with the executive team of a client in a brainstorming workshop} \\ & \multicolumn{2}{p{0.8\textwidth}}{- Created a website application that displays the train rumbling by the office using the Chicago Transit Authority's Train Tracer API} \\ & \multicolumn{2}{p{0.8\textwidth}}{- Released a Python package that simplifies analysis of time series data at irregular time intervals} \\ & \multicolumn{2}{p{0.8\textwidth}}{- Wrote a blog post that discusses the rise of the Internet of Things} \\ \\ Sept 2010 - Jun 2012 &\textbf{Northwestern University Athletic Department} & Evanston, IL\\ & \textit{N'CAT Tutor} & \\ & \multicolumn{2}{p{0.8\textwidth}}{- Assisted student athletes in improving their academics performances in various freshman engineering classes (e.g. 
MATLAB, Linear Algebra, Physics) and Mechanical Engineering classes (e.g. Fluid Mechanics, Thermodynamics) via weekly one-to-one tutoring sessions} \\ & \multicolumn{2}{p{0.8\textwidth}}{- Motivated student athletes to do well in both sports and academics by giving advice on time management and stress management} \\ \\ Feb 2009 - Jun 2012 &\textbf{Northwestern University Information Technology} & Evanston, IL\\ & \textit{Technology Lab Consultant of Academic \& Research Technology (A\&RT)} & \\ & \multicolumn{2}{p{0.8\textwidth}}{- Aided users with A\&RT-supported applications and utilities including Internet based applications, word processing, spreadsheet generation and manipulation, document format conversion, and information recovery} \\ & \multicolumn{2}{p{0.8\textwidth}}{- Developed a student job applications website which involves database management, browser scripting, and server scripting} \\ \\ Jun 2011 - Sept 2011 &\textbf{Murphey Lab} & Evanston, IL\\ & \textit{Undergraduate Researcher} & \\ & \multicolumn{2}{p{0.8\textwidth}}{(Experience summarized above)} \\ \\ \end{tabular*} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}} l l r} Jun 2010 - Sept 2010 &\textbf{Ethos \& Company} & Malaysia\\ & \textit{Strategy \& Management Consulting Intern} & \\ & \multicolumn{2}{p{0.8\textwidth}}{- Collaborated with colleagues on two projects: } \\ & \multicolumn{2}{p{0.8\textwidth}}{\quad (a) Developed a framework to capture key synergies within the national automotive industry } \\ & \multicolumn{2}{p{0.8\textwidth}}{\quad (b) Assisted a global agribusiness corporation to achieve 5-year growth and profitability target} \\ & \multicolumn{2}{p{0.8\textwidth}}{- Conducted company/industry research and performed data analysis using Excel to discover trends and test hypotheses} \\ & \multicolumn{2}{p{0.8\textwidth}}{- Developed and conducted presentations for both the client and project team} \end{tabular*} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section*{Leadership Experience} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}} l l} Jun 2014 - May 2016 &\textbf{Caltech Graduate Student Council}\\ & \textit{CDS Option Representatives} \\ & \textit{Research Communication Chair (2015-2016)} \\ & \multicolumn{1}{p{0.8\textwidth}}{- Advocated for graduate students in CDS option (major) and international graduate students } \\ &\multicolumn{1}{p{0.8\textwidth}}{- Organized the 2016 GSC Graduate Student Poster Session} \\ &\multicolumn{1}{p{0.8\textwidth}}{- Organized two lunches for ``Take an alumni to lunch'' series } \\ &\multicolumn{1}{p{0.8\textwidth}}{- Coordinated off campus concert trips for 20 - 30 graduate students per trip } \\ \\ \end{tabular*} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}} l l} Summer 2014, 2015 &\textbf{Caltech Teaching Conference}\\ & \textit{Committee Member} \\ &\multicolumn{1}{p{0.8\textwidth}}{- Organized and facilitated a session that discusses teaching and mentoring (2015) } \\ &\multicolumn{1}{p{0.8\textwidth}}{- Facilitated a session on creating an academic career portfolio (2014) } \\ \\ Feb 2011 - Feb 2012 &\textbf{Tau Beta Pi Engineering Honors Society, IL-Gamma Chapter}\\ & \textit{Recording Secretary} \\ & \multicolumn{1}{p{0.8\textwidth}}{- Reformed project management and record-keeping of the group using Google products for more efficient communication and exec board transition} \\ &\multicolumn{1}{p{0.8\textwidth}}{- Organized various community service activities} \\ 
&\multicolumn{1}{p{0.8\textwidth}}{- Created a graduate school mentoring program for members interested in pursuing a graduate degree} \\ \\ Sept 2009 - Apr 2011 &\textbf{Engineers for a Sustainable World}\\ & \textit{Webmaster \& Project Team Member} \\ & \multicolumn{1}{p{0.8\textwidth}}{- Designed a lever mechanism which assists technicians in priming a ram pump using NX for a hydraulic ram pump installation project in Philippines} \\ &\multicolumn{1}{p{0.8\textwidth}}{- Redesigned layout of ESW's official website (\href{http://www.eswnu.org}{http://www.eswnu.org}) to ease user navigation} \\ &\multicolumn{1}{p{0.8\textwidth}}{- Reconstructed the website by incorporating CSS in style designing and PHP in scripting} \\ \\ Sept 2009 - Dec 2010 &\textbf{Gateway Science Workshop}\\ & \textit{Facilitator (Engineering Analysis)} \\ & \multicolumn{1}{p{0.8\textwidth}}{- Facilitated weekly two-hour group study workshops for engineering freshmen enrolled in Engineering Analysis (MATLAB, Linear Algebra, Mechanics, Ordinary Differential Equations)} \\ &\multicolumn{1}{p{0.8\textwidth}}{- Engaged students in group discussions to encourage critical thinking on engineering concepts and applications} \\ &\multicolumn{1}{p{0.8\textwidth}}{- Monitored students' progress and made changes to the workshop accordingly} \\ \\ Sept 2008 - Jun 2009 &\textbf{Northwestern University Solar Car Team} (NUsolar)\\ & \textit{Electrical Team Member \& Business Team Member} \\ & \multicolumn{1}{p{0.8\textwidth}}{- Worked on a solar powered car that won 3rd place in the Formula Sun Gran Prix 2009} \\ &\multicolumn{1}{p{0.8\textwidth}}{- Designed and built circuitry for the solar car's new electrical system} \\ &\multicolumn{1}{p{0.8\textwidth}}{- Researched sponsorship opportunities for the solar car project} \end{tabular*} \newpage %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section*{Community Service} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}} l l} Dec 2008 - Present &\textbf{Alternative Student Breaks}\\ & \multicolumn{1}{p{0.8\textwidth}}{- Participated in a week-long service learning trip during school breaks} \\ &\multicolumn{1}{p{0.8\textwidth}}{- Volunteered at children hospital, national parks, and various local non-profit organizations in the United States} \\\\ Oct 2013 - Present &\textbf{Caltech RISE Program}\\ & \multicolumn{1}{p{0.8\textwidth}}{- Assisted high school students who are weak in mathematics and sciences to learn the subjects} \\ \\ \end{tabular*} %\newpage %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section*{Language Skill} English (Fluent), Mandarin (Fluent), Cantonese (Native), Malay (Fluent), Japanese (Basic) %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section*{Computer Skill} \textbf{Advanced:} Mathematica, MATLAB \\ \textbf{Intermediate:} Python, Javascript, HTML, CSS \\ \textbf{Basic:} Simulink, C/C++, Bash, NX (Unigraphics), ANSYS % Footer \begin{center} \begin{footnotesize} \vspace{20px} Last updated: \today \\ \href{\footerlink}{\texttt{\footerlink}} \end{footnotesize} \end{center} \end{document}
{ "alphanum_fraction": 0.6770638993, "avg_line_length": 46.4823848238, "ext": "tex", "hexsha": "a470a0c901633826b04709215cf8c07e183f6ccd", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "58d9b2675a8079c2c6b075c707234adca6f22407", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ypleong/CV", "max_forks_repo_path": "ypleong_CV.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "58d9b2675a8079c2c6b075c707234adca6f22407", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ypleong/CV", "max_issues_repo_path": "ypleong_CV.tex", "max_line_length": 304, "max_stars_count": null, "max_stars_repo_head_hexsha": "58d9b2675a8079c2c6b075c707234adca6f22407", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ypleong/CV", "max_stars_repo_path": "ypleong_CV.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4610, "size": 17152 }
\documentclass[a4paper,12pt]{article}
\usepackage[utf8]{inputenc}
\usepackage{amsmath,amssymb,amsthm,amsfonts,mathtools}
\usepackage[inline]{enumitem}
\usepackage{soul}
\usepackage{cancel}
\usepackage{hyperref}
\usepackage{centernot}
\usepackage{pifont}
\usepackage{changepage}
\usepackage{subcaption}
\usepackage[section]{placeins}
\usepackage{lipsum, graphicx, caption}
\usepackage{array}
\usepackage{float}
\usepackage{commath}
\usepackage{wrapfig}

\theoremstyle{definition}
\newtheorem{innercustomgeneric}{\customgenericname}
\providecommand{\customgenericname}{}
\newcommand{\newcustomtheorem}[2]{%
  \newenvironment{#1}[1]
  {%
    \renewcommand\customgenericname{#2}%
    \renewcommand\theinnercustomgeneric{##1}%
    \innercustomgeneric
  }
  {\endinnercustomgeneric}
}
\newcustomtheorem{customthm}{Theorem}
\newcustomtheorem{customlem}{Lemma}
\newcustomtheorem{customdefn}{Definition}
\newcustomtheorem{customprop}{Proposition}
\newcustomtheorem{customexer}{Exercise}

\renewcommand{\qedsymbol}{$\blacksquare$}
\setlength\parindent{0pt}
\let\emptyset\varnothing

\usepackage{geometry}
\geometry{
  a4paper,
  portrait,
  total = {170mm,257mm},
  left = 20mm,
  top = 20mm,
}

\usepackage{xcolor}
\usepackage{pagecolor}
\pagecolor{white}
\color{black}

\title{\textbf{AI For Everyone}}
\author{
  \textbf{Om Prabhu}\\
  19D170018\\
  Undergraduate, Department of Energy Science and Engineering\\
  Indian Institute of Technology Bombay\\}
\date{Last updated \today}

\begin{document}

\maketitle
\vspace{-12pt}
\hrulefill
\vspace{6pt}

\textbf{NOTE:} This document is a brief compilation of my notes taken during the `AI For Everyone' course by \texttt{\href{https://www.deeplearning.ai/}{deeplearning.ai}}. You are free to read and modify it for personal use. You may check out the course here: \texttt{\href{https://www.coursera.org/learn/ai-for-everyone}{https://www.coursera.org/learn/ai-for-everyone}}.

\hrulefill

\tableofcontents

\vspace{6pt}
\hrulefill
\pagebreak

\section{Introduction}

\subsection{About myself}

Hello. I am Om Prabhu, currently an undergrad at the Department of Energy Science and Engineering, IIT Bombay. If you have gone through my website (\texttt{\href{https://omprabhu31.github.io/}{https://omprabhu31.github.io/}}), which is probably where you found this document, you will know that I am quite a bit into programming and tinkering with code to try and do cool stuff. Additionally, I love playing video games, listening to music and engaging in a little bit of creative writing as and when I get time. With this brief self-introduction, let us get into why I decided to pursue this course.

\subsection{About this course}

As you probably know, AI (artificial intelligence) is rapidly changing the way we work and live. It is difficult to name an industry that is not likely to be impacted by AI in the near future (I initially thought of the textile industry as an example, but a simple Google search proved exactly how wrong I was). AI is already generating huge amounts of industrial revenue and is likely to create 13 trillion US dollars of value per year by 2030 (source: McKinsey Global Institute).

\vspace{6pt}
Hence, it is important to gain at least a general overview of what makes AI such a powerful tool. Right off the bat, one of the major reasons AI has taken off recently is the rise of neural networks and deep learning. But this is not all.
One needs to learn what types of data are valuable to AI and how the type and amount of data influence the performance of a neural network. Further, it is also important to know how AI can be used to build personal as well as company projects. Lastly, it is also important to know how AI will affect society and jobs, so that one is better able to understand AI technology and navigate this rise of AI.

\vspace{6pt}
With all this said, let us try to understand what AI really is and accomplish the above objectives.

\hrulefill
\pagebreak

\section{What is AI?}

AI is a very happening industry today and there is a lot of excitement among people as to how the rise of AI will map out. While this has boosted the development of AI technologies even further, it has also led to irrational fears in society. One of the major reasons for this is that not many people realize that AI can actually be put into 2 separate categories:
\begin{itemize}
    \item ANI (artificial narrow intelligence): can do one thing (eg: smart speaker, self-driving car, web search algorithms); incredibly valuable in specific industries due to its narrow application
    \item AGI (artificial general intelligence): can do anything a human can do (perhaps even more things)
\end{itemize}
While the world has seen tremendous progress with ANIs, the same cannot be said for AGIs. This lack of distinction between ANIs and AGIs is what has led to fears of super-intelligent robots taking over the world.

\vspace{6pt}
In this section, we will be mainly looking at what ANIs can do and how to apply them to real-world problems.

\subsection{Machine Learning}

The rise of AI has been driven by one major tool known as Machine Learning. While the term might make machines sound omniscient, this is far from true. In fact, the most used form of ML is what is known as Supervised Learning (or A $\rightarrow$ B mapping):
\begin{itemize}
    \item learning a function that maps input to output based on a database of example I/O pairs
    \item the learning algorithm analyses example training data to generate a function that can map new examples
    \item eventually the algorithm can \textit{learn} to predict the target output in previously unseen situations
\end{itemize}
One major application of this technology is in the online advertising industry. The input is advertisement details \& some user info, from which the AI algorithm tries to figure out whether the user will click on the ad or not. This is how users are shown only a certain set of advertisements online.

\vspace{6pt}
Another application of supervised learning lies in self-driving cars. The input is a set of images \& some radar info from sensors on the car. The AI uses this data to output the position of nearby cars and/or obstacles so that the self-driving car can avoid them.

\vspace{6pt}
The concept of merely mapping an input to an output may seem limiting, but it can evidently be very valuable once a suitable application scenario is found. Now, while the idea of supervised learning has been around for decades, it has taken off only in recent years. This is mainly because the technology to train large neural networks on huge amounts of data, and thereby keep improving performance, simply did not exist earlier.
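To make the idea of an A $\rightarrow$ B mapping a little more concrete, here is a short Python sketch (my own addition, not part of the course) that fits a simple supervised model on the toy house-price table used in the next subsection. The choice of library (scikit-learn) and of a linear model are assumptions made purely for illustration.

\begin{verbatim}
# Supervised learning as an A -> B mapping: house size (A) -> price (B).
# Assumes scikit-learn is installed; the numbers are the toy table used later.
from sklearn.linear_model import LinearRegression

sizes  = [[548], [679], [834], [1209], [1367]]   # input A: size in sqft
prices = [119, 167, 233, 342, 399]               # output B: price in 1000$

model = LinearRegression()
model.fit(sizes, prices)            # "learn" the mapping from example pairs

print(model.predict([[1000]]))      # estimate the price of an unseen house
\end{verbatim}

Real systems differ mainly in scale: the input may be millions of pixel or word values rather than a single number, and the model a large neural network rather than a straight line, but the learn-from-examples pattern is the same.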
This relationship between the amount of data, the size of the model and the resulting performance can be illustrated through a graph as follows:
\begin{center}\includegraphics[width=\textwidth]{data_vs_performance.png}\end{center}
\begin{itemize}
    \item for traditional AI systems, the data vs performance graph maxes out pretty early
    \item on training increasingly complex neural networks with higher amounts of data, performance keeps getting better for much longer
\end{itemize}
Hence, to achieve the highest performance levels, we need two things. Firstly, it helps to have a lot of data - this is where terms like `big data' come in. Additionally, we need the ability to train large neural networks, which is made possible by specialized processors like advanced GPUs.

\subsection{Data}

Data is one of the 2 main things required to improve the performance of AI systems. But simply having lots of data is not always helpful - we need to have the right type of data in the right format (structured or unstructured). Let's take a look at an example of a `dataset' (or a table of data):
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
\textsc{Size of house (sqft)} & ... & \textsc{Price of house} (1000\$)\\
\hline 548 & & 119\\
\hline 679 & & 167\\
\hline 834 & & 233\\
\hline 1209 & & 342\\
\hline 1367 & & 399\\
\hline
\end{tabular}
\end{center}
In practice, we will need a lot more than just 5 data entries to build an AI system, but let's work with this for now. The above dataset could work for an AI which checks whether houses are priced appropriately or not. In this case, the input would be the size of the house and the output would be the price. Going further, we might try to improve our AI by adding more input data fields such as the number of bedrooms, location, etc.

\vspace{6pt}
Another application of the same dataset would be figuring out the most appropriate house for a consumer on a fixed budget. In this case, our input and output fields are swapped compared to the first application. What this means in essence is that, given a dataset, it is up to us to decide what the input and the output are, and how to choose these definitions to bring the maximum value to our product.

\subsubsection{Collection of data}

So far, we have established that data is an important tool for AI systems and that there is a certain flexibility regarding the choice of input and output. But how do we get data?
To discuss this, let us now switch to the more traditional example in machine learning of an algorithm designed to recognize images of cats: \begin{itemize} \item manual labelling: collect a set of pictures and manually label them as `cat' or `not cat' \begin{itemize} \item[$-$] tried and true way of obtaining a highly reliable dataset having both input and output fields \item[$-$] difficult, given the huge amount of data required (usually on the scale of several thousands of entries) \end{itemize} \item observing behaviours \begin{itemize} \item[$-$] user behaviour: e-commerce websites keeping a tab on prices offered to users and whether they bought the product or not, something like this: \end{itemize} \end{itemize} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \textsc{User ID} & \textsc{Time} & \textsc{Price} (\$) & \textsc{Purchased}\\ \hline 0156 & Feb 22, 09:19:19 & 19.07 & No\\ \hline 1548 & Apr 01, 23:34:56 & 23.01 & Yes\\ \hline 4898 & May 23, 11:59:02 & 18.72 & Yes\\ \hline 8896 & Jul 10, 17:42:37 & 16.55 & No\\ \hline \end{tabular} \end{center} \begin{itemize} \item[] \begin{itemize} \item[$-$] machine behaviour: fault prediction in machines based on operating conditions, something like this: \end{itemize} \end{itemize} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \textsc{Machine ID} & \textsc{Temperature (K)} & \textsc{Pressure (atm)} & \textsc{Fault}\\ \hline 55132 & 332 & 10.96 & Yes\\ \hline 29475 & 378 & 8.22 & Yes\\ \hline 00826 & 489 & 5.78 & No\\ \hline 19475 & 653 & 2.99 & No\\ \hline \end{tabular} \end{center} \begin{itemize} \item download from internet/partnerships: pre-compiled datasets that can be readily downloaded from the web (after obtaining required licenses if any), or obtained from partners (eg: a company obtaining fault analysis datasets from machine manufacturers) \end{itemize} \subsubsection{Misconceptions about data} When thinking about the use of data, many people believe that they should have a lot of data on hand before feeding it into an AI system, and that having vast amounts of data will ensure the success of the AI system. Here's why this may not always work in practice: \begin{itemize} \item more data does NOT mean a perfect dataset \begin{itemize} \item better to start relatively small and keep on continuously feeding data to the AI team \item more often than not, the AI team provides dynamic feedback to the IT team regarding what data is useful and what type of IT infrastructure to invest in \end{itemize} \item do NOT assume the success of an AI team just because it has a lot of data $-$ garbage in, garbage out \begin{itemize} \item not all data is valuable, `bad' data will lead to AI learning inaccurate things \item processing huge amounts of data needs appropriate infrastructure to complement it \end{itemize} \end{itemize} There are many more data problems that may arise in practice, like: \begin{itemize} \item incorrect/missing values, or incomplete data: refer to the below example \end{itemize} \begin{center} \begin{tabular}{|c|c|c|} \hline \textsc{Size of house (sqft)} & \textsc{No. 
of bedrooms} & \textsc{Price of house} (1000\$)\\
\hline 548 & 1 & 119\\
\hline 679 & 1 & 0.001\\
\hline 834 & 2 & unknown\\
\hline unknown & 3 & 342\\
\hline 1367 & 54 & 399\\
\hline
\end{tabular}
\end{center}
\begin{itemize}
    \item multiple types of data: AI algorithms work very well for all types of data, but the techniques for dealing with them might vary
    \begin{itemize}
        \item unstructured data: images, audio, video, text documents
        \item structured data: tables, spreadsheets, databases
    \end{itemize}
\end{itemize}

\subsection{AI terminology}

Up till now, we've been throwing around terms like AI, machine learning, neural networks, etc. Let's briefly explore what these terms actually mean.

\subsubsection{Machine learning vs. data science}

There is a thin line between what can be interpreted as machine learning and as data science. Let's say we have a dataset of houses like the one below:
\begin{center}
\begin{tabular}{|p{7em}|p{6em}|p{6em}|p{6em}|p{8em}|}
\hline
\textsc{Size of house (sqft)} & \textsc{No. of bedrooms} & \textsc{No. of bathrooms} & \textsc{Newly renovated} & \textsc{Price of house} (1000\$)\\
\hline 548 & 1 & 2 & N & 119\\
\hline 679 & 1 & 2 & N & 167\\
\hline 834 & 2 & 3 & Y & 233\\
\hline 1209 & 3 & 4 & Y & 342\\
\hline 1367 & 4 & 2 & N & 399\\
\hline
\end{tabular}
\end{center}
An application of this dataset that helps construction companies price houses appropriately, with the first 4 columns as input and the price as output, would be a machine learning system, and particularly a supervised learning system. ML projects often result in running AI systems that are used on a mass scale.

\vspace{6pt}
In contrast, another application of the dataset is to actually let a team analyse the data in order to gain insights. They might come up with certain conclusions based on this, for example `Houses with 3 bedrooms are pricier than those with 2 bedrooms of a similar size'. This can help companies take decisions on whether to build houses with 2 or 3 bedrooms, whether to renovate houses in order to sell them for a higher price, etc. This is an example of a data science project, where the output is a set of conclusions that helps companies take business decisions.

\vspace{6pt}
Let's take the online advertising industry as another example. Personalized ads powered by AI systems (that take ad info and user info as input and determine if the user will click on the ad or not) are machine learning systems. However, when business teams analyse trends in the industry and come up with conclusions like `the textile industry is not buying a lot of ads, but could be convinced otherwise with the right sales approach', it becomes a part of data science.

\subsubsection{Deep learning}

Let's take the same example of pricing houses. We take the 4 columns on the left as input. One of the most effective ways of generating the output would be to feed these inputs into what are called neural networks.
\begin{center}\includegraphics[width=\textwidth]{neural_network.png}\end{center}
These are loosely inspired by the network of neurons in the human brain (which is also why they are referred to as artificial neural nets). This representation of ANNs bears some resemblance to the brain in that the blue circles, called artificial neurons, relay information across the network. And the resemblance ends right here. The details of how ANNs work are completely unrelated to how the human brain works.
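To make the `artificial neuron' idea a little more tangible, here is a small sketch (my own addition, not from the course) of a two-layer network for the house-price example, written out with NumPy. The weights are random placeholders; in a real system they would be tuned automatically during training.

\begin{verbatim}
# A tiny feedforward neural network written out by hand with NumPy.
# Input x = [size, bedrooms, bathrooms, renovated]; output = estimated price.
# The random weights are placeholders that training would adjust.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)   # 4 inputs -> 3 hidden neurons
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)   # 3 hidden neurons -> 1 output

def network(x):
    hidden = np.maximum(0, W1 @ x + b1)   # each neuron: weighted sum + nonlinearity
    return W2 @ hidden + b2               # the output is another weighted sum

print(network(np.array([834.0, 2, 3, 1])))
\end{verbatim}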
\vspace{6pt}
In the end, what an ANN boils down to is nothing but a big mathematical equation that maps a set of input parameters to the output. This makes them very effective for learning A $\rightarrow$ B mappings. The terms `deep learning' and `neural networks' are used almost interchangeably today.

\subsubsection{The larger picture}

If we were to construct a Venn diagram showing all the concepts above, we would probably have something like this:
\begin{center}\includegraphics[scale=0.8]{ai_terminology.png}\end{center}
To date, there is some disagreement about how data science fits into this picture. Some say AI is a subset of data science while others say the opposite. However, it is better seen as a cross-cutting discipline that comprises all of these tools from AI as well as other tools that drive business insights.

\subsection{AI companies}

In this era, it is possible for almost any company to employ a few deep learning algorithms. However, that by itself does not necessarily make it an AI company. AI companies specialize in the following:
\begin{itemize}
    \item strategic data acquisition: many AI companies have free products solely for the purpose of acquiring data that can be better monetized elsewhere
    \item unified data warehouses: pre-emptive investments in bringing data together into a unified warehouse/a small set of connected warehouses
    \item pervasive automation: inserting AI algorithms to automate certain generic tasks in order to apply human labour \& intelligence in more specialised work roles
    \item specialized roles: such as machine learning engineers (MLEs); allows for better division of labour and assigning specialized tasks to increase efficiency
\end{itemize}
It turns out that there is a systematic process by which companies can implement many of the above strategies to ensure that they use AI to their maximum benefit:
\begin{enumerate}
    \item execute pilot projects to gain momentum and get a better sense of what AI can and cannot do, what types of data are useful, etc
    \item bring together an AI team and provide extensive AI training to engineers as well as managers \& executives
    \item develop an AI strategy and build IT infrastructure based on dynamic feedback from the AI team
    \item align internal and external communications so that others in the company hierarchy (shareholders, customers, etc) know how to navigate the rise of AI
\end{enumerate}

\subsection{Limitations of AI}

Before committing to an AI project, it is important to check whether it is feasible. While the success stories we read in articles might make it sound like AI knows no bounds, this is far from reality. There are several limitations (currently, at least) as to what AI can and cannot do.

\vspace{6pt}
As an imperfect rule of thumb, anything that a human can do within a few seconds of thought can probably be automated using AI - for example, telling whether a phone is scratched/dented, looking around and determining the positions of cars, deciphering audio, etc. In contrast, an AI probably cannot write a 50-page report based on in-depth analysis of the stock market. Let us take a look at some more examples:
\begin{itemize}
    \item customer support automation: can sort incoming emails and redirect them to appropriate sections of customer support; cannot type out personalized responses
\end{itemize}
\begin{center}\includegraphics[width=\textwidth]{customer_support_automation.png}\end{center}
What if we try to do this anyway?
Say we have a deep learning algorithm ready and a decent-sized dataset of 1000 user emails and appropriate responses. We would get something like this:
\begin{center}
``My product is damaged." $\rightarrow$ ``Thank you for your email."\\
``Where can I write a review?" $\rightarrow$ ``Thank you for your email."\\
``How do I send a product as a gift?" $\rightarrow$ ``Thank you for your email."
\end{center}
It turns out that a sample size of 1000 (or even 100,000, as a matter of fact) is just not enough for an AI algorithm to write out appropriate and empathetic responses. In some cases, the AI may even generate gibberish, which is clearly not desired.
\begin{itemize}
    \item self-driving car: can use sensors and photographs to figure out the relative positions of other cars; cannot respond appropriately to human gestures
\end{itemize}
\begin{center}\includegraphics[width=\textwidth]{self_driving_car.png}\end{center}
One of the main reasons why this is so difficult is the sheer number of possible hand gestures humans can make. It is difficult to collect data of tens of thousands of people performing gestures. And again, if we try to do it anyway, the consequences of failure would be even harsher than in the customer support scenario above.
\begin{itemize}
    \item X-ray diagnosis: can diagnose diseases from around 10,000 labelled images (difficult to collect for rare diseases); cannot diagnose diseases based on a small set of images in a medical textbook (a human doctor can do this, however)
\end{itemize}
In the context of X-ray diagnosis, we can see another weakness of AI: dealing with types of data it has not been trained on. Let's say the sample data contains high quality X-ray images. The AI algorithm will most likely fail when faced with poor-quality X-ray scans or images from even a slightly defective machine.

\vspace{6pt}
In the end, there are no hard and fast rules about what AI can or cannot do. Most of the time, AI projects require some weeks of technical diligence to figure out their feasibility. However, keeping the above points in mind, one should be able to make a fair judgement regarding the same.

\subsection{Understanding deep learning}

The terms deep learning and neural networks are used almost interchangeably in AI. Let us use an example of demand prediction to try and understand what neural networks really are.

\vspace{6pt}
Suppose a t-shirt company wants to know how many units they can expect to sell based on their selling price. The required dataset might be in the form of a demand curve, where the higher the price, the lower the demand. This form of curve can be used to train what is perhaps the simplest possible neural network.
\begin{center}\includegraphics{deep_learning1.png}\end{center}
All this single-neuron network does is compute the curve shown and `learn' it in order to map any value of price to the appropriate value of demand. A single neuron can be thought of as a Lego brick, and a neural network as a very complicated stack, often in multiple layers, of such bricks.

\vspace{6pt}
Let's look at a more complicated example. Suppose that instead of just the price, we have more variables like shipping cost, marketing cost and material. Then we will have multiple factors that influence demand, like affordability, consumer awareness and perceived quality.
We might then have a slightly more complicated neural net like the one below:
\begin{center}\includegraphics{deep_learning2.png}\end{center}
This slightly more complicated neural network maps the 4 input parameters to the output, which is the demand.

\vspace{6pt}
From the way we have discussed neural networks above, it appears as if we have to figure out the key intermediate factors (affordability, awareness and perceived quality) ourselves. However, things do not work this way. One of the best things about neural networks is that we only have to provide them with the input and the output $-$ all of the stuff in the middle, they figure out by themselves. The network automatically `learns' and completely trains itself to find the most accurate possible function that maps from the input to the output.

\vspace{6pt}
With this slightly more advanced definition of neural networks, let us try to understand an actual practical application of neural networks in face recognition.
\begin{center}\includegraphics{face_recognition.png}\end{center}
When we look at a face, we see certain features like eyes, expression, etc. What a neural network sees is just the RGB values of each and every pixel in the image $-$ millions of numbers. Typically, when you give it an image, the neurons in the earlier parts of the network will learn to detect edges in pictures and later learn to detect parts of objects. After they learn to detect eyes and noses and the shapes of cheeks and mouths, the neurons in later parts of the network will learn to detect different shapes of faces and finally will put all this together to output the identity of the person in the image.

\hrulefill
\begin{center}\textbf{END OF WEEK 1}\end{center}
This is the end of the course notes from Week 1. Keep reading for notes from the following weeks, or spend some time gaining further insight into the previously discussed topics.
\hrulefill

\section{Building AI Projects}

So far we have covered the basics of AI and machine learning. But how do we put this technology to use in a project? Let's take a look.

\subsection{Workflow of a machine learning project}

There are 3 basic steps when building a machine learning project. Let us take speech recognition as an example, particularly Google speech recognition. How do you build a system that recognizes the words `Ok Google'?
\begin{enumerate}
    \item Collect data: involves collecting some audio clips of people saying the words `Google' and `Ok' (and lots of other words too - we want speech beyond `Ok Google' to be recognized too)
    \item Train the model: use an ML algorithm to learn input to output mappings
    \begin{itemize}
        \item[$-$] often the first attempt doesn't work very well
        \item[$-$] need to keep on iterating over the algorithm until it is good enough
    \end{itemize}
    \item Deploy the model: package the software into a product and ship it
    \begin{itemize}
        \item[$-$] may not work as well initially due to a lot of new data (eg: if the sample dataset was from American users, the AI may initially not recognize `Ok Google' from Indian users as well)
        \item[$-$] get back user data (while complying with privacy regulations) to maintain and update the model
    \end{itemize}
\end{enumerate}
Let us revisit this process in another example, that of self-driving cars:
\begin{enumerate}
    \item Collect data: sample images and, for each of the images, the positions of nearby cars
    \item Train the model: invariably the first model won't work well (eg: it may detect trees or rocks as cars initially); need to keep iterating until the model is good enough according to safety standards
    \item Deploy the model: must be done in ways that preserve safety; get new data back (eg: new types of vehicles like tow trucks, auto rickshaws, etc) and update the model continually to the point that it can be released to the commercial market
\end{enumerate}

\subsection{Workflow of a data science project}

Unlike an ML project, the output of a data science project is a set of insights that can be used to influence business decisions. Naturally, it follows that such projects have a different workflow compared to ML projects. Let us take the example of an e-commerce platform that sells coffee mugs. There might be several steps a user has to perform to buy the product.
\begin{center}
\includegraphics[width=\textwidth]{ecommerce_steps.png}
\end{center}
As a salesperson, it is our job to make sure that the majority of users get through all these steps. This analysis is done in a series of steps:
\begin{enumerate}
    \item Collect data: gather user info (country, time, product they checked out, price they were offered, where they quit in the buying process)
    \item Analyse data: get a data science team to work on the dataset
    \begin{itemize}
        \item[$-$] initially, the team might have a lot of ideas as to why users are not motivated to buy the product
        \item[$-$] need to iterate the analysis to get good insights and find out the major causes (eg: shipping costs are too high, so users quit at the checkout page)
    \end{itemize}
    \item Suggest hypotheses \& actions: the data science team presents the insights and suggests any suitable actions (eg: incorporate part of the shipping costs into the product cost)
    \begin{itemize}
        \item[$-$] deploy changes to the product design and get new user data back (eg: users overseas may now buy more, but locals may not due to the rise in base price)
        \item[$-$] re-analyse new data continuously to possibly come up with even better hypotheses/suggestions
    \end{itemize}
\end{enumerate}
Again, let us briefly discuss this framework in another context: optimizing a manufacturing line for coffee mugs.
We will want to make sure that as few defective mugs as possible are produced:
\begin{center}\includegraphics[scale=0.9]{manufacturing_steps.png}\end{center}
\begin{enumerate}
    \item Collect data: data about different types of clay (suppliers, mixing times, moisture content, etc) and about different batches of mugs (temperature of kiln, duration in kiln, defects per batch, etc)
    \item Analyse data: ask the data science team to analyse the data to find the major sources of defects (eg: too high a temperature might lead to softening of the clay and cause mugs to crack)
    \item Suggest hypotheses \& actions: change operations (eg: vary humidity and temperature based on the time of day), get dynamic feedback from the manufacturing output and take further actions if necessary
\end{enumerate}

\subsection{Impact of data on job functions}

The digitization of society means that more and more data is being stored in digital formats. Due to this, almost all jobs have been or will be impacted by the advent of machine learning and data science. Let us see briefly how data science \& ML have made or are making their way into different industries:
\begin{center}
\begin{tabular}{|p{10em}|p{17em}|p{17em}|}
\hline
 & \textbf{Data Science} & \textbf{Machine Learning} \\
\hline \textsc{Sales} & optimising a sales funnel (refer Section 3.2) & automated lead sorting - prioritize certain marketing leads over others (eg: contact the CEO of a large company before an intern at a small company)\\
\hline \textsc{Manufacturing} & optimising a manufacturing line (refer Section 3.2) & visual product inspection (AI algorithms can learn to figure out if products are defective or not)\\
\hline \textsc{Recruiting} & optimising the recruiting process (analysing why people are not making it to certain stages or why too many people are making it) & automated resume screening based on a sample dataset of resumes and past selection decisions (which must be fair, ethical and free of bias)\\
\hline \textsc{Marketing} & A/B testing - launching 2 versions of a website to find out what appeals to consumers & customized product recommendations to significantly increase sales\\
\hline \textsc{Agriculture} & crop analytics (find out what to plant and when to plant it based on market conditions, soil \& weather conditions, etc) & precision agriculture (recognize the presence of weeds through images/video and spray an appropriate amount of weed killer)\\
\hline
\end{tabular}
\end{center}
Of course, this list is not exhaustive and there are many more industries which have seen or will see the impact of AI soon enough.

\subsection{Choosing an AI project}

There are definitely a lot of things we can try to do with AI, but how do we choose an AI project? Let us discuss a general framework for choosing one.
Many of these points have already been discussed, but let us revisit them in this context: \begin{itemize} \item feasibility and value addition: \begin{itemize} \item must be feasible to do with AI and also add value to the business/application \item brainstorming with cross-functional teams (comprising both AI experts as well as business domain experts) to narrow down projects \end{itemize} \item brainstorming framework: \begin{itemize} \item think about automating tasks rather than entire jobs (eg: for radiologists, AI might be useful in X-ray diagnosis but not as useful in consulting with other doctors or patients) \item consider main drivers in business value and try to augment them using AI to increase the scale and productivity of the business \item consider tasks which are particularly painstaking for humans and try to automate them if possible \end{itemize} \item do not insist on acquisition of big data \begin{itemize} \item try to make progress with small datasets and get dynamic feedback from the AI team as to what type of data to obtain further and what type of IT infrastructure to build \end{itemize} \end{itemize} After brainstorming and narrowing down to a certain list of projects, it is time to pick one to work with. Committing to an AI project requires a lot of work to see it through and hence, it is important to conduct due diligence on it. Technical diligence is used to make sure the task is feasible to carry out using AI, while business diligence revolves more around deciding how much value it is going to add and if it is worth the effort. \begin{center} \begin{tabular}{|p{21.5em}|p{21.5em}|} \hline \textsc{Technical Diligence} & \textsc{Business Diligence}\\ \hline can an AI system meet the desired performance & will automation lower costs enough?\\ \hline how much data is needed & how much revenue/efficiency it will increase\\ \hline engineering timeline (how long and how many people it will take) & will launching this project bring enough value to the business?\\ \hline & building spreadsheet financial models to estimate the value quantitatively\\ \hline \end{tabular} \end{center} Another type of diligence one should try to perform is ethical diligence (i.e. is the society/environment being harmed). Any AI project should ideally add value to society and if not, it should not cause harm at least. \vspace{6pt} Another factor we need to consider is whether we want to build or buy (or maybe do a combination of both): \begin{itemize} \item outsourcing ML projects can lead to easier access to datasets \item data science projects are more commonly done in-house since they are highly tied to the business (it takes very deep insider knowledge about the business which is unlikely to occur through outsourcing) \item try and build things that will be specialized to the project \item avoid building things that are an industry standard (eg: storage servers, computer hardware, etc) \end{itemize} \subsection{Working with an AI team} After a project is finalised, one must know how to work with an AI team in order to make sure that the project runs smoothly. Normally a business should have already an AI team but even if not, it is fairly easy to either hire AI engineers or learn a thing or two about AI yourself and get enough knowledge to get started. 
After this, we need to consider the following points while working with an AI team: \begin{itemize} \item specify the acceptance criteria (eg: goal is to detect defects in coffee mugs with 95\% accuracy) and provide the dataset to the AI team to measure the accuracy \begin{itemize} \item training set $-$ dataset containing both input and output using which the AI learns the mapping \item test set $-$ dataset with new input data which the AI team gives the algorithm to see what it outputs \end{itemize} \item do not expect 100\% accuracy $-$ AI has its limitations and so does the amount of data being fed into the training set \item keep taking feedback from the AI team as to whether the training data needs to be improved, etc \end{itemize} It is often a good idea to be in constant touch with the AI team to try and find a reasonable level of accuracy that passes technical as well as business diligence (since it might not be worth it if the AI has low accuracy). \pagebreak \subsection{Technical tools for AI teams} Let us finally discuss some of the most commonly used tools by AI teams to train and test learning algorithms. It is important to have a little knowledge about this since it gives us a better insight when communicating with the AI team. \begin{itemize} \item machine learning framework: open source frameworks like PyTorch, TensorFlow, etc make writing software for ML systems much more efficient \item research publications: AI technology breakthroughs published freely on the internet (a major source is Arxiv) and on GitHub repositories \item CPUs and GPUs: CPU does majority of the computational work, while GPU hardware is very powerful for building large neural networks \item cloud deployments: refers to renting computer servers such as from AWS, Azure, etc to use external servers to do the computation (as opposed to on-premises deployment, which means buying your own servers) \item edge deployment: putting a processor right where data is collected to process data and make quick decisions \begin{itemize} \item for a self-driving car, there is not enough time to record data, send data to a cloud server and receive a response from the server \item computation must happen quickly inside the car \end{itemize} \end{itemize} \hrulefill \begin{center} \textbf{END OF WEEK 2} \end{center} This is the end of the course notes from Week 2. Keep on reading further for notes from further weeks, or spend some time gaining further insight into the previously discussed topics. \hrulefill \pagebreak \section{Building AI in a Company} Up till now, we have discussed what AI is and how to build an AI project. Let us now discuss some complex AI projects in depth and find out how AI projects fit in the context of a company. \subsection{Smart speakers: a case study} Let us go through a brief case study of how AI software is written to respond to voice commands like `Ok Google, tell me a joke'. There is a sequence of steps (known as an AI pipeline) needed to process the command: \begin{center} \includegraphics[width=\textwidth]{smart_speaker_steps1.png} \end{center} Quite often, AI teams are split into various specialized groups and each of them focus on specific tasks in the AI pipeline. The AI pipeline is reasonably flexible in that it might change on occurrence of a slightly more complicated command like `Ok Google, set a timer for 3 minutes'. \vspace{6pt} In such a case, majority of the earlier process remains the same except the final step. 
The execution step will be separated into 2 further steps: \begin{itemize} \item extract duration $-$ look at the text and pull out the phrase corresponding to the time duration \item command execution $-$ specialized software component that can start a timer with the set duration \end{itemize} Smart speakers have a lot more functions apart from the above two and given the function we want to perform, we can figure out what the AI pipeline will look like. A major challenge for teams building a smart speaker lies in the intent recognition stage - a user can ask for a particular command in many ways (eg: tell me a joke, do you know any good jokes?, say something funny, make me laugh, etc). \vspace{6pt} To further solidify our understanding of the AI pipeline, let us look at a second case study on self-driving cars. \subsection{Self-driving cars: a case study} Let us try to construct the AI pipeline of how the AI in a self-driving car might make decisions on how to drive. \begin{center}\includegraphics[scale=0.8]{self_driving_car_steps.png}\end{center} Let us look into some details regarding car detection, pedestrian detection and motion planning: \begin{itemize} \item car detection $-$ supervised learning algorithm to input images from all sides of the car, radar info and output positions of nearby cars \item pedestrian detection $-$ very similar to car detection techniques \item motion planning $-$ outputs the path the car should take as well as safe speed to avoid collisions with other cars (as well as overtaking parked cars) \end{itemize} This is a fairly simplified pipeline of how a self-driving car might work. Most self-driving cars would have an auto navigator that relies on GPS, and also use other stuff like gyroscopes. \vspace{6pt} Another component in self-driving cars is that of trajectory prediction. Instead of just knowing the current position of cars, it is more useful to be able to predict how they are likely to move in the next few seconds (by focusing on turn indicators, brake lights, etc) to be able to avoid them even as they are moving. To drive safely we need to consider a lot of other factors, which might lead to a fairly complicated AI pipeline as below: \begin{center} \includegraphics[width=\textwidth]{self_driving_pipeline.png} \end{center} \subsection{Major roles in an AI team} In the previous example of a self-driving car, we saw a pretty complicated AI pipeline. To pull off a project like this, there are usually large AI teams divided into small groups to work on specialized tasks. 
Let's take a look at the major roles in such a team:
\begin{itemize}
    \item software engineer: write specialized software for command execution
    \item machine learning engineer: write the software responsible for A $\rightarrow$ B mappings, collect data and train neural networks by iterating over the algorithm
    \item machine learning researcher: perform research on how to extend the state of the art in ML, publish papers and maintain documentation
    \item machine learning scientist: go through academic \& research literature to find ways to apply state-of-the-art technology to the current application
    \item data scientist: examine data and gain insights; further discuss and present the insights to the team
    \item data engineer: organize data and save it in an easily accessible, secure and cost-effective way
    \item AI product manager: help decide what is feasible and valuable, and what to finally build; manage deadlines and coordinate with the team
\end{itemize}
Even though having a large team generally helps due to the possibility of specialized jobs, it is possible to get started with a relatively small team of about 3-5 people. Even basic knowledge is enough to start analysing small amounts of data and training basic ML models.

\vspace{6pt}
Up till now, we have seen what an AI team comprises, but AI teams often need to work with other departments in the company to bring out maximum efficiency. This is where the AI transformation playbook comes in.

\subsection{The AI transformation playbook}

Just having a large AI team at its disposal does not make a company a good AI company. Many of the top AI companies follow most of the points in the AI transformation playbook in some way or the other (this is a term coined by Andrew Ng, a pioneer in ML technology who also happens to be the instructor for this course, but it is essentially a list of points that, if followed, is likely to make a company good at AI):
\begin{enumerate}
    \item Execute pilot projects to gain momentum
    \begin{itemize}
        \item the initial project should be successful rather than valuable (this helps gain the confidence of other departments in the company and of investors)
        \item the project should preferably show quick traction (within 6-12 months), since this helps gain the required momentum and experience for future projects
        \item can initially outsource the AI work to gain experience before building an in-house team
    \end{itemize}
    \item Build an in-house AI team
    \begin{itemize}
        \item centralized AI team instead of each business unit having their own AI component
        \item build company-wide platforms (software tools, data infrastructure, etc)
    \end{itemize}
    \item Provide broad AI training
    \begin{itemize}
        \item executives and senior leaders should know the basics of AI and how to make resource allocation decisions
        \item AI division leaders should know how to conduct diligence on projects before selecting them and how to monitor the progress of the different teams working on a project
        \item AI engineers/trainees should be able to collect the right type of data, build algorithms based on it and then assemble the various modules of the project to ship it
    \end{itemize}
    \item Develop an AI strategy
    \begin{itemize}
        \item leverage AI to create an advantage specific to the industry sector
        \item this point is not at number one because, before planning out a strategy, it is important to have basic knowledge regarding AI (for example, one may put forth a strategy to collect a lot of data, but is that data really useful?)
        \item the virtuous cycle of AI:
        \begin{itemize}
            \item better product $\rightarrow$ more users $\rightarrow$ more user data (which can be used to update the model) $\rightarrow$ better product (and so on)
            \item difficult for newer companies to enter the market, since existing companies are likelier to have a better product and a larger user base
            \item small teams can take advantage of this by pushing into less explored industries and capitalizing on their specialization later
        \end{itemize}
        \item strategic data acquisition (eg: launching free services solely for data collection purposes)
        \item unified data warehouses let engineers connect the dots (eg: in case of customer complaints, it is better to have the manufacturing data in the same place to find out exactly what caused the fault)
    \end{itemize}
    \item Develop internal and external communications
    \begin{itemize}
        \item how AI will benefit the company must be communicated to investors so that investor relations are maintained
        \item communicate with the government to introduce AI solutions in areas like health care, and even other areas where user privacy regulations are strict
        \item user education on how to deal with the introduction of new technology
        \item may also help in recruitment procedures for the company
    \end{itemize}
\end{enumerate}
If you wish to go through the entire AI transformation playbook in much greater detail, you may do so here: \texttt{\href{https://landing.ai/ai-transformation-playbook/}{https://landing.ai/ai-transformation-playbook/}}.

\subsection{First steps and pitfalls}

Even after following all the points in the AI transformation playbook, there are companies which fall into common pitfalls while taking their first steps towards an AI project. Let us look at some of them:
\begin{itemize}
    \item expecting AI to solve everything (instead, one should give due time to technical and business diligence to figure out if the project is actually possible and valuable using AI)
    \item hiring employees of only one type, i.e. only ML engineers or only software engineers, etc (there should be a cross-functional team that pairs business talent with engineering talent)
    \item expecting AI projects to work right away (more often than not the first attempt is a disaster, but continuous iteration on the algorithm and its training data can refine it a lot)
    \item expecting traditional business approaches to work (instead, one should build a separate AI team and work with them for more dynamic progress)
    \item feeling the need to have a large team and a huge amount of data (often small teams working with small datasets can gain better insights before investing further in data collection)
\end{itemize}
Much of what we have discussed so far might sound great but can feel daunting when it comes to putting things into practice. In order to take the first step into the world of AI, it is better to get a small group (maybe a group of friends) to learn about AI instead of going in solo. It is also important to brainstorm on different projects, even if they seem too small - it is better to start small and succeed rather than to go big and fail.

\vspace{6pt}
Now that we have looked through the basics of AI technology and how to build and choose AI projects, let us go through slightly more advanced concepts like reinforcement learning, computer vision, natural language processing, etc.

\subsection{Major AI application areas}

AI today is being successfully applied in many industries involving image \& video data, language data, speech data, etc.
Let's take a deeper look into a brief survey of how AI is applied to these different areas.

\subsubsection{Computer vision}

One of the major successes of deep learning has been computer vision. To understand what computer vision is, let us look at some of its applications.

\vspace{6pt}
One of these applications is image classification/object recognition. For example, AI can take a picture and tell the user if it is of a cat. This technology is actually much more advanced than just recognizing cats $-$ there are applications to detect certain species of flowers, types of food, etc. And of course, one of the major implementations of this lies in face recognition.

\vspace{6pt}
Another type of computer vision algorithm is that of object detection. Here, rather than just recognizing an object and labelling the whole image, we try to detect the different objects present in the same image. For example, a self-driving car AI not only recognizes cars \& pedestrians but also shows their positions. It is also possible to track the live positions of cars, etc as they are moving.

\vspace{6pt}
Image segmentation takes this one step further $-$ it tells us not just where the cars are but also, for every single pixel in the image, whether that pixel is part of a car or of a pedestrian, etc. This is commonly used in X-ray scans to detect particular organs.

\subsubsection{Natural language processing}

NLP refers to AI understanding everyday human language. One application of NLP lies in text classification (eg: to take an email and categorize it as spam or not spam, or take a product description and figure out which category it belongs to).

\vspace{6pt}
One type of text classification used widely is sentiment recognition \& analysis. This type of AI can take input like ``The food was good, but the ambience can be improved'' and output an expectation of the star rating the restaurant might get.

\vspace{6pt}
A second type of NLP is information retrieval, of which web search is probably the best example. This AI will take in a search string as input and help the user find relevant documents. Companies also use this internally to search within particular sets of documentation.

\vspace{6pt}
Named entity recognition is another type of NLP, which helps pick out certain types of names in large documents. For example, the input `I am fluent in English, Spanish and Japanese' should lead the NLP algorithm to pick out English, Spanish and Japanese as language names. Such systems can also recognize phone numbers, countries, etc.

\vspace{-6pt}
NLP also features in applications of machine translation, where the AI translates an input phrase or document into a specific language that the user wants. Apart from the above, NLP can be used in a wide range of other applications like parsing, part-of-speech tagging (classifying which words in a string are nouns, verbs, etc), etc.

\subsubsection{Speech}

Modern AI technology has completely transformed how speech is processed by computers. A microphone records rapid variations in the surrounding air pressure. These variations can be represented on a plot against time on a computer (this is the information a typical digital audio waveform contains). Speech recognition deals with taking a signal like this as input and figuring out what the user said.

\vspace{6pt}
As we discussed earlier, one particular type of speech recognition is trigger word/wakeword detection.
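As a short aside (my own illustration, not part of the course), the `plot against time' described above is literally just a long list of numbers. The sketch below reads a hypothetical recording called \texttt{ok\_google.wav} and prints a few of those numbers; the file name and format (16-bit mono WAV) are assumptions made for the example.

\begin{verbatim}
# Raw audio is just a sequence of numbers sampled over time.
# "ok_google.wav" is a hypothetical 16-bit mono recording.
import wave
import numpy as np

with wave.open("ok_google.wav", "rb") as f:
    rate = f.getframerate()        # samples per second, e.g. 16000
    samples = np.frombuffer(f.readframes(f.getnframes()), dtype=np.int16)

print(rate, samples[:10])          # the first few air-pressure readings
\end{verbatim}

Every speech task discussed here starts from a sequence of numbers like this.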
Another type of speech recognition used is speaker ID, where the problem is to take an audio clip and figure out the identity of the speaker.

\vspace{6pt}
Recently, text-to-speech (TTS) has also gained a lot of traction. TTS is used in audio books and as an aid in communication for people who are unable to speak.

\subsubsection{Robotics}

One of the applications in robotics we have already seen is that of a self-driving car. The main challenges in robotics are perception (figuring out what is in the world around us), motion planning (computing a path for the robot to follow) and control (sending commands to motors \& other electric components to execute the calculated path).

\subsubsection{General machine learning}

A majority of the applications we saw dealt with unstructured data. However, ML works just as well for structured data. Structured data processing is often more specific to a company, and it is harder for humans to understand such data at first glance and/or empathize with it.

\subsection{Major AI techniques}

In the previous sections, we have mainly discussed only supervised learning. This term almost invites the question: what is unsupervised learning? Apart from this, there are other AI techniques like reinforcement learning as well.

\subsubsection{Unsupervised learning}

The best known example of unsupervised learning is that of clustering. Suppose there is a shop that specializes in selling chocolates. It may sell cheap, low quality chocolates as well as expensive but high quality ones. If consumer buying patterns are plotted on a graph, it may reveal that buyers are separated into 2 groups $-$ some buy a lot of cheap chocolates while others buy a few expensive chocolates. A clustering algorithm automatically sorts this data into 2 clusters, and this can be used for market segmentation (for example, in an area with a nearby university, college students are more likely to buy cheaper chocolates, so it would make more sense for the shop to stock the cheap ones).

\vspace{6pt}
Unlike supervised learning, an unsupervised learning algorithm is not told exactly what output we want. Instead, we simply feed in the data and ask the system to find something interesting or meaningful in it. In the above example, the algorithm just finds the different market segments without being told that they correspond to college students or working people, etc.

\vspace{6pt}
Unsupervised learning overcomes one of the major criticisms against supervised learning, which is that it requires a lot of labelled data. This opens up the possibility of AI one day learning much more like a human does, from far less data.

\subsubsection{Transfer learning}

Say we have built an algorithm for self-driving cars and start deploying it. There might be regions with different types of vehicles, like golf carts. Even though our system has been trained with a lot of data, that data will contain very few images of golf carts. Instead of building another algorithm for golf cart detection from the ground up, we can use the technique of transfer learning. This technique lets us learn from task A and use the knowledge to help with task B. It works particularly well if the dataset for task A is large.

\subsubsection{Reinforcement learning}

Let's say we want to write an algorithm for flying a helicopter autonomously. It is difficult to do this with supervised learning because, while in the case of cars we had only a few types of objects to deal with, here we have a lot more to consider.
It is very difficult to specify what the best way to fly a helicopter is when it is in a certain position. This is where reinforcement learning comes in.

\vspace{6pt}
It is quite similar to how one would train a pet. A simulation is created to let the helicopter fly around. Whenever the AI flies the helicopter well, we give it positive feedback, and when it crashes, we give it negative feedback. Over time, the AI learns how to fly the helicopter as it keeps on getting this dynamic feedback. More formally, reinforcement learning uses a `reward signal' to tell the AI whether it is doing well or poorly $-$ it automatically learns to maximize its positive rewards.

\vspace{6pt}
In addition to this, reinforcement learning has also helped build algorithms for playing strategic games like Othello or chess, and also for playing video games. One of the weaknesses of reinforcement learning is that it requires a huge amount of data $-$ the system has to generate its own experience through a very large number of trials.

\subsubsection{GANs (generative adversarial networks)}

This AI technique is relatively recent, having been introduced in 2014. GANs are very good at synthesizing images from scratch. This has caused GANs to gain a lot of traction in everything involving computer graphics, like video games and other media.

\vspace{6pt}
The general idea of GANs is that of 2 networks (a generative one and a discriminative one) pitted against each other. The generative network generates content while the discriminative network evaluates it. The job of the generative network is to increase the error rate of the discriminative network, i.e. to try and fool the discriminative network as much as possible by producing novel images, modelled on the true data distribution, that the discriminative network cannot tell apart from real ones.

\subsubsection{Knowledge graph}

One of the more underrated AI techniques is the knowledge graph. A web search about a famous personality often returns a huge list of links along with a panel on the right showing some of their general details. These details are derived from a knowledge graph, which is nothing but a database that lists people and key information about them. Many companies build such databases of celebrities, movies, hotels, etc, which help extract a lot of important information relatively quickly.

\hrulefill
\begin{center}
    \textbf{END OF WEEK 3}
\end{center}
This is the end of the course notes from Week 3. Keep reading for notes from the following weeks, or spend some time gaining further insight into the previously discussed topics.
\hrulefill
\pagebreak

\section{AI and Society}

We have already seen how much AI is capable of. It can affect people's lives on a mass scale, especially when huge companies implement AI solutions. While all this sounds great, it is also important to consider the social impact of an AI project and make sure that the project will not harm society or the environment in any manner.

\subsection{A realistic view of AI}

Given the huge impact AI is having on society, it is important for us to have a realistic view of AI and be neither too pessimistic nor too optimistic. AI has been hyped up to the point that a lot of people believe super-intelligent AI robots will completely take over the world and that it is important to invest in defending the world against evil AI bots. This hype only causes distraction and unnecessary fear.
\vspace{6pt} On the flip side, there are people who believe that we expect too much of AI and that another `AI winter' is coming (this refers to past periods where people were very excited about AI, later found it could not do what they wanted, and so abandoned it altogether). The difference between AI then and now is that AI today is creating huge economic value, and it is possible to clearly see how it will affect industries in the near future. \vspace{6pt}
We previously discussed in detail the performance limitations of AI, but here are some more issues that affect how society views AI: \begin{itemize} \item many high-performing AI systems are black boxes $-$ since people have not actually worked with the AI team, they have no first-hand insight into how the AI system does what it does, and hence may end up rejecting the value it can bring \item biased AI through biased data $-$ the behaviour of the AI is dependent on the data on which it is trained (hence, if fed biased data, the resulting AI will naturally learn this bias) \item some AI systems are open to adversarial attacks, i.e. someone deliberately trying to fool the AI \end{itemize}
\subsection{Discrimination and biases} Many AI teams today rely on the internet for access to cheap data. It is no secret that the internet is full of biases and unhealthy stereotypes, and this really affects many AI products. For example, recent research from Microsoft concludes the following: \begin{center} on asking the AI $\rightarrow$ man : woman as father : (answer), the AI answers `mother' (okay so far) \\ on asking the AI $\rightarrow$ man : woman as king : (answer), the AI answers `queen' (still reasonable) \\ on asking the AI $\rightarrow$ man : programmer as woman : (answer), the AI answers `homemaker' \end{center}
No AI team intends to train a biased AI system, but such things can really hurt how AI is viewed by society. Let us take a deeper look at how AI develops such biases: \begin{itemize} \item AI stores words as a set of numbers (these numbers are generated automatically based on statistics of how each word is used on the internet) \item `Man' may be stored as (1,1) $-$ in practice there might be hundreds or thousands of numbers, but let us work with a set of 2 numbers for now \item let's say `Programmer' is stored as (3,2) and `Woman' as (2,3), which, if plotted, looks like this \end{itemize} \begin{center}\includegraphics[scale=0.4]{word_plot.png}\end{center}
To map `Woman' to an answer, the AI system completes the parallelogram shown above and searches for the word stored at (4,4). Based on text from the internet, this word happens to be `Homemaker'. \vspace{6pt}
There are many situations where such a bias matters. For example, it is unfair if a resume-screening AI for job applications has such biases. Other types of biases also exist: \begin{itemize} \item face recognition seems to work more accurately for light-skinned people than for those with darker skin \item bank loan approval systems have ended up discriminating against some ethnic minorities \item web search algorithms showing men in leading job positions may have a negative effect on women trying to pursue similar careers \end{itemize}
The AI community is continuously working to make AI systems as free from bias as possible. Researchers have learned that when an AI learns a word as a set of numbers, only a few of those numbers contribute to the bias. If we zero out these numbers, the bias can be greatly reduced, as shown in the sketch below.
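To make the vector arithmetic and the `zeroing out' idea above concrete, here is a minimal Python sketch using the toy two-number word representations from the example. The word list, the stored values and the simple gender direction are purely illustrative; real systems learn embeddings with hundreds of dimensions.
\begin{verbatim}
# Illustrative sketch only: toy 2-number word vectors from the example
# above (real systems use hundreds of dimensions and learned values).
import numpy as np

vec = {
    "man":        np.array([1.0, 1.0]),
    "woman":      np.array([2.0, 3.0]),
    "programmer": np.array([3.0, 2.0]),
}

# Analogy "man : programmer as woman : ?"  ->  programmer - man + woman
target = vec["programmer"] - vec["man"] + vec["woman"]
print(target)   # [4. 4.]  -> the nearest stored word may be "homemaker"

# Debiasing idea: remove the part of a word vector that lies along a
# "gender direction", here approximated by (woman - man).
gender = vec["woman"] - vec["man"]
gender = gender / np.linalg.norm(gender)
neutral = vec["programmer"] - np.dot(vec["programmer"], gender) * gender
print(neutral)  # "programmer" with its gender component zeroed out
\end{verbatim}
In practice this kind of debiasing is applied to learned embeddings with many dimensions, and deciding which words should be neutralized is itself an active research question.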
Another solution is to use less biased data, or data that has reasonably sized samples from multiple ethnicities and all genders. \vspace{6pt}
Secondly, many AI teams are subjecting their systems to auditing processes, wherein a third-party auditing team evaluates the fairness of the AI system and provides suggestions on what types of biases exist. This greatly increases the odds of a bias (or any other performance issue) being spotted so that it can be fixed. \vspace{6pt}
Finally, it also helps if AI teams have a diverse workforce. This allows people to spot problems that might not look like problems to others. With these measures, the bias in AI can be greatly mitigated.
\subsection{Adversarial attacks on AI} While AI is particularly good at reading things that are illegible to humans, like barcodes and QR codes, it can be fooled by slight changes to data that no human would be fooled by. A major reason for this is that AI processes data as raw numerical values. For example, even minor changes in the pixel values of an image can cause an AI to classify objects as something else entirely. \begin{center}\includegraphics[scale=0.6]{adversarial_attacks.png}\end{center}
There are other ways in which people can fool AI systems. For example, some people were able to design a pair of funky glasses that made a face recognition system falsely recognise the wearer as the actress Milla Jovovich. As another example, AI sometimes fails to recognize stop signs if stickers or graffiti are applied over them. \vspace{6pt}
There are various defenses against adversarial attacks, but these incur high costs (and may also make the system run slower). Neural networks can be modified to make them somewhat harder to attack. Unfortunately, this is an area where no amount of advancement is enough, since attackers will constantly come up with new strategies to try and fool AI systems.
\subsection{Adverse uses of AI} While most AI is designed to make society better off, AI can also be used in adverse ways like those listed below: \begin{itemize} \item deepfakes $-$ synthesizing videos of people doing things they never did, in order to target individuals or companies \item generating fake comments $-$ especially harmful in the case of business statements and political matters \item oppressive surveillance $-$ performing surveillance even in cases where it violates privacy laws \end{itemize}
It is very difficult to stop uses of AI like these. Take the first example: even though technologies exist to identify deepfakes, fake news generally spreads much faster than the truth on social media platforms.
\subsection{AI and developing economies} While big AI products are generally built in developed countries, AI is making its way into developing countries and impacting them as well. In fact, many developing economies have seen success by using AI to capitalize further on their existing strengths (e.g. a country may be strong in agriculture, textiles and so on). This allows them to shift human resources to building infrastructure in other sectors that they might be weak at. \vspace{6pt}
Because education in AI is still immature everywhere, it may also help developing economies to invest in AI education so that they can build their own AI workforce and contribute to the overall economic value created by AI in the future.
\subsection{AI and jobs} Even before the advent of AI, automation had a large impact on jobs.
Now, with AI, the set of things we can automate is suddenly a whole lot larger. Due to this, many jobs are being replaced through automation while many new jobs are simultaneously being created. According to a study by the McKinsey Global Institute, by 2030 AI will have displaced between 400 and 800 million jobs while also having created between 555 and 890 million jobs. \vspace{6pt}
Of course, certain jobs are more likely to be displaced by AI than others. Since AI projects usually aim at applying AI to certain tasks rather than to entire jobs, it is less likely that highly complex jobs will be displaced by AI in the near future. Here is a brief list of jobs sorted according to their likelihood of being displaced by AI: \begin{center}\includegraphics[scale=0.6]{automation_risk.png}\end{center}
It should not come as a surprise that many of the top jobs in the list above involve routine, repetitive tasks. On the other hand, jobs involving fairly diverse tasks or social interaction are much less susceptible to automation. \vspace{6pt}
Finally, let us discuss some solutions that can help people navigate the impact of AI on jobs: \begin{itemize} \item conditional basic income $-$ governments can provide a basic income or change tax patterns to give people an incentive to learn and invest in their own development \item lifelong learning $-$ people should keep learning so that they are in a better position to adapt to and take advantage of the new jobs being created \item political solutions $-$ incentives for new job creation (i.e. helping people move into AI-related jobs) \end{itemize}
If a person can combine their knowledge of AI with their domain knowledge, they become uniquely qualified to do very valuable work.
\hrulefill \begin{center} \textbf{END OF WEEK 4} \end{center} This is the end of the notes for this course. If you have any feedback or wish to browse through notes for other courses, you may do so using the links on my website $-$ \texttt{\href{https://omprabhu31.github.io/}{https://omprabhu31.github.io/}}. \hrulefill \end{document}
%% Beginning of file 'sample631.tex' %% %% Modified 2021 March %% %% This is a sample manuscript marked up using the %% AASTeX v6.31 LaTeX 2e macros. %% %% AASTeX is now based on Alexey Vikhlinin's emulateapj.cls %% (Copyright 2000-2015). See the classfile for details. %% AASTeX requires revtex4-1.cls and other external packages such as %% latexsym, graphicx, amssymb, longtable, and epsf. Note that as of %% Oct 2020, APS now uses revtex4.2e for its journals but remember that %% AASTeX v6+ still uses v4.1. All of these external packages should %% already be present in the modern TeX distributions but not always. %% For example, revtex4.1 seems to be missing in the linux version of %% TexLive 2020. One should be able to get all packages from www.ctan.org. %% In particular, revtex v4.1 can be found at %% https://www.ctan.org/pkg/revtex4-1. %% The first piece of markup in an AASTeX v6.x document is the \documentclass %% command. LaTeX will ignore any data that comes before this command. The %% documentclass can take an optional argument to modify the output style. %% The command below calls the preprint style which will produce a tightly %% typeset, one-column, single-spaced document. It is the default and thus %% does not need to be explicitly stated. %% %% using aastex version 6.3 % \documentclass[linenumbers]{aastex631} \documentclass{aastex63} %% The default is a single spaced, 10 point font, single spaced article. %% There are 5 other style options available via an optional argument. They %% can be invoked like this: %% %% \documentclass[arguments]{aastex631} %% %% where the layout options are: %% %% twocolumn : two text columns, 10 point font, single spaced article. %% This is the most compact and represent the final published %% derived PDF copy of the accepted manuscript from the publisher %% manuscript : one text column, 12 point font, double spaced article. %% preprint : one text column, 12 point font, single spaced article. %% preprint2 : two text columns, 12 point font, single spaced article. %% modern : a stylish, single text column, 12 point font, article with %% wider left and right margins. This uses the Daniel %% Foreman-Mackey and David Hogg design. %% RNAAS : Supresses an abstract. Originally for RNAAS manuscripts %% but now that abstracts are required this is obsolete for %% AAS Journals. Authors might need it for other reasons. DO NOT %% use \begin{abstract} and \end{abstract} with this style. %% %% Note that you can submit to the AAS Journals in any of these 6 styles. %% %% There are other optional arguments one can invoke to allow other stylistic %% actions. The available options are: %% %% astrosymb : Loads Astrosymb font and define \astrocommands. %% tighten : Makes baselineskip slightly smaller, only works with %% the twocolumn substyle. %% times : uses times font instead of the default %% linenumbers : turn on lineno package. %% trackchanges : required to see the revision mark up and print its output %% longauthor : Do not use the more compressed footnote style (default) for %% the author/collaboration/affiliations. Instead print all %% affiliation information after each name. Creates a much %% longer author list but may be desirable for short %% author papers. %% twocolappendix : make 2 column appendix. %% anonymous : Do not show the authors, affiliations and acknowledgments %% for dual anonymous review. %% %% these can be used in any combination, e.g. 
%% %% \documentclass[twocolumn,linenumbers,trackchanges]{aastex631} %% %% AASTeX v6.* now includes \hyperref support. While we have built in specific %% defaults into the classfile you can manually override them with the %% \hypersetup command. For example, %% %% \hypersetup{linkcolor=red,citecolor=green,filecolor=cyan,urlcolor=magenta} %% %% will change the color of the internal links to red, the links to the %% bibliography to green, the file links to cyan, and the external links to %% magenta. Additional information on \hyperref options can be found here: %% https://www.tug.org/applications/hyperref/manual.html#x1-40003 %% %% Note that in v6.3 "bookmarks" has been changed to "true" in hyperref %% to improve the accessibility of the compiled pdf file. %% %% If you want to create your own macros, you can do so %% using \newcommand. Your macros should appear before %% the \begin{document} command. %% \newcommand{\vdag}{(v)^\dagger} \newcommand\aastex{AAS\TeX} \newcommand\latex{La\TeX} %% Reintroduced the \received and \accepted commands from AASTeX v5.2 %\received{March 1, 2021} %\revised{April 1, 2021} %\accepted{\today} %% Command to document which AAS Journal the manuscript was submitted to. %% Adds "Submitted to " the argument. %\submitjournal{PSJ} %% For manuscript that include authors in collaborations, AASTeX v6.31 %% builds on the \collaboration command to allow greater freedom to %% keep the traditional author+affiliation information but only show %% subsets. The \collaboration command now must appear AFTER the group %% of authors in the collaboration and it takes TWO arguments. The last %% is still the collaboration identifier. The text given in this %% argument is what will be shown in the manuscript. The first argument %% is the number of author above the \collaboration command to show with %% the collaboration text. If there are authors that are not part of any %% collaboration the \nocollaboration command is used. This command takes %% one argument which is also the number of authors above to show. A %% dashed line is shown to indicate no collaboration. This example manuscript %% shows how these commands work to display specific set of authors %% on the front page. %% %% For manuscript without any need to use \collaboration the %% \AuthorCollaborationLimit command from v6.2 can still be used to %% show a subset of authors. % %\AuthorCollaborationLimit=2 % %% will only show Schwarz & Muench on the front page of the manuscript %% (assuming the \collaboration and \nocollaboration commands are %% commented out). %% %% Note that all of the author will be shown in the published article. %% This feature is meant to be used prior to acceptance to make the %% front end of a long author article more manageable. Please do not use %% this functionality for manuscripts with less than 20 authors. Conversely, %% please do use this when the number of authors exceeds 40. %% %% Use \allauthors at the manuscript end to show the full author list. %% This command should only be used with \AuthorCollaborationLimit is used. %% The following command can be used to set the latex table counters. It %% is needed in this document because it uses a mix of latex tabular and %% AASTeX deluxetables. In general it should not be needed. %\setcounter{table}{1} %%%%% AUTHORS - PLACE YOUR OWN PACKAGES HERE %%%%% % Only include extra packages if you really need them. 
Common packages are: \usepackage{graphicx} % Including figure files \usepackage{amsmath} % Advanced maths commands % \usepackage{amssymb} % Extra maths symbols %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%% AUTHORS - PLACE YOUR OWN COMMANDS HERE %%%%% % Please keep new commands to a minimum, and use \newcommand not \def to avoid % overwriting existing commands. Example: % Only include extra packages if you really need them. %\usepackage{float} \usepackage[normalem]{ulem} \usepackage{booktabs} \usepackage{mathtools} \usepackage{adjustbox} % to adjust table sizes only when they otherwise go beyond textwidth \usepackage{tabularx} % % \usepackage{array} % for making reference with '&' symbol in tables work \usepackage{color} \usepackage{acronym} \usepackage{xspace} \usepackage{enumitem} % to make enumerate normal nrs http://ctan.org/pkg/enumitem \usepackage{calc}% http://ctan.org/pkg/calc \usepackage[caption=false]{subfig} \usepackage{ifthen} % to switch on/off comments with \newcommand \usepackage{wrapfig} % for wrapping figures \usepackage{import} % to import tex files as subsections within this documents \usepackage{multirow} % to allow multirows and columns in tables %% If you want to create your own macros, you can do so %% using \newcommand. Your macros should appear before %% the \begin{document} command. %% % \newcommand{\vdag}{(v)^\dagger} % \newcommand\aastex{AAS\TeX} % \newcommand\latex{La\TeX} %%%%%%%%%%%%%%%%%%%%%%%% \newcommand{\floor}[1]{\textbf{\textcolor{magenta}{#1}}} % \renewenvironment{floor}{}{} % to deactivate \floor{} % \newcommand{\todo}[1]{\textcolor{red}{[To do: #1]}} \newcommand{\question}[1]{\textcolor{green}{[Question: #1]}} \newcommand{\edo}[1]{\textcolor{Dandelion}{[Edo: #1]}} \definecolor{tomcol}{rgb}{0.53,0.00,1.00} \newcommand{\tom}[1]{\textcolor{tomcol}{[Tom: #1]}} \newcommand{\ilya}[1]{\textcolor{blue}{#1}} \newcommand{\Mch}[1]{{\color{cyan}#1}} \newcommand{\MCh}[1]{{\color{cyan}#1}} \definecolor{ochre}{rgb}{0.8, 0.47, 0.13} \newcommand{\avg}[1]{{\color{ochre}#1}} % % make comments invisible % \newcommand{\switch}[1]{% % \ifthenelse{\equal{#1}{0}}{\renewcommand{\floor}[1]{}}{} % \ifthenelse{\equal{#1}{0}}{\renewcommand{\edo}[1]{}}{} % \ifthenelse{\equal{#1}{0}}{\renewcommand{\todo}[1]{}}{} % \ifthenelse{\equal{#1}{0}}{\renewcommand{\question}[1]{}}{} % \ifthenelse{\equal{#1}{0}}{\renewcommand{\tom}[1]{}}{} % \ifthenelse{\equal{#1}{0}}{\renewcommand{\ilya}[1]{}}{} % \ifthenelse{\equal{#1}{0}}{\renewcommand{\Mch}[1]{}}{} % \ifthenelse{\equal{#1}{0}}{\renewcommand{\MCh}[1]{}}{} % \ifthenelse{\equal{#1}{0}}{\renewcommand{\avg}[1]{}}{}} % \switch{0} \usepackage{outlines} % for itemize in itemize %%\newcommand{\floor}[1]{\textcolor{blue}{#1}} %Todo/remarks by Floor %commands \newcommand\Fiducial{\texttt{Fiducial }} \newcommand\rate{\mathcal{R}} \newcommand\COMPAS{{\sc{COMPAS }}} \newcommand{\standard}{\texttt{standard }} \newcommand{\pluseq}{\mathrel{+}=} %-- Constants \newcommand\hubbleTimeGyrs{14.03} \newcommand{\monei}{\ensuremath{m_{1,\rm{ZAMS}}}\xspace} \newcommand{\mtwoi}{\ensuremath{m_{2,\rm{ZAMS}}}\xspace} \newcommand{\monef}{\ensuremath{m_{1,\rm{f}}}\xspace} \newcommand{\mtwof}{\ensuremath{m_{2,\rm{f}}}\xspace} \newcommand{\ai}{\ensuremath{a_{\rm{ZAMS}}}\xspace} \newcommand{\qi}{\ensuremath{q_{\rm{ZAMS}}}\xspace} % \newcommand{\Zi}{\ensuremath{Z_{\rm{i}}}\xspace} \newcommand{\Zi}{\ensuremath{Z}\xspace} \newcommand{\vk}{\ensuremath{v_{\rm{k}}}\xspace} \newcommand{\thetak}{\ensuremath{{\theta}_{\rm{k}}}\xspace} 
\newcommand{\phik}{\ensuremath{{\phi}_{\rm{k}}}\xspace} \newcommand{\ei}{\ensuremath{{e}_{\rm{i}}}\xspace} \newcommand{\Rsun}{\ensuremath{\,\rm{R}_{\odot}}\xspace} \newcommand{\km}{\ensuremath{\,\rm{km}}\xspace} \newcommand{\kms}{\ensuremath{\,\rm{km}\,\rm{s}^{-1}}\xspace} \newcommand{\Msun}{\ensuremath{\,\rm{M}_{\odot}}\xspace} \newcommand{\Lsun}{\ensuremath{\,\rm{L}_{\odot}}\xspace} \newcommand{\kpc}{\ensuremath{\,\rm{kpc}}\xspace} \newcommand{\Mpc}{\ensuremath{\,\rm{Mpc}}\xspace} \newcommand{\Zsun}{\ensuremath{\,\rm{Z}_{\odot}}\xspace} %\newcommand{\fbin}{\ensuremath{f_{\rm{bin}}}} %\newcommand{\vdag}{(v)^\dagger} \newcommand{\AU}{\ensuremath{\,\mathrm{AU}}\xspace} \newcommand{\Myr}{\ensuremath{\,\mathrm{Myr}}\xspace} \newcommand{\yr}{\ensuremath{\,\mathrm{yr}}\xspace} \newcommand{\yrs}{\ensuremath{\,\mathrm{yr}}\xspace} \newcommand{\Myrs}{\ensuremath{\,\mathrm{Myr}}\xspace} \newcommand{\Gyr}{\ensuremath{\,\mathrm{Gyr}}\xspace} \newcommand{\Gyrs}{\ensuremath{\,\mathrm{Gyr}}\xspace} \newcommand{\Kelvin}{\ensuremath{\,\mathrm{K}}\xspace} \newcommand{\yearmin}{\ensuremath{\,\rm{yr}^{-1}}\xspace} \newcommand{\MpcminThree}{\ensuremath{\,\rm{Mpc}^{-3}}\xspace} \newcommand{\GpcminThree}{\ensuremath{\,\rm{Gpc}^{-3}}\xspace} \newcommand{\Hubblemin}{\ensuremath{\mathcal{H}_0^{-1}}\xspace} \newcommand{\Hubble}{\ensuremath{\mathcal{H}_0}\xspace} \newcommand{\MSFR}{\ensuremath{{M}_{\rm{SFR}}}\xspace} % \newcommand{\SFRD}{\text{SFRD}\ensuremath{(Z_{\rm{i}},z)}\xspace} \newcommand{\SFRD}{\text{SFRD}\ensuremath{(Z,z)}\xspace} \newcommand{\tdelay}{\ensuremath{{t}_{\rm{delay}}}\xspace} \newcommand{\tDCO}{\ensuremath{{t}_{\rm{DCO}}}\xspace} \newcommand{\ts}{\ensuremath{{t}_{\rm{s}}}\xspace} \newcommand{\tevolve}{\ensuremath{{t}_{\rm{evolve}}}\xspace} \newcommand{\tform}{\ensuremath{{t}_{\rm{form}}}\xspace} \newcommand{\tmerger}{\ensuremath{{t}_{\rm{m}}}\xspace} \newcommand{\tinspiral}{\ensuremath{{t}_{\rm{inspiral}}}\xspace} \newcommand{\thubble}{\ensuremath{{t}_{\mathcal{H}}}\xspace} \newcommand{\tdet}{\ensuremath{{t}_{\rm{det}}}\xspace} \newcommand{\Nform}{\ensuremath{{N}_{\rm{form}}}\xspace} \newcommand{\Ndet}{\ensuremath{{N}_{\rm{det}}}\xspace} \newcommand{\Nmerger}{\ensuremath{{N}_{\rm{merger}}}\xspace} \newcommand{\Pdet}{\ensuremath{{P}_{\rm{det}}}\xspace} \newcommand{\fbin}{\ensuremath{{f}_{\rm{bin}}}\xspace} \newcommand{\Mchirp}{\ensuremath{{\mathcal{m}}_{\rm{chirp}}}\xspace} \newcommand{\Vc}{\ensuremath{{V}_{\rm{c}}}\xspace} \newcommand{\DL}{\ensuremath{{D}_{\rm{L}}}\xspace} \newcommand{\Dc}{\ensuremath{{D}_{\rm{c}}}\xspace} %$M_{\rm{SFR}}$ $t_{\rm{delay}} = t_{\rm{form}} + t_{\rm{merger}}$ %\ts d \Vc \newcommand\myeq{\stackrel{\mathclap{\normalfont\mbox{def}}}{=}} \newcommand*\diff{\mathop{}\!\mathrm{d}} \newcommand{\CMP}{C21} \newcommand{\PI}{paper~I} % \newcommand{\PIII}{Broekgaarden et al. 
(in prep.)} % final properties of BHNS mergers: \newcommand{\mnsf}{\ensuremath{m_{\rm{NS}}}\xspace} \newcommand{\mnsfone}{\ensuremath{m_{\rm{NS,1}}}\xspace} \newcommand{\mnsftwo}{\ensuremath{m_{\rm{NS,2}}}\xspace} \newcommand{\mbhf}{\ensuremath{m_{\rm{BH}}}\xspace} \newcommand{\mbhfone}{\ensuremath{m_{\rm{BH,1}}}\xspace} \newcommand{\mbhftwo}{\ensuremath{m_{\rm{BH,2}}}\xspace} \newcommand{\mtotf}{\ensuremath{m_{\rm{tot}}}\xspace} % \newcommand{\mchirpf}{\ensuremath{{\mathcal{M}}_{\rm{c}}}\xspace} \newcommand{\mchirpf}{\ensuremath{{m}_{\rm{chirp}}}\xspace} \newcommand{\af}{\ensuremath{a_{\rm{f}}}\xspace} \newcommand{\qf}{\ensuremath{q_{\rm{f}}}\xspace} \newcommand{\ef}{\ensuremath{{e}_{\rm{f}}}\xspace} \newcommand{\chibh}{\ensuremath{{\chi}_{\rm{1}}}\xspace} \newcommand{\Rns}{\ensuremath{{R}_{\rm{NS}}}\xspace} \newcommand{\Rgwone}{\ensuremath{\mathcal{R}_{\rm{GW200115}}}\xspace} \newcommand{\Rgwzero}{\ensuremath{\mathcal{R}_{\rm{GW200105}}}\xspace} \newcommand{\Rbhns}{\ensuremath{\mathcal{R}_{\rm{BHNS}}}\xspace} \newcommand{\Rbhbh}{\ensuremath{\mathcal{R}_{\rm{BHBH}}}\xspace} \newcommand{\Rnsns}{\ensuremath{\mathcal{R}_{\rm{NSNS}}}\xspace} %% MODELS \newcommand{\mAzero}{\ensuremath{\rm{A}000}\xspace} \newcommand{\mAxyz}{\ensuremath{\rm{A}xyz}\xspace} \newcommand{\Nmodels}{\ensuremath{560}\xspace} \newcommand{\NmodelsBPS}{\ensuremath{20}\xspace} \newcommand{\NmodelsMSSFR}{\ensuremath{28}\xspace} %%% RATE COMMAND \newcommand{\RateIntrinsicZero}{\ensuremath{\mathcal{R}_{\rm{m}}^{0}}\xspace} \newcommand{\RateObserved}{\ensuremath{\mathcal{R}_{\rm{det}}}\xspace} \newcommand{\gwone}{\ensuremath{\rm{GW200115}}\xspace} \newcommand{\gwzero}{\ensuremath{\rm{GW200105}}\xspace} \newcommand{\Gpcyr}{\ensuremath{\,\rm{Gpc}^{-3}\,\rm{yr}^{-1}}\xspace} \newcommand{\model}{P112\xspace} %-- abbreviations %List of abbreviations \acrodef{GSMF}{galaxy stellar mass function, the number density of galaxies per logarithmic mass bin,} \acrodef{MZR}{mass-metallicity relation} \acrodef{SFRD}{star formation rate density} \acrodef{BHNS}{black hole--neutron star} \acrodef{NSNS}{binary neutron star} \acrodef{BHBH}{binary black hole} \acrodef{DCO}{double compact object} \acrodef{NS}{neutron star} \acrodef{BH}{black hole} \acrodef{BH--NS}{black hole-neutron star} \acrodef{GRB}{gamma-ray burst} \acrodef{RLOF}{Roche-lobe overflow} \acrodef{CE}{common envelope} \acrodef{GW}{gravitational-wave} \acrodefplural{GW}[GWs]{gravitational waves} \acrodef{SN}{supernova} \acrodefplural{SN}[SNe]{supernovae} \acrodef{ECSN}{electron-capture SN} \acrodef{PISN}{pair-instability SN} \acrodefplural{ECSN}[ECSNe]{electron-capture SN} \acrodef{USSN}{ultra-stripped SN} \acrodefplural{USSN}[USSNe]{ultra-stripped SN} \acrodef{CCSN}{core-collapse SN} \acrodefplural{CCSN}[CCSNe]{core-collapse SN} \acrodef{COMPAS}{ Compact Object Mergers: Population Astrophysics and Statistics} \acrodef{SFRD}{metallicity-specific star formation rate density} \acrodef{ZAMS}{zero-age main sequence} \hyphenation{COMPAS} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%% TITLE PAGE %%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% %% The following section outlines numerous optional output that %% can be displayed in the front matter or as running meta-data. %% %% If you wish, you may supply running head information, although %% this information may be modified by the editorial offices. 
\shorttitle{Formation of GW200115 and GW200105} \shortauthors{Broekgaarden $\&$ Berger} %% \begin{document} % Title of the paper, and the short title which is used in the headers. % Keep the title short and informative. \title{Formation of the First Two Black Hole – Neutron Star Mergers (GW200115 and GW200105) from Isolated Binary Evolution} % The list of authors, and the short list which is used in the headers. % If you need two or more lines of authors, add an extra line using \newauthor \author[0000-0002-4421-4962]{Floor S. Broekgaarden}\thanks{E-mail: [email protected]} % and Edo Berger,$^{1}$\\ % List of institutions \affiliation{Center for Astrophysics | Harvard $\&$ Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA} \author[0000-0002-9392-9681]{Edo Berger} \affiliation{Center for Astrophysics | Harvard $\&$ Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA} \begin{abstract} In this work we study the formation of the first two \ac{BHNS} mergers detected in gravitational waves (\gwone and \gwzero) from massive stars in wide isolated binary systems -- the \textit{isolated binary evolution channel}. We use \Nmodels \ac{BHNS} binary population synthesis model realizations from \citet{ZenodoDCOBHNS:2021} and show that the system properties (chirp mass, component masses and mass ratios) of both \gwone and \gwzero match predictions from the isolated binary evolution channel. We also show that most model realizations can account for the local \ac{BHNS} merger rate densities inferred by LIGO-Virgo. However, to simultaneously also match the inferred local merger rate densities for BHBH and NSNS systems we find we need models with moderate kick velocities ($\sigma\lesssim 10^2$\kms) or high common-envelope efficiencies ($\alpha_{\rm{CE}}\gtrsim 2$) within our model explorations. We conclude that the first two observed \ac{BHNS} mergers can be explained from the isolated binary evolution channel for reasonable model realizations. % % \end{abstract} % Select between one and six entries from the list of approved keywords. % Don't make up new ones. \keywords{ (transients:) black hole - neutron star mergers -- gravitational waves -- stars: evolution} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%% BODY OF PAPER %%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%% INTRODUCTION %%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Introduction} \label{sec:introduction} In June 2021, \citet{Abbott:2021-first-NSBH} announced the first observations of \acp{GW} from two \ac{BHNS} merger events -- \gwone and \gwzero{} -- during the third LIGO-Virgo-Kagra (LVK) observing run (O3). \gwone was detected by all three detectors from LIGO and Virgo and has chirp mass\footnote{Throughout this paper we use the reported `high spin' source parameters. This assumption does not significantly impact our results. The values reported are the median and $90\%$ credible intervals.} $\mchirpf = 2.42_{-0.07}^{+0.05}$\Msun, total mass $\mtotf = 7.1_{-1.4}^{+1.5}$\Msun, component masses $\mbhf = 5.{7}_{-2.1}^{+1.8}$ $\Msun$ and $\mnsf = 1.{5}_{-0.3}^{+0.7}$\Msun and mass ratio $\qf \equiv (\mnsf / \mbhf) = 0.26_{-0.10}^{+0.35}$. \gwzero was effectively only observed by LIGO Livingston as LIGO Hanford was offline and the signal-to-noise ratio in Virgo was below the threshold of 4.0. 
\gwzero has $\mchirpf = 3.41^{+0.08}_{-0.07}$, $\mtotf = 10.9_{-1.2}^{+1.1}$\Msun, $\mbhf = 8.{9}_{-1.5}^{+1.2}$ $\Msun$, $\mnsf = 1.{9}_{-0.2}^{+0.3}$ $\Msun$ and $\qf = 0.22_{-0.04}^{+0.08}$. From these observations \citet{Abbott:2021-first-NSBH} infer a local \ac{BHNS} merger rate density of $\Rbhns = {45}_{-33}^{+75}$\Gpcyr when assuming that \gwone and \gwzero are solely representative of the entire \ac{BHNS} population; and $\Rbhns = {130}_{-69}^{+112}$\Gpcyr when assuming a broader distribution of component masses \citep{Abbott:2021-first-NSBH}. For the individual \gwone and \gwzero events, the authors quote inferred local merger rate densities of $\Rgwone = {36}_{-30}^{+82}$\Gpcyr and $\Rgwzero = {16}_{-14}^{+38}$\Gpcyr, respectively. In \citet{GWTC2:pop}, LVK reported a \ac{NSNS} merger rate of $\Rnsns = 320^{+490}_{-240}$\Gpcyr, and four different \ac{BHBH} $90\%$ credible rate intervals spanning $\Rbhbh\approx 10.3-104$\Gpcyr. The main formation channel leading to merging \ac{BHNS} systems (and BHBH and NSNS) is still under debate. A widely studied channel is the formation of \ac{BHNS} mergers from massive stars that form in (wide) isolated binaries and evolve typically including a common-envelope (CE) phase \citep[e.g.,][]{Neijssel:2019,Belczynski:2020,Shao:2021}. Other possible channels include formation from close binaries that can evolve chemically homogeneously \citep{MandelDeMink:2016,Marchant:2017}, metal-poor population III stars that formed in the early Universe \citep[e.g.][]{Belczynski:2017popIII}, stellar triples \citep[][]{FragioneLoeb:2019a,HamersThompson:2019}, or from dynamical or hierarchical interactions in globular clusters \citep[][]{Clausen:2013, ArcaSedda:2020, Ye:2019}, nuclear star clusters \citep[][]{PetrovichAntonini:2017, McKernan:2020, Wang:2020} and young and/or open star clusters \citep[e.g.,][]{Ziosi:2014,Rastello:2020}. We refer the reader to \citet[][]{MandelBroekgaardenReview:2021} for a living review of these various formation channels. Here we focus on addressing the key question: {\it Could GW200115 and GW200105 have been formed through the isolated binary evolution scenario?} To investigate this we use the simulations from \citet{ZenodoDCOBHNS:2021} to study the formation of merging \ac{BHNS} systems from pairs of massive stars that evolve through the isolated binary evolution scenario. The paper is structured as follows. In \S\ref{sec:method} we describe our method and models. In \S\ref{sec:results-intrinsic-merger-rates} we show that most of our models do match the inferred \ac{BHNS} rate densities, but that only models with higher \ac{CE} efficiencies or moderate \ac{SN} kicks are also consistent with the inferred \Rbhbh and \Rnsns. In \S\ref{sec:results-matching-the-GW-properties} we compare the properties of \gwone and \gwzero to the overall expected \ac{GW}-detectable \ac{BHNS} population. We end with a discussion in \S\ref{sec:discussion} and present our conclusions in \S\ref{sec:conclusions}. 
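Throughout this paper, the chirp mass is the standard combination of the component masses that determines the leading-order \ac{GW}-driven inspiral,
\begin{equation}
\mchirpf = \frac{(\mbhf\,\mnsf)^{3/5}}{(\mbhf + \mnsf)^{1/5}},
\end{equation}
so that, for example, the median component masses quoted above for \gwone ($\mbhf \approx 5.7\Msun$, $\mnsf \approx 1.5\Msun$) give $\mchirpf \approx 2.4\Msun$, consistent with the quoted credible interval.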
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%% METHOD %%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Method}
\label{sec:method}
% \subsection{Binary population synthesis set-up}
We use the publicly available binary population synthesis simulations from \citet[][presented in \citealt{BroekgaardenDCOmergers:2021}]{ZenodoDCOBHNS:2021} to study the formation of \gwone and \gwzero from the isolated binary evolution channel. The simulations used in this work add new model realizations compared to \citet{Broekgaarden:2021}, and also consider merging \ac{BHBH} and \ac{NSNS} systems. The simulations are performed using the rapid binary population synthesis code {\sc{COMPAS}}\footnote{Compact Object Mergers: Population Astrophysics and Statistics, \url{https://compas.science}.} \citep[][]{Stevenson:2017, Barrett:2017, VignaGomez:2018, Broekgaarden:2019, Neijssel:2019}, which is used to model the evolution of the binary systems and to determine the source properties and rates of the double compact object mergers.

The \ac{BHNS} population data set contains a total of \Nmodels model realizations to explore the uncertainty in the population modelling: \NmodelsBPS different binary population synthesis variations (varying assumptions for common envelope, mass transfer, supernovae and stellar winds) and \NmodelsMSSFR model variations in the metallicity-specific star formation rate density model, \SFRD (varying assumptions for the star formation rate density, mass-metallicity relation and galaxy stellar mass function), which is a function of birth metallicity ($Z$) and redshift ($z$). The population synthesis simulations are labelled A, B, C, ..., T, with each variation representing one change in the physics prescription compared to the fiducial model `A' (see Table~1 in \citealt{BroekgaardenDCOmergers:2021}); the \SFRD models are labelled 000, 111, 112, ..., 333 (see Table~3 in \citealt[][]{Broekgaarden:2021}). To obtain high-resolution simulations, \citet{BroekgaardenDCOmergers:2021} simulated, for each population synthesis model, a million binaries across 53 $\Zi$ bins and used the adaptive importance sampling algorithm STROOPWAFEL \citep{Broekgaarden:2019} to further increase the number of \ac{BHNS} systems in the simulations. Doing so resulted in a total data set of over 30 million \ac{BHNS} systems, making it the most extensive simulation data set of its kind to date.

We define \ac{BHNS} systems in our simulations to match the observed \gwone and \gwzero if their \mchirpf, \mtotf, \monef, \mtwof and \qf all lie within the inferred $90\%$ credible intervals (\S\ref{sec:introduction}). We note that \citet{Abbott:2021-first-NSBH} also inferred $90\%$ credible intervals for the spins of both \ac{BHNS} systems, but due to the large uncertainties in the measurements and in the theory of spins we leave this topic for discussion in \S\ref{sec:discussion} and do not explicitly take spins into account for the \ac{BHNS} system selection. We calculate \Rbhns using Equation~2 in \citet{Broekgaarden:2021}, where we assume a local redshift $z\approx 0$, and discuss these intrinsic merger rates in \S\ref{sec:results-intrinsic-merger-rates}. We obtain the detection-weighted distributions for the \ac{BHNS} mergers using Equation~3 from \citet{Broekgaarden:2021} and discuss the results in \S\ref{sec:results-matching-the-GW-properties}.
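To make the selection criterion above concrete, the following schematic Python snippet shows how a simulated system could be tagged as \gwone-like. The variable and function names and the hard-coded $90\%$ credible intervals (taken from \S\ref{sec:introduction}) are for illustration only; the actual selection is performed in the COMPAS post-processing and is more involved.
\begin{verbatim}
# Schematic only: tag a simulated BHNS system as GW200115-like if its
# chirp mass, total mass, component masses and mass ratio all fall
# inside the 90% credible intervals quoted in the Introduction.
GW200115 = {                      # (low, high) bounds, masses in Msun
    "m_chirp": (2.35, 2.47),      # 2.42 -0.07 +0.05
    "m_tot":   (5.7, 8.6),        # 7.1  -1.4  +1.5
    "m_bh":    (3.6, 7.5),        # 5.7  -2.1  +1.8
    "m_ns":    (1.2, 2.2),        # 1.5  -0.3  +0.7
    "q":       (0.16, 0.61),      # 0.26 -0.10 +0.35
}

def chirp_mass(m1, m2):
    """Standard chirp mass from the two component masses."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def matches(event, m_bh, m_ns):
    props = {
        "m_chirp": chirp_mass(m_bh, m_ns),
        "m_tot":   m_bh + m_ns,
        "m_bh":    m_bh,
        "m_ns":    m_ns,
        "q":       m_ns / m_bh,
    }
    return all(lo <= props[k] <= hi for k, (lo, hi) in event.items())

print(matches(GW200115, m_bh=5.7, m_ns=1.5))   # True for the median values
\end{verbatim}
An analogous set of bounds is applied for \gwzero.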
To calculate the detectable \ac{GW} population we assume the sensitivity of a GW detector network equivalent to advanced LIGO in its design configuration \citep{2015CQGra..32g4001L, 2016LRR....19....1A, 2018LRR....21....3A}, a reasonable proxy for O3. For the purpose of comparison, we use the LIGO-Virgo posterior samples for \gwone and \gwzero from \citet{Abbott:2021-open-GWTC-data}. \section{Predicted BHNS Merger Rates and Properties} \label{sec:results} \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{figures/Rates_intrinsic_single_panel.pdf} \caption{Predicted local \ac{BHNS} merger rate density, \Rbhns, for our \Nmodels model variations. % The shaded horizontal bars mark the corresponding \ac{GW}-inferred $90\%$ credible intervals for the merger rate densities from \citet[][]{Abbott:2021-first-NSBH}: $\Rbhns = {45}_{-33}^{+75}$\Gpcyr and $\Rbhns = {130}_{-69}^{+112}$\Gpcyr. % We connect simulation predictions that use the same \SFRD model with a line for visual clarity only. Two \SFRD variations, 231 (dashed) and 312 (dotted), are highlighted. % Model realizations matching the inferred \Rbhns and also the $90\%$ credible intervals for the \ac{BHBH} and \ac{NSNS} merger rates from \citet[][]{GWTC2:pop} are marked with red crosses. % In the top we added colored labels to indicate what physics assumptions are varied compared to our fiducial assumptions in the models. An arrow points to model \model (\S\ref{sec:results-matching-the-GW-properties}). } \label{fig:Rates-Intrinsic} \end{figure*} \subsection{Local BHNS merger rates} \label{sec:results-intrinsic-merger-rates} In Figure~\ref{fig:Rates-Intrinsic} we show the predicted local merger rate densities from our \Nmodels model realizations for the overall \ac{BHNS} population, in comparison to the $90\%$ credible intervals from \citet{Abbott:2021-first-NSBH}. We find that the majority of the \Nmodels model realizations match one of the two observed \ac{BHNS} merger rate densities. Model realizations that under-predict the observed rates include most \SFRD variations of model G ($\alpha_{\rm{CE}}=0.1$) corresponding to inefficient \ac{CE} ejection, which increases the number of stellar mergers during the \ac{CE} phase (our fiducial model uses $\alpha_{\rm{CE}}=1$), and about half of the \SFRD variations of model D, which assumes a high mass transfer efficiency ($\beta = 0.75$), as opposed to our fiducial model that assumes an adaptive $\beta$ based on the stellar type and thermal timescale and typically results in $\beta\lesssim 0.1$ for systems leading to \ac{BHNS} mergers. Conversely, some model realizations over-predict the observed rates, in particular about half of the \SFRD variations of models P, Q and R. These models have moderate or low \ac{SN} natal kick magnitudes, increasing the number of \ac{BHNS} systems that stay bound during the \acp{SN}. The \SFRD variations that over-predict the observed rates correspond to lower average metallicities, thereby increasing the formation efficiency of \ac{BHNS} mergers \citep{Broekgaarden:2021}. 
On the other hand, we find that only a small subset of the \Nmodels model realizations (shown with red crosses in Figure~\ref{fig:Rates-Intrinsic}) also match the inferred $90\%$ credible intervals of the observed \ac{BHBH} and \ac{NSNS} merger rate densities (\S\ref{sec:introduction}; \citealt{BroekgaardenDCOmergers:2021})\footnote{We note that \Rbhbh, \Rbhns and \Rnsns could have (large) contributions from formation channels other than the isolated binary evolution channel.}, namely models I, J, P and Q in conjunction with a few of the \SFRD variations. Both the higher $\alpha_{\rm{CE}}$ values in models I and J ($\alpha_{\rm{CE}}\gtrsim 2$) and the low \ac{SN} natal kicks in models P and Q ($\sigma\approx 30$ or $100\kms$, where $\sigma$ is the one-dimensional root-mean-square velocity dispersion of the Maxwellian distribution used to draw the \ac{SN} natal kick magnitudes) result in relatively high \ac{NSNS} rates that can match\footnote{Most isolated binary evolution predictions (including most of our model variations) underestimate the inferred \ac{NSNS} merger rate (e.g., \citealt{Chruslinska:2018,MandelBroekgaardenReview:2021}).} the high observed \Rnsns. Requiring a match with the observed \Rbhbh mostly constrains the \SFRD models to those with moderate average star formation metallicities, as our models with typically low \Zi (e.g., 231) overestimate the inferred \Rbhbh \citep{BroekgaardenDCOmergers:2021}.

Within the matching models, models I, P and Q match the inferred \Rbhns that is based on a broader \ac{BHNS} mass distribution, whereas the matching model J variations overlap only with the observed rate based on a \gwone- and \gwzero-like population. We note, however, that our binary population synthesis models in all cases predict a broader mass distribution than just \gwone- and \gwzero-like events. We investigate this in detail in Figure~\ref{fig:chirp-mass-cdf-matching-models}, where we plot the cumulative \ac{BHNS} chirp mass distributions of our model variations, in comparison to the chirp masses spanned by \gwone and \gwzero, $2.35\lesssim \mchirpf / \Msun \lesssim 3.49$. We find that $\approx 60\%$ of the \ac{GW}-detectable \ac{BHNS} systems in model J are expected to have \mchirpf outside of this range, while for the matching models I, P and Q this is about $60\%$, $50\%$ and $50\%$, respectively. For models I, P and Q this result is expected, since they match the \Rbhns range that is based on a broader mass distribution; for model J, however, the large fraction of systems outside this range is in tension with its match to the \Rbhns interval based on a \ac{BHNS} population defined by \gwone- and \gwzero-like events. From Figure~\ref{fig:chirp-mass-cdf-matching-models} it can be seen that, besides models I, J, P and Q, all other model realizations also generally predict \ac{BHNS} populations with broader chirp mass distributions than the range spanned by \gwone and \gwzero alone. The models using the rapid supernova prescription (model L) predict the highest fraction ($\approx 75\%$) of \ac{BHNS} systems with $2.35\lesssim \mchirpf / \Msun \lesssim 3.49$, whereas the model assuming that case BB mass transfer is always unstable (model E) results in the lowest fraction ($\approx 8\%$).

\begin{figure*}
\centering
\includegraphics[width=.75\textwidth]{figures/CDF_matching_models_Mchirp.png}
\caption{Cumulative distributions of the chirp mass for the models matching \Rbhns, \Rbhbh and \Rnsns (colored lines) and all of the other \Nmodels model realizations (light gray lines).
We also show the $90\%$ credible intervals for \gwone and \gwzero (vertical bars; \citealt{Abbott:2021-first-NSBH}). The legend indicates the label names of the matching models, while the arrows point to models E and L, which predict the lowest and highest fraction of \ac{BHNS} mergers within the chirp mass range spanning \gwone and \gwzero, respectively. } \label{fig:chirp-mass-cdf-matching-models} \end{figure*} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%% at MERGER %%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Properties of the BHNS systems} \label{sec:results-matching-the-GW-properties} % In the following discussion we focus on the specific model 'P112', as an example of a model realization that matches all of the various observed merger rate densities. We take this approach for simplicity, but note that we are not claiming that only this model realization represents the correct isolated binary evolution pathway to the observed \ac{GW} mergers. Below we examine the properties of the systems at the time of merger (chirp mass, component masses and mass ratio), as well as at the time of formation on the \ac{ZAMS} (e.g., ZAMS masses, mass ratio). \subsubsection{BHNS properties at merger} \label{results:BHNS-properties-at-merger} \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{figures/Scatter_Final_kde_with_LV_112_P.png} \caption{Corner plot showing the 1-D and 2-D distributions of the properties of the detectable \ac{BHNS} mergers from our binary population synthesis model \model. We show the chirp mass, \ac{BH} mass, \ac{NS} mass and the mass ratio at the time of merger. In gray we show the overall \ac{BHNS} population, whereas in blue (orange) we show \ac{BHNS} systems that have properties matching \gwone (\gwzero). Our \gwone (\gwzero) predictions are shown with blue (orange) scatter points and dotted histograms, whereas the posterior samples from \citet{Abbott:2021-first-NSBH} are shown with $90\%$ contour levels in the 2-D plots and with filled histograms in the 1-D panels. The gray contours show the percentage of the detectable \ac{BHNS} systems enclosed. All distributions are weighted using the \ac{GW}-detection probability. The 1-D distributions are normalized such that the peak is equal to one. } \label{fig:Triangle-final} \end{figure*} In Figure~\ref{fig:Triangle-final} we show the 1-D and 2-D distributions of the predicted properties for the \ac{GW}-detectable \ac{BHNS} population for all \ac{BHNS} systems (gray contours and 1-D distributions) and for \gwone- and \gwzero-like \ac{BHNS} systems (blue and orange scatter points and dotted histograms, respectively). The LIGO-Virgo inferred posterior samples for \gwone and \gwzero are shown with orange and blue $90\%$ credible contours in the 2-D histograms and with filled histograms in the 1-D plots, respectively. We show \mchirpf, \mbhf, \mnsf and \qf. In the top panels we normalize each 1-D distribution to peak at a value of 1. Overall, we find that model \model predicts the majority ($90\%$ percentiles) of the \ac{GW}-detectable \ac{BHNS} mergers to have $2 \lesssim \mchirpf /\Msun \lesssim 4.6$, $4.1 \lesssim \mbhf / \Msun \lesssim 14.7$, $1.3\lesssim \mnsf / \Msun \lesssim 2.4$, and $0.1\lesssim\qf\lesssim 0.4$. We emphasize that the neutron star mass and lower black hole mass boundaries of $1$\Msun and $2.5$\Msun, respectively, are set by our binary population synthesis assumptions for the lower and upper \ac{NS} mass from the delayed \citet{Fryer:2012} remnant mass prescription. 
In detail, we find several interesting features in the model distributions compared to the observed \ac{BHNS} mergers. First, we note that the inferred properties of \gwone and \gwzero lie well within the predicted \ac{GW}-detectable \ac{BHNS} population. In particular, the \gwone and \gwzero credible intervals typically overlap with the highest probability region of the corresponding distribution of the predicted \ac{BHNS} population.
%
We stress that this result does not follow trivially from the match of model \model with the inferred \Rbhns (\S\ref{sec:results-intrinsic-merger-rates}), as the properties of the intrinsic and detectable \ac{BHNS} populations \emph{could} be significantly different due to the strong bias in the sensitivity of \ac{GW} detectors towards more massive systems, which means the intrinsic and observed mass distributions can differ significantly. Only for \mnsf do the posterior samples of \gwone reach well below the predicted distribution of our models, but this is due to the remnant mass prescription, which has an artificial lower \mnsf limit of about $1.3$\Msun. The overlap between our predictions and the inferred posterior distributions can also be seen from the matches between the LVK distributions and our model-weighted distributions for \gwone and \gwzero.

Second, we find that model \model suggests the existence of a small, positive $\mbhf$--$\mnsf$ correlation in the GW-detectable \ac{BHNS} population (a similar correlation is also visible in the $\mchirpf$--$\mnsf$ distribution, but we note that the chirp mass is dependent on \mnsf). This means that we expect, on average, that \ac{BHNS} systems with more massive \acp{BH} have more massive \acp{NS}. Interestingly, this correlation also holds for \gwone and \gwzero. This correlation is visible in most of our other model variations, and was also noted in earlier work, including \citet{Kruckow:2018} and \citet{Broekgaarden:2021}. The correlation is due to the preference in the isolated binary evolution channel for more equal mass binaries. \ac{BHNS} systems with more massive \acp{BH} typically form from binaries with a more massive primary (the initially more massive star), and such systems also have on average more massive secondaries at ZAMS (cf., \citealt{Sana:2012}). In addition, the more massive secondaries at ZAMS typically lead to binaries with more equal mass ratios at the moment of the first mass transfer, making that mass transfer more likely to be stable and to successfully lead to a \ac{BHNS} system \citep{Broekgaarden:2021}. This results, on average, in a more massive \ac{NS} in binaries with a more massive \ac{BH}.

Finally, we note that several of the panels in Figure~\ref{fig:Triangle-final} show sharp gaps or peaks in the distributions, particularly visible in the scatter points and 1-D histograms. These gaps are artificial discontinuities present in some of the prescriptions in our COMPAS model and are explained in detail in \citet[][and references therein]{Broekgaarden:2021}.
We show the primary mass, secondary mass, semi-major axis and mass ratio. We do not show inferred credible intervals from LIGO-Virgo in this figure as the ZAMS properties are not measurable through \acp{GW}. }
\label{fig:Triangle-ZAMS}
\end{figure*}

In Figure~\ref{fig:Triangle-ZAMS} we show the \ac{ZAMS} properties of the binary systems that successfully form detectable \ac{BHNS} mergers: primary mass (\monei), secondary mass (\mtwoi), semi-major axis (\ai) and mass ratio ($\qi \equiv \mtwoi/\monei$). In blue (orange) we show the ZAMS properties of binaries in our simulation that eventually form \ac{BHNS} mergers matching the inferred credible intervals of \gwone (\gwzero). The distributions are weighted for the sensitivity of a \ac{GW}-detector network. Several features can be seen, which we describe below.

First, we find that \gwone- and \gwzero-like \ac{GW} mergers form from binaries that have 1D distributions ($90\%$ percentiles) in the range $26 \lesssim \monei / \Msun \lesssim 112$, $13 \lesssim \mtwoi / \Msun \lesssim 25$, $10^{0.04} \lesssim \ai / \AU \lesssim 10^{1.5}$ and $0.15 \lesssim \qi \lesssim 0.75$. From the histograms it can be seen that the initial conditions of the binaries that form \gwone- and \gwzero-like mergers are representative of the overall \ac{BHNS}-forming population.

Second, when comparing \gwone and \gwzero, we find that our model predicts that both systems formed from binaries with similar primary star masses. However, for the other ZAMS properties the model predicts that \gwzero-like \ac{BHNS} mergers form from binaries with slightly larger \mtwoi, \ai and \qi, compared to \gwone-like \ac{BHNS} mergers. The larger secondary masses for \gwzero are required to form the more massive \ac{NS} in this system. The larger secondary mass also causes the slight preference for larger \ai at \ac{ZAMS}, as the increased secondary mass impacts the timing of mass transfer in several ways, including the time at which the primary fills its Roche lobe, and the common-envelope phase later on (more or less orbital shrinking due to a different envelope mass). As a result, we find that \gwzero-like mergers form from slightly larger \ai compared to \gwone-like mergers.

Third, it can be seen that several of the distributions in Figure~\ref{fig:Triangle-ZAMS} show small gaps, corresponding to regions of ZAMS space that form \ac{BHNS} systems with combinations of \ac{BH} and \ac{NS} masses that do not match \gwone or \gwzero. These gaps are mostly a consequence of small regions in \monei, \mtwoi and \qi that map to specific \ac{BH} masses in our stellar evolution prescriptions that do not match \gwone or \gwzero.

Finally, in the $\ai$--$\qi$ plane, we note a small population of \ac{BHNS} systems around $\log(\ai) \sim -1$ and $\qi \gtrsim 0.6$ that do not form \gwone- and \gwzero-like mergers. These are a small subset of \ac{BHNS} systems that form through an early mass transfer episode initiated by the primary star when it is still core-hydrogen burning (case A mass transfer). These systems are the main contributor to the small population of \ac{BHNS} in which the \ac{NS} forms first and with $\mbhf\gtrsim 10\Msun$ (see \citealt{Broekgaarden:2021} for further details).
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%% DISCUSSION %%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Discussion} \label{sec:discussion} % % \subsection{Black Hole Spins \& Neutron Star Tidal Disruption} % \citet{Abbott:2021-first-NSBH} report the inferred $90\%$ credible interval for the primary spin magnitude $\chibh$ (i.e., spin of the \ac{BH}), of \gwone (\gwzero) to be $\chibh = 0.33_{-0.29}^{+0.48}$ ($\chibh = 0.08_{-0.08}^{+0.22}$), while the spin of the \acp{NS} are unconstrained. Both reported \chibh values are consistent with zero. However, for \gwone the authors report moderate support for negative \textit{effective} inspiral spin $\chi_{\rm{eff}} = -0.19_{-0.35}^{+0.23}$, indicating a negatively aligned spin with respect to the orbital angular momentum axis. Theoretical studies of spins in \ac{BHNS} systems formed through isolated binary evolution are still inconclusive. It has been argued that the black holes are expected to have $\chibh \approx 0$ due to efficient angular momentum transport during the star's evolution \citep[e.g.][]{FragosMcClintock:2015,FullerMa:2019}. Typically, no anti-aligned spins are expected \citep[but see e.g., the discussion in ][]{Wysocki:2017}. Studies including \citet[][]{Qin:2018} and \citet{Bavera:2020} argue that if the \ac{BH} is formed second, it can tidally spin up as a Wolf-Rayet (WR) star if the binary evolves through a tight \ac{BH}--WR phase. The same might be true for tight \ac{NS}--WR systems that can form \ac{BHNS} with a spun up \ac{BH} if the \ac{BH} forms after the \ac{NS} (e.g. if the system inverts its masses early in its evolution). However, we find that none of the \gwone and \gwzero-like \ac{BHNS} mergers in model \model do so, and hence we predict $\chibh = 0$ for both events, consistent with the LIGO-Virgo inferred credible intervals. Using the ejecta mass prescription from \citet[][Equation~4]{Foucart:2018} and the \ac{BHNS} properties from model \model, we can crudely calculate whether our simulated \ac{BHNS} systems tidally disrupt the \ac{NS} outside the \ac{BH} innermost-stable orbit and, if so, the amount of baryon mass outside the \ac{BH}. We find that when assuming $\chibh=0$ none of the \gwone- and \gwzero-like \ac{BHNS} systems have ejecta masses of $\gtrsim 10^{-6}$\Msun \citep[cf.][]{Abbott:2021-first-NSBH, Zhu:2021} for reasonable $\Rns = 11-13\km$. \subsection{Other Formation Channels} Previous predictions for \Rbhns from isolated binary evolution and alternative formation pathways have been made (see \citealt{MandelBroekgaardenReview:2021} for a review). The various isolated binary evolution studies have predicted rates ranging from a few tenths to $\sim 10^3$\Gpcyr, and a subset can match one of the LIGO-Virgo inferred \ac{BHNS} rates \citep[e.g.][]{Neijssel:2019,Belczynski:2020}. For the other formation channels, there are some studies that predict agreeable rates for formation from triples \citep[e.g.][]{HamersThompson:2019}, formation in nuclear star clusters \citet[][but see also \citealt{PetrovichAntonini:2017, Hoang:2020}]{McKernan:2020}, dynamical formation in young star clusters \citep{Rastello:2020,Santoliquido:2020} and primordial formation \citep{Wang:2021}. 
On the other hand, much lower \ac{BHNS} rates ($\Rbhns\lesssim 10$\Gpcyr), which do not match the observed rate, are expected from binaries that evolve chemically homogeneously \citep{Marchant:2017}, from population III stars \citep{Belczynski:2017popIII} and through dynamical formation in globular clusters \citep{Clausen:2013,ArcaSedda:2020,Hoang:2020,Ye:2019}. \ac{GW} observations of \ac{BHNS} might therefore provide a useful tool to distinguish between formation channels. We stress, however, that models should not only match the rates, but also the inferred mass and spin distributions of \ac{BHNS} mergers. This is particularly valuable as some of the formation channels predict \ac{BHNS} distributions with distinguishable features (e.g., a tail with larger \ac{BH} masses, $\mbhf\gtrsim 15-20\Msun$ in dynamical formation \citealt[][]{ArcaSedda:2020,Rastello:2020}) that could help constrain formation channels \citep[e.g.,][]{Stevenson:2017spin}. \subsection{Other Potential BHNS Merger Events} % Besides \gwone and \gwzero, LVK reported four potential \ac{BHNS} candidates \citep{GWTC2,GWTC2point1}: % %%%%% \begin{itemize} % \item GW190425 is most likely a NSNS merger, but a \ac{BHNS} origin cannot be ruled out. If it is a \ac{BHNS} then $\mbhf = 2.0^{+0.6}_{-0.3}$\Msun and $\mchirpf = 1.44^{+0.02}_{-0.02}$\Msun are uncommon in our simulated \ac{BHNS} population (e.g., Figure~\ref{fig:Triangle-final} and \citealt[][]{Broekgaarden:2021}). % \item GW190814 is most likely a BHBH merger, but a \ac{BHNS} origin cannot be ruled out. If so, it has $\mnsf = 2.59^{+0.008}_{-0.009}$\Msun. In \citet{Broekgaarden:2021} we noted that only our model K (which assumes a maximum NS mass of $3\Msun$) produces such heavy \ac{NS} masses, but that it does not form many GW190814-like \ac{BHNS} systems as GW190814's reported $\mchirpf =6.09^{+0.06}_{-0.06}$, $\mtotf = 25.8^{+1.0}_{-0.9}$ and $\mbhf=23.2^{+1.1}_{-1.0}$ are rare within the model \ac{BHNS} population. % \item GW190426$\_$152155 is a \ac{BHNS} candidate event, but with a marginal detection significance. If this event is real, it is inferred to have \ac{BHNS} properties very similar to \gwone \citep[see Figure~4 in][]{Abbott:2021-first-NSBH} %, namely $\mchirpf = 2.41^{+0.08}_{-0.08}$\Msun, $\mtotf = 7.2^{+3.5}_{-1.5}$\Msun, $\mbhf = 5.7^{+3.9}_{-2.3}$\Msun and $\mnsf = 1.5^{+0.8}_{-0.5}$\Msun. We therefore predict it to be (similarly) common in our simulations. % \item GW190917 is reported in the GWTC2.1 catalog, but the nature of its less massive component cannot be confirmed from the current data, and it was only classified as a BHBH event (i.e., $p_{\rm{BHNS}} = 0$) by the pipeline that detected it. If real, it might be a \ac{BHNS} with $\mchirpf = 3.7^{+0.2}_{-0.2}$, $\mtotf =11.4^{+3.0}_{-2.9}$, $\mbhf = 9.3^{+3.4}_{-4.4}$, $\mnsf = 2.1^{+1.5}_{-0.5}$ and $\qf = 0.23^{+0.52}_{-0.09}$. These properties are somewhat similar to \gwzero (although both the medians of \mbhf and \mnsf for GW190917 are slightly heavier), and we therefore predict it to be (similarly) common in our simulations. % \end{itemize} %%%%%%%%%%%%%%%% \section{Conclusions} \label{sec:conclusions} In this paper we studied the formation of the first two detected \ac{BHNS} systems (\gwone and \gwzero) in the isolated binary evolution channel using the \Nmodels binary population synthesis model realizations presented in \citet{BroekgaardenDCOmergers:2021}. 
We investigate the predicted \Rbhns, as well as the \ac{BHNS} system properties (at merger and at ZAMS) and compare these with the data from LIGO-Virgo \citep{Abbott:2021-first-NSBH}. Our key findings are: \begin{itemize} \item We find that the majority of our \Nmodels model realizations can match one of the inferred credible intervals for \Rbhns from \citet{Abbott:2021-first-NSBH}. We further find that models with higher \ac{CE} efficiency ($\alpha_{\rm{CE}}\gtrsim 2$; models I and J) or moderate \ac{SN} natal kick velocities ($\sigma\lesssim 100\kms$; models P and Q) also match the inferred $90\%$ credible intervals for \Rbhbh and \Rnsns. \item Using model \model as an example, we find that the isolated binary evolution channel predicts a \ac{GW}-detectable \ac{BHNS} population that matches the observed properties (chirp mass, component masses and mass ratios) of \gwone and \gwzero, although we expect a somewhat broader population than just \gwone and \gwzero-like \ac{BHNS} systems. \item We find that \gwone and \gwzero-like \ac{BHNS} mergers form in model \model from binaries with ZAMS properties ($90\%$ percentiles) in the range $26 \lesssim \monei / \Msun \lesssim 112$, $13 \lesssim \mtwoi / \Msun \lesssim 25$, $10^{0.04} \lesssim \ai / \AU \lesssim 10^{1.5}$ and $0.15 \lesssim \qi \lesssim 0.75$. \gwone and \gwzero-like \ac{BHNS} systems have a similar range of primary star masses, but we expect \gwzero-like \ac{BHNS} mergers to form from binaries with slightly larger \mtwoi, \ai and \qi, compared to \gwone-like systems. \item We note that if \gwone and \gwzero were formed through isolated binary evolution, then we expect their \ac{BH} to have a spin of 0, and neither system to have produced an electromagnetic counterpart. \item We discuss the four other \ac{BHNS} candidates reported by LIGO-Virgo, and find that the properties of GW190425 and GW190814 do not match our predicted \ac{BHNS} population, making them instead more likely to be NSNS and BHBH mergers, respectively. On the other hand, the properties of the \ac{BHNS} candidates GW190426$\_$152155 and GW190917 do match our predicted \ac{BHNS} population, but were reported by LIGO-Virgo with low signal-to-noise ratios. \end{itemize} We thus conclude that \gwone and \gwzero can be explained from formation through the isolated binary evolution channel, at least for some of the model realizations within our range of exploration. With a rapidly increasing population of \ac{BHNS} systems expected in Observing Run 4 and beyond, it will be possible to carry out a more detailed comparison to model simulations, and to eventually determine the evolutionary histories of \ac{BHNS} systems. \section*{Acknowledgements} We thank Gus Beane, Debatri Chattopadhyay, Victoria DiTomasso, Griffin Hosseinzadeh, Ilya Mandel, Simon Stevenson, Tom Wagg, Michael Zevin and the members of TeamCOMPAS for useful discussions and help with the manuscript. This work was supported in part by the Prins Bernhard Cultuurfonds studiebeurs awarded to FSB and by NSF and NASA grants awarded to EB. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % %%%%%%%%%%%%%%%%%%%% \section*{Data Availability} % We used the \ac{BHNS} simulations from \citet{BroekgaardenDCOmergers:2021}, which are publicly available on \url{https://doi.org/10.5281/zenodo.5178777} \citep{ZenodoDCOBHNS:2021}. All code to reproduce the figures and calculations in this work are publicly available at \url{https://github.com/FloorBroekgaarden/NSBH_GW200105_and_GW200115}. 
Simulations in this paper made use of the COMPAS rapid binary population synthesis code, which is freely available at \url{http://github.com/TeamCOMPAS/COMPAS} \citep{stevenson2017formation,Barrett:2017,VignaGomez:2018,Broekgaarden:2019, Neijssel:2019}. The simulations performed in this work were simulated with a COMPAS version that predates the publicly available code but is most similar to version 02.13.01. This research has made use of the posterior samples for \gwone and \gwzero provided by the Gravitational Wave Open Science Center (\url{https://www.gw-openscience.org/}), a service of LIGO Laboratory, the LIGO Scientific Collaboration and the Virgo Collaboration. LIGO Laboratory and Advanced LIGO are funded by the United States National Science Foundation (NSF) as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain. %%%%%%%%%%%%%%%%%%%% REFERENCES %%%%%%%%%%%%%%%%%% % The best way to enter references is to use BibTeX: \bibliographystyle{aasjournal} \bibliography{BHNS-MSSFR}{} % if your bibtex file is called example.bib %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%% APPENDICES %%%%%%%%%%%%%%%%%%%%% % \appendix %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Don't change these lines % \bsp % typesetting comment % \label{lastpage} \end{document} % End of mnras_template.tex
\documentclass{article}

%%%%%% Include Packages %%%%%%
\usepackage{sectsty}
\usepackage{amsmath,amsfonts,amsthm,amssymb}
\usepackage{fancyhdr}
\usepackage{lastpage}
\usepackage{setspace}
\usepackage{graphicx}

%%%%%% Formatting Modifications %%%%%%
\usepackage[margin=2.5cm]{geometry} %% Set margins
\sectionfont{\sectionrule{0pt}{0pt}{-8pt}{0.8pt}} %% Underscore section headers
\setstretch{1.2} %% Set 1.2 spacing

%%%%%% Set Homework Variables %%%%%%
\newcommand{\hwkNum}{5}
\newcommand{\hwkAuthors}{Ben Drucker}

%%%%%% Set Header/Footer %%%%%%
\pagestyle{fancy}
\lhead{\hwkAuthors}
\rhead{Homework \#\hwkNum}
\rfoot{\textit{\footnotesize{\thepage /\pageref{LastPage}}}}
\cfoot{}
\renewcommand\headrulewidth{0.4pt}
\renewcommand\footrulewidth{0.4pt}

%%%%%% Document %%%%%%
\begin{document}
\title{Homework \#\hwkNum}
\author{\hwkAuthors}
\date{}
\maketitle

%%%%%% Begin Content %%%%%%
\section*{4.2}
\subsection*{20}
\subsubsection*{a)}
$F(Y) = \begin{cases} 0 \leq y \leq 5 &\Rightarrow \int_0^y \frac{y}{25}dy = \frac{y^2}{50} \\ 5 \leq y \leq 10 &\Rightarrow \int_0^y f(y)dy = \int_0^5 f(y)dy + \int_5^y f(y)dy \\ &= \frac{1}{2} + \int_5^y\left [ \frac{2}{5} - \frac{y}{25} \right ] dy = \frac{2y}{5} - \frac{y^2}{50} -1 \end{cases}$
\\ \\ \\
\includegraphics[height=2in]{4-2--20a}
\subsubsection*{b)}
$ 0 < p \leq .5 \Rightarrow p=F(y_p) = \frac{y_p^2}{50} \rightarrow y_p = \sqrt{50p} \\ .5 < p \leq 1 \Rightarrow p = \frac{2y_p}{5} - \frac{y_p^2}{50}-1 \rightarrow y_p = 10-5\sqrt{2-2p}$
\subsubsection*{c)}
$E(Y) =\int_0^5 y\frac{y}{25}dy + \int_5^{10}y \left ( \frac{2}{5} - \frac{y}{25} \right )dy =5 \\ V(Y) = \left(\int_0^5 \frac{y^3}{25} \, dy+\int_5^{10} y^2 \left(\frac{2}{5}-\frac{y}{25}\right) \, dy\right)-5^2 = \frac{25}{6} $\\
For a single bus, the values are simply halved. So: $E(X) =2.5, V(X) = \frac{25}{12}$
\section*{4.3}
\subsection*{40}
\subsubsection*{a)}
$P(X \leq 40) = P \left (Z \leq \frac{40-43}{4.5} \right ) \approxeq 0.2546 \\ P(X >60) = 1- P(Z < \frac{60-43}{4.5}) \approxeq 1- 0.999... $
\subsubsection*{b)}
$ P(Z < z) = .75 \rightarrow z = .67 \rightarrow .67 = \frac{x-43}{4.5} \Rightarrow x = 46.015$
\subsection*{46}
\subsubsection*{a)}
$$P(67 \leq X \leq 75) = P \left ( \frac{67-70}{3} < Z < \frac{75-70}{3} \right ) \approxeq .953 - .159 = .794$$
\subsubsection*{b)}
$Z_{.05/2} = Z_{.025} = 1.96; 1.96 * 3 = 5.88. $
\subsubsection*{c)}
$E(RV) = .794 * 10 = 7.94$
\subsubsection*{d)}
$P(X \leq 73.84) = 0.89973\\ P(p = 0.9, n = 10, x = 9) = .387 \\ P(p = 0.9, n = 10, x = 10) = 0.349 \\ p = 1-0.387-.349 = .264$
\subsection*{48}
\subsubsection*{a)}
$p(1.72) - p(.55) = .2485\\ (p(.55)-p(0))+(p(1.72)-p(0))$

%%%%%% End Content %%%%%%
\end{document}
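The following Python snippet is not part of the original assignment or solution; it is included here only as an illustrative numerical cross-check (using NumPy/SciPy) of the values computed above.
\begin{lstlisting}[language=Python, caption={Numerical cross-check of the hand computations (illustrative only).}]
# Cross-check of the hand-computed results above (illustrative only).
from scipy import integrate, stats

# 4.2.20: piecewise density f(y) on [0, 10].
f1 = lambda y: y / 25.0                 # 0 <= y <= 5
f2 = lambda y: 2.0 / 5.0 - y / 25.0     # 5 <= y <= 10

EY  = integrate.quad(lambda y: y * f1(y), 0, 5)[0] + integrate.quad(lambda y: y * f2(y), 5, 10)[0]
EY2 = integrate.quad(lambda y: y**2 * f1(y), 0, 5)[0] + integrate.quad(lambda y: y**2 * f2(y), 5, 10)[0]
print(EY, EY2 - EY**2)        # ~5.0 and ~4.1667 (= 25/6)

# 4.3.40(a): X ~ N(43, 4.5^2)
print(stats.norm.cdf(40, loc=43, scale=4.5))        # ~0.25 (tables give 0.2546 for z = -0.66)
print(1 - stats.norm.cdf(60, loc=43, scale=4.5))    # ~0.00008

# 4.3.46(a): X ~ N(70, 3^2)
print(stats.norm.cdf(75, 70, 3) - stats.norm.cdf(67, 70, 3))   # ~0.794

# 4.3.46(d): 1 - P(9 of 10) - P(10 of 10) with p = 0.9
print(1 - stats.binom.pmf(9, 10, 0.9) - stats.binom.pmf(10, 10, 0.9))  # ~0.264
\end{lstlisting}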
\chapter{Introduction}
\label{chap1}
This document describes SPF2 (Social Proximity Framework 2), the new major release of the software written for the master thesis available at http://hdl.handle.net/10589/106727. To fully understand this document, you should really read the first one (the original master thesis). This document is more implementation-oriented than the master thesis, and for this reason I won't repeat the theory behind SPF.

\noindent The main objectives of this second major release are:
\begin{itemize}
\item update the entire project to Android Studio and Gradle.
\item split the three demo applications into separate projects.
\item put \textsf{SPFShared} and \textsf{SPFLib} into a Maven repository to simplify the creation of SPF applications.
\item update all GUIs to Material Design, in particular by using the necessary Google support libraries.
\item officially support Android 6.0.
\item completely remove the AllJoyn/AllSeen Middleware and replace it with a pure and complete Wi-Fi Direct Middleware.
\item improve the Wi-Fi Direct Middleware's architecture and source code, increasing its reliability.
\item support Wi-Fi Direct groups made up of two or more devices.
\end{itemize}

\noindent But why did I choose these particular objectives? Because I decided to:
\begin{itemize}
\item update the project to the new Android IDE and its build tool, because Google decided to stop developing ADT for Eclipse.
\item improve the separation of concepts and the related projects.
\item simplify the usage of SPF for app developers.
\item simplify the future work of SPF developers if they want to add other GUI elements and so on\dots{} In fact, updating all projects to the support libraries will make it easier to update SPF to newer Android versions and themes.
\item use the Wi-Fi Direct protocol, which was designed specifically for mobile devices.
\item use my previous experience with Wi-Fi Direct to improve the stability and reliability of the Wi-Fi Direct Middleware.
\item support Wi-Fi Direct groups by specifying the device's type from the UI.
\end{itemize}

All projects are available on GitHub and licensed under \emph{LGPLv3}. I also included \emph{Travis Continuous Integration} to automatically compile the source code on every \textsf{git push}, in particular to build the stable releases. The official repositories are:
\begin{enumerate}
\item https://github.com/deib-polimi/SPF2
\item https://github.com/deib-polimi/SPF2CouponingProviderDemo
\item https://github.com/deib-polimi/SPF2CouponingClientDemo
\item https://github.com/deib-polimi/SPF2ChatDemo
\end{enumerate}

\begin{lstlisting}[caption={settings.gradle},label=lst:settingsgradle, language=Java]
include ':sPFShared'
include ':sPFFramework'
include ':sPFWFDMid'
include ':sPFLib'
include ':sPFApp'
\end{lstlisting}

\begin{lstlisting}[caption={build.gradle},label=lst:buildgradle, language=Java]
dependencies {
    compile project(':sPFShared')
    compile 'com.google.code.gson:gson:2.4'
    compile 'com.android.support:support-v4:23.1.0'
    provided 'org.projectlombok:lombok:1.16.6'
}
\end{lstlisting}

The first one is the main project, which also contains \textsf{SPFShared} and \textsf{SPFLib}. All submodules are connected using normal Gradle dependencies, i.e., there is a \textsf{settings.gradle} file where I declared all submodule names, and in every local module's \textsf{build.gradle} I specified the dependencies on other submodules.
For example, in Listing \ref{lst:settingsgradle} there is the \textsf{settings.gradle} that I used, and in Listing \ref{lst:buildgradle} the \textsf{build.gradle} of the submodule called \textsf{sPFFramework}. As you can see, there are local module dependencies like \textsf{gson} and \textsf{support-v4}, and a provided dependency like \textsf{lombok}. Finally, there is the \textsf{sPFShared} module, which is not fetched online like the others, but lives locally in the Android Studio project. In fact, all the dependencies in \textsf{SPFApp} are local, because it is the place where you develop them. In external apps, like \emph{SPFCouponing} and \emph{SPFChat}, all dependencies are remote, using the deployed versions of \textsf{SPFShared} and \textsf{SPFLib}. This is the correct way to manage dependencies for SPF.

A very big improvement would be the total separation of \textsf{SPFApp} from \textsf{sPFFramework} and all other submodules. But this is another topic, and I'll give more details and hints at the end of this document, because it should be the main objective of the next major release.

\section{The new project structure}
After the first overview, it's time to explain in detail the new project structure for \textsf{SPFApp}. As described before, all applications based on SPF should only add the Maven dependencies for \textsf{SPFShared} and \textsf{SPFLib}.

\section*{sPFApp module}
For the main project, the structure is more complicated. There is an Android application called \textsf{SPFApp} (the main module), because in its Gradle file there is this line: ``apply plugin: 'com.android.application'``. In this file I specified some information, like \textsf{versionCode} and \textsf{versionName}. These two values are important, because when you want to release another version, you should increase the first one and change the second one, for example to ``3.0.0" or ``2.1.0".

\begin{lstlisting}[caption={Gradle wrapper},label=lst:gradlewrapper, language=Java]
task wrapper(type: Wrapper) {
    gradleVersion = "2.8"
}
\end{lstlisting}

An important part of keeping this application up to date is forcing the \textsf{compileSdkVersion}, \textsf{buildToolsVersion} and \textsf{targetSdkVersion} to the latest available values. Also, it's good practice to always use the latest \emph{Gradle Wrapper}, forcing the version in all Gradle files as in Listing \ref{lst:gradlewrapper}.

However, the most important part is the dependencies block. Inside this block there are all the local dependencies for this module. In particular, for SPFApp, the list is very long, because it contains all the libraries used to create the GUI, like \textsf{MaterialDrawer}, \textsf{Iconics}, \textsf{Butterknife}, \textsf{CircleImageView}, \textsf{Soundcloud-crop} and \textsf{AboutLibrary}. It's very important to update these libraries, because some of them receive a huge number of updates with fixes and improvements. Also, I used other dependencies like Lombok, to use annotations like @Getter and @Setter. At the moment, Lombok requires a config file called \textsf{lombok.config} to be able to build the project. Finally, there are all the Google dependencies needed to target different Android versions using the latest GUI components. You should remember to use classes from the support libraries to be able to show the latest and most modern GUI elements in all Android versions, whenever possible.
For example, you should use the \textsf{Fragment} class from \textsf{support-v4}, or \textsf{AppCompatActivity} and not simply \textsf{Activity}, or \textsf{Loader} and \textsf{AsyncTaskLoader} from the support libraries, and so on. If you follow these simple suggestions, you'll be able to release a new update very quickly. Also remember to choose the correct Android theme in \textsf{src/res/values/styles.xml}, like \textsf{Theme.AppCompat.Light.NoActionBar}. This is absolutely required by Android when you want to use the support libraries. Note that I'm using \emph{NoActionBar}, because I prefer the newer and extremely versatile \emph{Toolbar} instead of the older \emph{ActionBar}.

Finally, to be able to compile with \emph{Travis CI}, you must provide a correct \textsf{.travis.yml} file specifying exactly the same versions written in your Gradle files. For example, if you are targeting API 23, you must write ``- android-23" in your \textsf{.travis.yml}.

But where are all the dependencies described here? They are in a Maven repository. This means that they are fetched online, or locally if you have previously downloaded those files. Instead, the dependencies specified with \textsf{compile project(':moduleName')} are the modules inside the Android Studio project, which means that they are local.

\section*{sPFFramework and sPFWFDMid modules}
These are \textsf{com.android.library} modules. For this reason, you can't start them as Android applications. The structure is quite the same, always with remote and local dependencies.

\section*{sPFLib and sPFShared modules}
These modules are different, because they also have \textsf{apply plugin: 'maven'} to be able to push \textsf{.aar} files to a local Maven server. In my case, I chose \emph{Sonatype Nexus OSS} and I specified the \textsf{uploadArchives} task inside these two \textsf{build.gradle} files to be able to push the compiled libraries to the local server. This is very useful during development. In this way, you aren't obliged to push every snapshot to a remote Maven repository, but you can test your code locally. To be able to use this, you must download the server from \seqsplit{http://www.sonatype.org/nexus/go/}. After that, you can start the server following the instructions on the official website, open it at \seqsplit{http://localhost:8081/nexus} and log in as administrator.

\begin{lstlisting}[caption={gradle.properties},label=lst:gradleproperties, language=Java]
nexusUrl=http://localhost:8081/nexus
nexusUsername=admin
nexusPassword=admin123
\end{lstlisting}

To be able to push directly to this server from Android Studio, you should create a \textsf{gradle.properties} file inside the root project and fill it with the content of Listing \ref{lst:gradleproperties}. Please remember not to push this file to the remote repository, because it should remain a local configuration for your machine. After that, build the entire project; you are now able to upload \textsf{SPFLib} and \textsf{SPFShared} to this server by executing the \emph{uploadArchives} task from Android Studio. Remember that at every upload you should increase the version number in the \textsf{build.gradle} to prevent caching problems.
\begin{lstlisting}[caption={Main build.gradle},label=lst:main-build.gradle]
buildscript {
    repositories {
        jcenter()
        maven { url "http://localhost:8081/nexus/
                     content/groups/public" }
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:1.3.1'
    }
}
allprojects {
    repositories {
        jcenter()
        maven { url "http://localhost:8081/nexus/
                     content/groups/public" }
    }
}
\end{lstlisting}

With this information you can build \textsf{SPFApp}, but what should you do if you want to build an SPF application that requires \textsf{SPFShared} and \textsf{SPFLib}? If you want to use my stable versions, you can simply include these libraries from the remote Maven repository. But what is the correct procedure to use unstable versions from \emph{Sonatype Nexus OSS}? It's very simple: you can use the main \textsf{build.gradle} file in Listing \ref{lst:main-build.gradle}. Obviously, also in this file you should update the Gradle build tools, to be able to use the latest libraries and functions of the IDE (for example, when Android Data Binding is released you will have to update the build tools to compile; the same thing happened some months ago with NDK support).

\begin{lstlisting}[caption={Module build.gradle example},label=lst:module-build.gradle]
compile 'it.polimi.spf:spflib:2.0.0.0@aar'
compile 'it.polimi.spf:spfshared:2.0.0.0@aar'
\end{lstlisting}

After that, you can simply put the dependencies in Listing \ref{lst:module-build.gradle} into the local module's \textsf{build.gradle} file. Obviously, to be able to compile, \emph{Sonatype Nexus OSS} must be running and hosting these artifacts. With all this information, you can build all your SPF applications and update all dependencies and tools to the latest versions.
\section{Epilogue} \subsection{Further research} \begin{frame}{Open research} On \url{https://academia.edu/12376921} you can freely download: \begin{itemize} \item handout \item presentation \item transcription \item database (subset view) \end{itemize} \vfill \begin{center} \includegraphics[width=2ex]{../cc.eps}\hspace{1ex}\includegraphics[width=2ex]{../by.eps}\\ {\small Creative Commons Attribution} \end{center} \end{frame} \subsection{Conclusion} \begin{frame}{Retrospective} \begin{itemize} \item formal method, anchored in the text \item exemplification of some aspects of the complex linguistic relationship between the Hebrew and the Welsh texts \end{itemize} \end{frame} \begin{frame}{} \begin{center} \fbox{{\includegraphics[height=5cm]{images/Morgan.jpg}}} \vfill Bishop William Morgan.\\ Imaginary portrait by T.\ Prytherch, 1907 \end{center} \end{frame}
%% %% This is file `askinclude-b22.tex', %% \chapter{Chapter B} \expandafter\let\csname fileb22\endcsname=Y \endinput %% %% End of file `askinclude-b22.tex'.
\section{Introduction} \label{sec:Introduction} This document provides a detailed description of the SR@ML plugin for the RAVEN~\cite{RAVEN,RAVENtheoryMan} code. The features included in this plugin are: \begin{itemize} \item [] \end{itemize}
\section{The Trainers} \newlength{\trainerIconWidth} \setlength{\trainerIconWidth}{2.0cm} \begin{center} \begin{longtable}{>{\centering\arraybackslash} m{1.1\trainerIconWidth} m{1\textwidth}} % Use the following block of commended LaTeX as a template for each trainer taking part in the workshop % ---- START TEMPLATE ---- % %\includegraphics[width=\trainerIconWidth]{handout/photos/generic.jpg} & % \textbf{Dr. Jane Bloggs}\newline % Research Fellow in Bioinformatics\newline % The Blogg Institute, Somewhere\newline % \mailto{[email protected]}\\ % ----- END TEMPLATE ----- % \end{longtable} \end{center}
\documentclass{article} \usepackage{amsmath} \begin{document} \section{Reparametrizing the Two Body Problem} When fitting radial velocity (RV) data for exoplanet or binary star systems, the RV model is typically expressed as: \begin{equation} v_r(t) = v_0 + K\,\left[\cos(\omega + f(t)) + e\,\cos(\omega)\right] \end{equation} where $v_0$ is the systemic (barycentric) velocity of the system relative to the Sun, $K$ is the velocity semi-amplitude, $\omega$ is the argument of pericenter, and $f(t)$ is the True Anomaly. The True Anomaly is computed from the Mean Anomaly $M(t)$ \begin{equation} M = \frac{2\pi}{P} \, (t - t_{\mathrm{peri}}) \end{equation} through a transcendental equation that relates the Mean Anomaly to the Eccentric Anomaly $E(t)$: \begin{equation} M = E - e\,\sin E \end{equation} The Eccentric Anomaly is related to the True Anomaly by: \begin{align} \tan\frac{f}{2} &= \sqrt{\frac{1+e}{1-e}} \, \tan\frac{E}{2} \\ \tan\frac{E}{2} &= \sqrt{\frac{1-e}{1+e}} \, \tan\frac{f}{2} \end{align} In the above parametrization, the full set of parameters is $(v_0, K, P, e, \omega, t_{\mathrm{peri}})$. Another common representation of the Mean Anomaly is in terms of the phase of pericenter $M_0$, which specifies the phase at which pericenter occurs in the angle defined by time relative to a reference time $t_{\mathrm{ref}}$, which is often taken to be the minimum or mean observation time. So, an alternate definition of Mean Anomaly is: $$ M = \frac{2\pi}{P} \, (t - t_{\mathrm{ref}}) - M_0 $$ With this convention, $t_{\mathrm{peri}}$ is replaced by $M_0$ as a parameter so the list of parameters is $(v_0, K, P, e, \omega, M_0)$, but it is straightforward to transform between $M_0$ and $t_{\mathrm{peri}}$. In either case, the Mean Anomaly is defined such at $M=0$ (mod $2\pi$) occurs at pericenter. TODO: discussion about how this is not a good parametrization for MCMC sampling, and history (Ford 2005). \subsection{Replacing $\omega, M_0$ with phase of max/min velocity} Observed phase of max/min radial velocity $M^*_{\mathrm{max}}, M^*_{\mathrm{min}}$. \begin{align} M^*_{\mathrm{max}} &= \frac{2\pi}{P} \, (t_{\mathrm{max}} - t_{\mathrm{ref}}) \\ M^*_{\mathrm{min}} &= \frac{2\pi}{P} \, (t_{\mathrm{min}} - t_{\mathrm{ref}}) \end{align} Note: $M^*_{\mathrm{max}}, M^*_{\mathrm{min}}$ are not actually mean anomalies. They are mean anomaly $+ M_0$, because what we observe are phases relative to the reference time $t_{\mathrm{ref}}$. So, how do we relate these quantities to $(\omega, M_0)$? As far as I can figure, there is no direct, closed-form transformation, but I have figured out a procedure to do the transformation in terms of a different transcendental equation. This method is built on the intuition (guess) that the quantity \begin{equation} \Delta M = M^*_{\mathrm{min}} - M^*_{\mathrm{max}} \end{equation} will be related to the argument of pericenter. Defining also \begin{align} \Delta E &= E_{\mathrm{min}} - E_{\mathrm{max}} \\ \Sigma E &= E_{\mathrm{min}} + E_{\mathrm{max}} \end{align} we can re-express the classic Kepler transcendental equation as \begin{align} \Delta M &= \Delta E - e\, (\sin E_{\mathrm{min}} - \sin E_{\mathrm{max}})\\ \frac{\Delta M}{2} &= \frac{\Delta E}{2} - e\, \sin\frac{\Delta E}{2} \, \cos\frac{\Sigma E}{2} \end{align} where the last line makes use of the `difference of sines' trig identity. 
Using the relationship between $E$ and $f$, \begin{align} (E_{\mathrm{min}} - E_{\mathrm{max}})/2 = \arctan\left[ \sqrt{\frac{1-e}{1+e}} \, \tan \frac{f_{\mathrm{min}}}{2} \right] - \arctan\left[ \sqrt{\frac{1-e}{1+e}} \, \tan \frac{f_{\mathrm{max}}}{2} \right] \end{align} However, we know the values of $f_{\mathrm{min}}$ and $f_{\mathrm{max}}$: From Equation (1), we know that the RV is maximum when $f=f_{\mathrm{max}} \equiv -\omega$, and minimum when $f=f_{\mathrm{min}} \equiv \pi-\omega$. Plugging in these values and doing some trig algebra (using the difference of arctan's trig identity), this simplifies to \begin{align} \Delta E / 2 = \arctan\left(\frac{\sqrt{1 - e^2}}{e\,\sin\omega}\right) \end{align} and similarly, for the sum of $E_{\mathrm{min}} + E_{\mathrm{max}}$, \begin{align} \Sigma E / 2 = \arctan\left(\frac{\sqrt{1 - e^2}}{\tan\omega}\right) \quad . \end{align} So, the relationship between $\omega$, $e$, and $\Delta M$ is defined by the expression \begin{align} \frac{\Delta M}{2} &= \frac{\Delta E}{2} - e\, \sin\frac{\Delta E}{2} \, \cos\frac{\Sigma E}{2} \\ % \frac{\Delta M}{2} &= % \arctan\left(\frac{\sqrt{1 - e^2}}{e\,\sin\omega}\right) - % \,e \, % \sin\left(\arctan\left(\frac{\sqrt{1 - e^2}}{e\,\sin\omega}\right)\right) \, % \cos\left(\arctan\left(\frac{\sqrt{1 - e^2}}{\tan\omega}\right)\right) \frac{M_{\rm min} - M_{\rm max}}{2} &= \arctan\left( \frac{\sqrt{1 - e^2}}{e \, \sin\omega} \right) + \frac{2\, e \, \sqrt{1 - e^2} \, \sin\omega}{e^2 - 2 + e^2\,\cos(2\omega)} \end{align} and the phase $M_0$ can be retrieved with \begin{align} M_0 &= M^*_{\mathrm{max}} - M_{\mathrm{max}} \\ M_{\mathrm{max}} &= E_{\mathrm{max}} - e \, \sin E_{\mathrm{max}} \\ E_{\mathrm{max}} &= 2\,\arctan\left(\sqrt{\frac{1-e}{1+e}} \, \tan\frac{f_{\mathrm{max}}}{2}\right) \\ &= -2\,\arctan\left(\sqrt{\frac{1-e}{1+e}} \, \tan\frac{\omega}{2}\right) \\ \end{align} \end{document}
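As a practical aside (mine, not part of the original note), the relation above can be inverted numerically: given an eccentricity and an observed phase difference $\Delta M = M^*_{\mathrm{min}} - M^*_{\mathrm{max}}$, a one-dimensional root finder recovers $\omega$. The Python sketch below simply root-finds the transcendental expression quoted above; all function names are my own, and the bracketing assumes $0 < \omega < \pi/2$ (the relation is symmetric under $\omega \rightarrow \pi - \omega$, so the second branch must be disambiguated with additional information).

\begin{lstlisting}[language=Python]
# Illustrative sketch: invert Delta M(omega, e) from the relation above.
import numpy as np
from scipy.optimize import brentq

def delta_M(omega, e):
    """Delta M = M*_min - M*_max as a function of omega and e (radians)."""
    s = np.sqrt(1.0 - e**2)
    half = (np.arctan(s / (e * np.sin(omega)))
            + 2.0 * e * s * np.sin(omega) / (e**2 - 2.0 + e**2 * np.cos(2.0 * omega)))
    return 2.0 * half

def omega_from_delta_M(dM, e):
    # Bracket on (0, pi/2]; the symmetric solution pi - omega also satisfies the relation.
    return brentq(lambda w: delta_M(w, e) - dM, 1e-8, 0.5 * np.pi)

# Round-trip check: choose omega, compute Delta M, then recover omega.
e, omega_true = 0.3, 1.1
dM = delta_M(omega_true, e)
print(omega_from_delta_M(dM, e), omega_true)   # both ~1.1 rad
\end{lstlisting}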
\setlist[coloritemize]{label=\textcolor{itemizecolor}{\textbullet}} \colorlet{itemizecolor}{.}% Default colour for \item in itemizecolor \setlength{\parindent}{0pt}% Just for this example \colorlet{itemizecolor}{black} \begin{coloritemize} \item Black is Examiners Question \end{coloritemize} \colorlet{itemizecolor}{blue} \begin{coloritemize} \item Blue is my sample question \end{coloritemize} \subsection{Distributed Systems - Exam paper 2018-19 Semester 7} \begin{enumerate} \item Question 1 (25 marks) \begin{itemize} \item Explain the importance of the following terms as they apply to distributed systems: Heterogeneity , Transparency , Scalability , Middleware , Inter-Process Communication (13 marks) \begin{coloritemize} \item my answer \end{coloritemize} \item Compare and contrast each of the following Inter-Process Communication models, identifying the key differences between them. Your answer should provide an example of each model. Use diagrams where appropriate. Remote Procedure Call Model , Object-Oriented Model , Service-Based Model (12 marks) \begin{coloritemize} \item my answer \end{coloritemize} \end{itemize} \item Question 2 (25 marks) \begin{itemize} \item “Marshalling frameworks based on highly structured Unicode formats have largely supplanted serialisation and binary data transfer formats.” You are required to provide a critique of this statement. Your answer should compare Unicode and lower-level marshalling formats in terms of heterogeneity, extensibility and efficiency. (10 marks) \begin{coloritemize} \item my answer \end{coloritemize} \item “XML schema definitions in combination with data binding frameworks can greatly simplify Inter-Process Communication in heterogeneous distributed systems”. Provide a critique of this statement. Discuss the data modelling process, the concept of data binding, an externalisation framework and a utility for automatically generating the code for class definitions from a .xsd your answer.(10 marks) \begin{coloritemize} \item my answer \end{coloritemize} \item Explain using pseudocode (or Java code), how an object may be transferred from one process to another using a Unicode format. Your answer should include the operations performed by the client and the server.(5 marks) \begin{coloritemize} \item my answer \end{coloritemize} \end{itemize} \item Question 3 (25 marks) \begin{itemize} \item Describe the function of the following components of the RMI architecture, using diagrams where appropriate: Remote Objects , RMI URLs and the RMI Registry (13 marks) \begin{coloritemize} \item my answer \end{coloritemize} \item Explain the procedure that is followed when creating a custom interface which specifies how a client process may interact with a Remote object. (3 marks) Write out the Java code for a Remote interface which provides the functionality described below. (10 marks)\\ You have been tasked with creating a RMI Database Service for student records. You may assume that a serializable class definition Student.java is available. The methods in the interface should make use of this serializable class definition where possible.\\ The Database Service will have the following remotely accessible methods:\\ getStudent – this method retrieves a single student record from the database.\\ It takes an integer (student id) as an argument. getAllStudents – this method retrieves all student records from the database.\\ addStudent – this method adds a new student to the database. 
deleteStudent – this method removes a single student record from the database.\\ It takes an integer (student id) as an argument. \begin{coloritemize} \item my answer \end{coloritemize} \item Explain how a pass by reference may be simulated using the RMI framework. Use examples of Java code and/or pseudocode to support your answer. (6 marks) \begin{coloritemize} \item my answer \end{coloritemize} \end{itemize} \item Question 4 (25 marks) \begin{itemize} \item Explain the mapping between HTTP methods and CRUD operations in RESTful architectures. (5 marks) \begin{coloritemize} \item my answer \end{coloritemize} \item Assume that a RESTful service which allows CRUD operations on a student resource is available at the following URL: http://www.examplesite.com/students\\ Explain how a HTTP request may be made to this service to retrieve the details of a student called Jane in XML format. Use a diagram of the client-server interaction along with the text of a sample HTTP request and HTTP response to aid your explanation.(10 marks) \begin{coloritemize} \item my answer \end{coloritemize} \item Explain how annotations may be used in the JAX-RS/Jersey framework to facilitate deployment of a Java Object as a RESTful web resource. Use a sample annotated Java class with one method to support your answer. Your code sample should demonstrate the use of annotations specifying the HTTP method type handled, resource path, path parameters and MIME response type.(10 marks) \begin{coloritemize} \item my answer \end{coloritemize} \end{itemize} \item Question 5 (25 marks) \begin{itemize} \item Compare the following partitioning strategies for distributed databases: Range Partitioning , Hash Partitioning , List Partitioning (9 marks) \begin{coloritemize} \item my answer \end{coloritemize} \item State Brewer’s CAP theorem and explain the meaning of each of the three systematic requirements to which it relates. (9 marks) \begin{coloritemize} \item my answer \end{coloritemize} \item Explain the purpose of WSDL in the context of distributed systems, giving examples. List and describe the four key aspects of a service which is described by WSDL. (7 marks) \begin{coloritemize} \item my answer \end{coloritemize} \end{itemize} \end{enumerate}
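As an extra revision aid for the partitioning part of Question 5 above (my own illustration, not a model answer), the following Python sketch shows how the three partitioning strategies assign a record to a partition; the record layout, key choices and partition boundaries are made up for the example.

\begin{lstlisting}[language=Python]
# Illustrative comparison of range, hash and list partitioning (made-up data).
records = [{"id": 7, "country": "IE"},
           {"id": 1203, "country": "FR"},
           {"id": 2500, "country": "IE"}]

def range_partition(rec, bounds=(1000, 2000)):
    # Partition 0: id < 1000, partition 1: 1000 <= id < 2000, partition 2: id >= 2000.
    return sum(rec["id"] >= b for b in bounds)

def hash_partition(rec, n_partitions=3):
    # Hash of the key modulo the number of partitions (spreads rows evenly).
    return hash(rec["id"]) % n_partitions

def list_partition(rec, lists=({"IE", "GB"}, {"FR", "DE"})):
    # Explicit value lists per partition; None if the value is not listed anywhere.
    for p, values in enumerate(lists):
        if rec["country"] in values:
            return p
    return None

for r in records:
    print(r, range_partition(r), hash_partition(r), list_partition(r))
\end{lstlisting}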
\SetAPI{J-C} \section{cache.query.active} \label{configuration:CacheQueryActive} \ClearAPI Defines whether the query cache should be active, which stores results of queries to the persistence layer. Valid values are "true" and "false". %% GENERATED USAGE REFERENCE - DO NOT EDIT \begin{longtable}{ l l } \hline \textbf{Used in bean} & \textbf{Module} \ \endhead \hline \type{com.koch.ambeth.persistence.filter.QueryResultCache} & \prettyref{module:Persistence} \\ \hline \type{com.koch.ambeth.persistence.filter.QueryResultCache} & \prettyref{module:Persistence} \\ \hline \end{longtable} %% GENERATED USAGE REFERENCE END \type{com.koch.ambeth.persistence.config.PersistenceConfigurationConstants.QueryCacheActive} \begin{lstlisting}[style=Props,caption={Usage example for \textit{cache.query.active}}] cache.query.active=true \end{lstlisting}
\section{Monte Carlo Tree Search (MCTS)} \frame{\tableofcontents[currentsection, hideothersubsections]} \begin{frame} \frametitle{Monte Carlo Tree Search (MCTS)} Based on 2 fundamental concepts: \begin{itemize} \item {\small the true value of an action may be approximated using random simulation} \item these approximated values may be used efficiently to adjust the policy \end{itemize} \begin{figure} \centering \includegraphics[scale=0.2]{and_or_tree} \end{figure} \end{frame} \begin{frame} \frametitle{Monte Carlo Tree Search (MCTS): Selection} \begin{figure} \centering \includegraphics[scale=0.25]{mcts_steps} \end{figure} 1) Selection: \small \begin{itemize} \item starting at the root node, a child selection policy is recursively applied to descend through the tree until the most urgent expandable node is reached \item a node is expandable if it represents a nonterminal state and has unvisited (i.e., unexpanded) children. \end{itemize} \end{frame} \begin{frame} \frametitle{Monte Carlo Tree Search (MCTS): Expansion} \begin{figure} \centering \includegraphics[scale=0.25]{mcts_steps} \end{figure} 2) Expansion: \small \begin{itemize} \item one (or more) child nodes are added to expand the tree, according to the available actions. \end{itemize} \end{frame} \begin{frame} \frametitle{Monte Carlo Tree Search (MCTS): Simulation} \begin{figure} \centering \includegraphics[scale=0.25]{mcts_steps} \end{figure} 3) Simulation: \small \begin{itemize} \item A simulation is run from the new node(s) according to the default policy to produce an outcome. \end{itemize} \end{frame} \begin{frame} \frametitle{Monte Carlo Tree Search (MCTS): Backpropagation} \begin{figure} \centering \includegraphics[scale=0.25]{mcts_steps} \end{figure} 4) Backpropagation: \small \begin{itemize} \item The simulation result is ``backed up'' (i.e., backpropagated) through the selected nodes to update their statistics. \end{itemize} \end{frame} \begin{frame} \frametitle{Monte Carlo Tree Search (MCTS): UCT as tree policy} UCT (UCB1 for Tree): \\ every time a node (action) is to be selected within the existing tree, \\ the choice may be modeled as an independent multiarmed bandit problem \\ \vspace{5mm} \pause A child node $j$ is selected to maximize: \begin{equation*} UCT = \bar{X}_j + 2 C_p \sqrt{\frac{2~ln~n}{n_j}} \end{equation*} \pause \begin{itemize} \item $\bar{X}_j$: the average reward from child node $j$ \pause \item $n$: the number of times the current (parent) node has been visited, \pause \item $n_j$: the number of times child node $j$ has been visited, \pause \item $C_p > 0$: a constant. \end{itemize} \end{frame}
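\begin{frame}[fragile]{Monte Carlo Tree Search (MCTS): UCT selection sketch}
{\small The snippet below is an illustrative Python sketch of UCT-based child selection using the formula $\bar{X}_j + 2 C_p \sqrt{2\ln n / n_j}$ from the previous slide (my own example, not from a specific MCTS library; it assumes each node stores its visit count and total reward, and that the \texttt{listings} package is available):}
\begin{lstlisting}[language=Python, basicstyle=\scriptsize\ttfamily]
# Pick the child maximizing  X_bar_j + 2*Cp*sqrt(2*ln(n)/n_j).
import math

class Node:
    def __init__(self):
        self.children = []      # child Nodes
        self.visits = 0         # n_j
        self.total_reward = 0.0

    def mean_reward(self):      # X_bar_j
        return self.total_reward / self.visits

def uct_select(parent, cp=1.0 / math.sqrt(2.0)):
    # Unvisited children are treated as having infinite UCT value.
    for child in parent.children:
        if child.visits == 0:
            return child
    return max(parent.children,
               key=lambda c: c.mean_reward()
               + 2.0 * cp * math.sqrt(2.0 * math.log(parent.visits) / c.visits))
\end{lstlisting}
\end{frame}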
\subsubsection{Random} \begin{lstlisting}[language=sql] /* function ExNrOfSerie($eId, $sId) */ select * from exercises_in_series where exId = ? and seriesId = ?, [$eId, $sId]; \end{lstlisting}
\LoadClass[notes]{hph} \setauthor{Hrant P.~Hratchian} \settitle{Summer Fortran Workshop: Problem 1 -- The Modified Particle in a Box} \setrunningtitle{Summer Fortran Workshop: Problem 1} \setdate{\today} \setcounter{chapter}{1} % \begin{document} \makeheaderfooter{} \maketitle % % % Section: Introduction and Problem Definition \section{Introduction and Problem Definition} Consider a \emph{modified} one-dimensional particle-in-a-box (\emph{m}PIB) where the potential is $\infty$ for $x\le{}0$ and $x\ge{}L$. In the range from $0$ to $L$, let the potential energy be given by % \begin{equation}\label{Eq:mPIBPotential} \displaystyle V\left(x\right) = b x\,\,\,\,\,\,\,\,\,\,0<x<L \end{equation} % Using atomic units, write a Fortran program that solves for the eigenfunctions and eigenvalues of the first five states of this system. Use the linear variational method to carry out this numerical problem. The standard one-dimensional PIB eigenfunctions should be used as your basis set. The PIB problem is a standard model system case studied in quantum mechanics. A brief overview of the model and key results are described below in Section \ref{Section:TheoreticalBackground}. This problem introduces a non-zero potential inside the box. This programming problem solves for this mPIB using the linear variational method, which is also described in Section \ref{Section:TheoreticalBackground}. The program should take a set of six input arguments from the command line: mass, box length $L$, slope parameter $b$, and the number of basis functions to be used in the calculation. The program should output the eigenvalues and expansion coefficients for the ground and first excited state. % % Section: Theoretical Background \section{Theoretical Background}\label{Section:TheoreticalBackground} This coding problem relies on two theoretical background topics: (1) the particle-in-a-box problem; and (2) the linear variational method. % % Subsection: Particle-in-a-Box \subsection{Particle-in-a-Box} As mentioned above, the one-dimensional particle-in-a-box (PIB) is a model system where the potential is $\infty$ for $x \le{}0$ and $x \ge{} L$. Most derivations begin by dividing the coordinate space into three regions: Region I ($x\le{}0$), Region II ($0<x<L$), and Region III ($x\ge{}L$). Regions I and III the potential energy is $\infty$ and it is trivial to show that the wave function vanishes. In Region II, a set of discrete quantum states are found. A quantum number, $n$, is determined to have allowed values $1, 2, 3, \cdots{}$. % \begin{equation} \displaystyle \braket{x}{n} = \psi(x) = \left(\frac{2}{L}\right)^{\sfrac{1}{2}}\sin{\left(\frac{n \pi}{L}x\right)} \,\,\,\,\,\,\,\,\,\,0<x<L\,\,\,\,\,\,\,\,\,\,n=1,2,3,\cdots{} \end{equation} % and the quantized energy levels $\left\{E_n\right\}$ are % \begin{equation}\label{Eq:PIBenergies} \displaystyle{} E_n = \frac{\pi^2}{2 m L^2}n^2\,\,\,\,\,\,\,\,\,\,n=1,2,3,\cdots{} \end{equation} % Note that the energies in Eq.~(\ref{Eq:PIBenergies}) are given in atomic units ($\hbar=1$). % % Subsection: Linear Variational Method \subsection{Linear Variational Method} The linear variational method is used to solve the Schr\"{o}dinger equation numerically and is the central technique used in this coding problem set. It is an especially useful method when a basis set can be well-defined for the physical system of interest, particularly if the basis set can be systematically increased and refined. The basis set used must satisfy three general requirements. 
First, the members of the basis set should satisfy the same boundary conditions expected for the exact solutions of the Schr\"{o}dinger equation being studied. Second, it must be possible to evaluate matrix elements of the form
%
\begin{equation}
\displaystyle H_{\alpha\beta} = \braketop{\alpha}{\mathcal{H}}{\beta}
\end{equation}
%
where \ket{\alpha} and \ket{\beta} are members of the chosen basis set and $\mathcal{H}$ is the Hamiltonian. Third, the basis set should either be formally complete or be systematically expandable such that numerical experimentation can sufficiently establish approximate completeness.
Given these requirements of the basis set, the development of the linear variational method begins with the Schr\"{o}dinger equation
%
\begin{equation}\label{Eq:SchrodingerEquation}
\displaystyle \mathcal{H}\ket{\Psi} = E\ket{\Psi}
\end{equation}
%
where $\mathcal{H}$ is the Hamiltonian, \ket{\Psi} is the (ground state) eigen-ket (i.e., the wave function), and $E$ is the energy eigenvalue corresponding to \ket{\Psi}. Using a (numerically) complete basis obeying the same boundary conditions as the model potential, which we denote as $\left\{\chi_1, \chi_2, \cdots{}\right\}$, the wave function is expanded as
%
\begin{equation}\label{Eq:basisExpansion}
\displaystyle \ket{\Psi} \approx{} \sum_n{c_n\ket{\chi_n}}
\end{equation}
%
Substituting Eq.~(\ref{Eq:basisExpansion}) into Eq.~(\ref{Eq:SchrodingerEquation}) yields
%
\begin{equation}\label{Eq:SchrodingerEquationExpansion}
\displaystyle \mathcal{H}\ket{\sum_n{c_n\chi_n}} = E\ket{\sum_n{c_n\chi_n}}
\end{equation}
%
Multiplying on the left by another member of the complete set and invoking the interchange theorem of summation and integration, Eq.~(\ref{Eq:SchrodingerEquationExpansion}) becomes
%
\begin{equation}\label{Eq:SchrodingerEquationMatrix1}
\displaystyle \sum_n{c_n \braketop{m}{\mathcal{H}}{n}} = E\sum_n{c_n\braket{m}{n}}
\end{equation}
%
where index labels $m$ and $n$ have been used to denote \ket{\chi_m} and \ket{\chi_n}. In the application of the linear variational method used here, the basis set will be the conventional PIB eigenfunctions. Noting that this is an orthonormal basis set, Eq.~(\ref{Eq:SchrodingerEquationMatrix1}) can be written as
%
\begin{equation}\label{Eq:SchrodingerEquationMatrix1Ortho}
\displaystyle \sum_n{c_n \braketop{m}{\mathcal{H}}{n}} = E\sum_n{c_n\delta_{mn}}
\end{equation}
%
and, in matrix form, as
%
\begin{equation}\label{Eq:SchrodingerEquationMatrix2}
\displaystyle
\begin{aligned}
\sum_n{H_{mn}c_{n}} ={}& Ec_m \\
\mathbf{Hc} ={}& E\mathbf{c}
\end{aligned}
\end{equation}
%
Equation (\ref{Eq:SchrodingerEquationMatrix2}) is an eigensystem. Once the Hamiltonian matrix elements are evaluated, a standard eigenvalue decomposition algorithm can be used to find a set of eigenvectors (the expansion coefficients $\mathbf{c}$) and eigenvalues (the expectation energy corresponding to each eigenvector). The eigenvector with the lowest corresponding eigenvalue is the linear variational ground state solution to Eq.~(\ref{Eq:SchrodingerEquation}). The other eigenvectors (with eigenvalues greater than the lowest) are the linear variational method solutions for excited states. Generally, a well-chosen basis set that can be systematically increased will converge in the number of basis functions for the ground state and low-energy excited states before converging for higher-energy excited states.
%How does the ground state energy vary as a function of the number of basis functions?
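The entire procedure is compact enough to prototype before writing the Fortran code. The sketch below (illustrative only, written with Python and NumPy rather than Fortran, and using arbitrary example values for the mass, $L$, $b$, and the basis size) builds $H_{mn} = E_n\delta_{mn} + b\braketop{m}{x}{n}$ in the PIB basis by numerical quadrature and diagonalizes it; re-running it with a larger basis illustrates the convergence behavior discussed above.
\begin{verbatim}
# Illustrative NumPy prototype of the linear variational procedure
# (the assignment itself asks for a Fortran program).  Atomic units.
import numpy as np

mass, L, b, nbasis = 1.0, 1.0, 1.0, 10        # example parameters only
x = np.linspace(0.0, L, 2001)                 # quadrature grid

def phi(n):                                   # PIB basis function <x|n>
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

H = np.zeros((nbasis, nbasis))
for m in range(1, nbasis + 1):
    for n in range(1, nbasis + 1):
        if m == n:                            # PIB part: E_n = n^2 pi^2 / (2 m L^2)
            H[m - 1, n - 1] += n**2 * np.pi**2 / (2.0 * mass * L**2)
        # potential part: b <m|x|n>, evaluated by quadrature
        H[m - 1, n - 1] += b * np.trapz(phi(m) * x * phi(n), x)

energies, coeffs = np.linalg.eigh(H)          # eigenvalues in ascending order
print(energies[:5])                           # lowest five mPIB energies
\end{verbatim}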
To explore this point, begin by using the two lowest energy states of the standard particle-in-a-box system as your set of basis functions. Then, use the first three states of the standard particle-in-a-box as your basis set. Follow this by numerical tests using four, five, six, seven, eight, nine, and ten states. Plot the ground state energy as a function of the number of basis functions used. % %Repeat the previous experiment with the potential function changed to $V\left(x\right) = 10 m x$. % %Plot the ground state wavefunction for the previous two problems using an appropriately converged basis set size. Comment on the effect of the added potential on the shape of the ground state wavefunction. % \end{document}
{ "alphanum_fraction": 0.7601238071, "avg_line_length": 61.5396825397, "ext": "tex", "hexsha": "9b06bc61184e54fae556312b23aae57fa9ee19f8", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-06-26T21:05:45.000Z", "max_forks_repo_forks_event_min_datetime": "2020-06-26T21:05:45.000Z", "max_forks_repo_head_hexsha": "77989f555497c14711c4aa1817540fdc3131eee4", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "MQCPack/summerCodingWorkshop", "max_forks_repo_path": "Exercises_Workshop2/ProblemSet01-ModifiedPIB/tex/problem01.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "77989f555497c14711c4aa1817540fdc3131eee4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "MQCPack/summerCodingWorkshop", "max_issues_repo_path": "Exercises_Workshop2/ProblemSet01-ModifiedPIB/tex/problem01.tex", "max_line_length": 816, "max_stars_count": 1, "max_stars_repo_head_hexsha": "77989f555497c14711c4aa1817540fdc3131eee4", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "MQCPack/summerCodingWorkshop", "max_stars_repo_path": "Exercises_Workshop2/ProblemSet01-ModifiedPIB/tex/problem01.tex", "max_stars_repo_stars_event_max_datetime": "2020-06-29T16:24:28.000Z", "max_stars_repo_stars_event_min_datetime": "2020-06-29T16:24:28.000Z", "num_tokens": 2099, "size": 7754 }
% % The first command in your LaTeX source must be the \documentclass command. \documentclass[sigconf]{acmart} \usepackage[ruled,vlined]{algorithm2e} % % \BibTeX command to typeset BibTeX logo in the docs \AtBeginDocument{% \providecommand\BibTeX{{% \normalfont B\kern-0.5em{\scshape i\kern-0.25em b}\kern-0.8em\TeX}}} % Rights management information. % This information is sent to you when you complete the rights form. % These commands have SAMPLE values in them; it is your responsibility as an author to replace % the commands and values with those provided to you when you complete the rights form. % % These commands are for a PROCEEDINGS abstract or paper. \copyrightyear{2018} \acmYear{2018} \setcopyright{acmlicensed} \acmConference[Woodstock '18]{Woodstock '18: ACM Symposium on Neural Gaze Detection}{June 03--05, 2018}{Woodstock, NY} \acmBooktitle{Woodstock '18: ACM Symposium on Neural Gaze Detection, June 03--05, 2018, Woodstock, NY} \acmPrice{15.00} \acmDOI{10.1145/1122445.1122456} \acmISBN{978-1-4503-9999-9/18/06} % % These commands are for a JOURNAL article. %\setcopyright{acmcopyright} %\acmJournal{TOG} %\acmYear{2018}\acmVolume{37}\acmNumber{4}\acmArticle{111}\acmMonth{8} %\acmDOI{10.1145/1122445.1122456} % % Submission ID. % Use this when submitting an article to a sponsored event. You'll receive a unique submission ID from the organizers % of the event, and this ID should be used as the parameter to this command. %\acmSubmissionID{123-A56-BU3} % % The majority of ACM publications use numbered citations and references. If you are preparing content for an event % sponsored by ACM SIGGRAPH, you must use the "author year" style of citations and references. Uncommenting % the next command will enable that style. %\citestyle{acmauthoryear} % % end of the preamble, start of the body of the document source. \begin{document} % % The "title" command has an optional parameter, allowing the author to define a "short title" to be used in page headers. \title{Real-Time Eye Tracking System with Parallel Image Processing} % % The "author" command and its associated commands are used to define the authors and their affiliations. % Of note is the shared affiliation of the first two authors, and the "authornote" and "authornotemark" commands % used to denote shared contribution to the research. \author{Bo-Chun Chen} \email{[email protected]} \affiliation{% \institution{Credit Program on Colleges of Electrical and Computer Engineering and Computer Science \\The Center for Continuing Education and Training \\ National Chiao Tung University} \streetaddress{No. 90, Jinshan 11th St} \city{HsinChu} \country{Taiwan}} % By default, the full list of authors will be used in the page headers. Often, this list is too long, and will overlap % other information printed in the page headers. This command allows the author to define a more concise list % of authors' names for this purpose. \renewcommand{\shortauthors}{Bo-Chun Chen} % % The abstract is a short summary of the work to be presented in the article. \begin{abstract} Eye tracking system has high potential as a natural user interface device; however, the mainstream systems are designed with infrared illumination, which may be harmful for human eyes. In this paper, a real-time eye tracking system is proposed without Infrared Illumination. To deal with various lighting conditions and reflections on the iris, the proposed system is based on a continuously updated color model for robust iris detection. 
Moreover, the proposed algorithm employs both the simplified and the original eye images to achieve a balance between robustness and accuracy. Multiple parallelism techniques including TBB, CUDA, POSIX and OpenMP can be used to speed up the process. The experimental results are expected to show that the proposed system can capture the movement of the user's eye with a standard deviation smaller than 10 pixels compared to ground truth, and that the processing speed can be up to 30 fps.
\end{abstract}
%
% Keywords. The author(s) should pick words that accurately describe the work being
% presented. Separate the keywords with commas.
\keywords{Eye tracking, eye localization, eye movement, Parallel Programming, POSIX, CUDA, OpenMP, TBB}
%
% A "teaser" image appears between the author and affiliation information and the body
% of the document, and typically spans the page.
%\begin{teaserfigure}
%  \includegraphics[width=\textwidth]{sampleteaser}
%  \caption{Seattle Mariners at Spring Training, 2010.}
%  \Description{Enjoying the baseball game from the third-base seats. Ichiro Suzuki preparing to bat.}
%  \label{fig:teaser}
%\end{teaserfigure}
%
% This command processes the author and affiliation and title information and builds
% the first part of the formatted document.
\maketitle

\section{Introduction}
\label{sec:intro}
Eye tracking is a highly promising technique for natural user interfaces, especially for wearable devices. Most existing wearable eye trackers employ infrared illumination to achieve robust performance through stable lighting conditions, with infrared light sources placed close to the eyes. Since infrared light is invisible, it does not distract users, and the pupil becomes obvious and easy to detect under infrared illumination. However, infrared light sources illuminating the eyes at such a close distance may cause harm to the eyes. Radiation in the near infrared range is the most hazardous and is transmitted by the optical components of the eye \cite{seeber2007light}. It has a higher transmission rate and a lower absorption rate than visible light, and near infrared light can eventually reach the retina, while visible light is always absorbed by the cornea \cite{Teaching}. If one is overexposed to infrared light, it may cause a thermal retina burn \cite{doi:10.1080/713820456}.

In this work, we propose a new solution with parallel programming for an eye tracking system without infrared illumination. To deal with the variation of uncontrolled lighting conditions while maintaining accuracy, several strategies are proposed. First, an iris color model is utilized for robust iris detection, which is continuously updated to address various lighting conditions. Second, the proposed system employs the simplified and the original eye images at the same time, as a kind of multi-scale algorithm. The simplified image is employed to detect a coarse seed point and to mask out impossible feature points. Third, a modified Starburst algorithm with an iris mask is proposed for eye images without infrared illumination. The first and second parts include several filtering and morphological operations, which can be sped up by parallelism, while the third part is a sequential procedure with limited parallelism.

\section{Statement of the problem}
Most applications of an eye tracking system map information about the eye to a gaze point, which can be used to build an attention model or a user interface.
The features of an eye must be stable and identical when we gaze at the same point, and different when we gaze at different points; that is, they must have high intra-class similarity and low inter-class similarity. Past research detects the center of the pupil as the feature of an eye, exploiting the fact that infrared light can make the image of the pupil clear, noise-free and thus very easy to detect\cite{5068882}\cite{Morimoto2000331}, as shown in Fig.~\ref{fig:infrared}(a). However, without infrared illumination, the most prominent feature of an eye is the limbus, which is the boundary between the sclera and the iris. Fig.~\ref{fig:infrared}(b) shows an eye image without infrared illumination. Due to the reflection in the iris, it is hard to detect the pupil. The only choice is to detect the cornea center, since it is concentric with the pupil. Unfortunately, the iris is always cropped by the eyelids and the corners of the eye. If one wants to detect the entire iris region, the best way to do so is to open the eyes as wide as possible, which would make the system too tiring to use. Instead, we only need a stable and distinguishable point to represent the position of an eye. So the problem is to find a way to represent the center of the iris, with high intra-class similarity and low inter-class similarity, as our feature of an eye.
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width=4cm]{../Fig/infrared_spectrum.png} &
\includegraphics[width=4cm]{../Fig/visible_spectrum.png} \\
(a) & (b)
\end{tabular}
\caption{(a) With infrared illumination. (b) Without infrared illumination.}
\label{fig:infrared}
\end{figure}

\section{Proposed approaches}
\begin{figure}
\begin{minipage}[b]{1.0\linewidth}
  \centering
%  \centerline{\includegraphics[width=8.2cm]{../Fig/Overview_Algorithm_highlight.png}}
  \centerline{\includegraphics[scale=0.6]{../Fig/System_Overview.png}}
%  \vspace{2.0cm}
\end{minipage}
\caption{System overview}
\label{fig:System overview}
\end{figure}
%========================================
Fig.~\ref{fig:System overview} shows the overview of the whole system. The input of the system is an eye image. Unlike eye images acquired under infrared illumination, eye images under normal lighting conditions usually contain noise from lighting variation and reflections of the environment, as shown in Fig.~\ref{fig:preprocessing}(a). Therefore, the first step is image pre-processing, which aims to filter out the distortion and noise of the input eye image. Since the color information is critical to the whole system, we correct the color with automatic white balance to generate the image $I_E$. Moreover, a simplified gray-scale eye image $\hat{I_E}$ is also generated with histogram equalization, a morphological gray opening operation, and a Gaussian blurring operation to reduce the reflection inside the iris, as shown in Fig.~\ref{fig:preprocessing}(b). As mentioned in Section~\ref{sec:intro}, both images are employed in the eye localization as a kind of multi-scale processing: the simplified image is used to robustly generate rough detection and masking information, while the white-balanced image is utilized to generate an accurate eye location. An iris color model is also maintained in this system to address various lighting conditions. As the iris color model keeps improving, the estimated eye position, the virtual center, also becomes more accurate, and vice versa. The details of the algorithm are described in the following subsections.
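For concreteness, the pre-processing step just described can be sketched with OpenCV as follows. This is an illustrative Python version only; the actual system is implemented in C++, and the gray-world white balance and the kernel sizes used here are assumptions rather than the exact parameters of our implementation.
\begin{verbatim}
# Illustrative Python/OpenCV sketch of the pre-processing step.
import cv2
import numpy as np

def preprocess(eye_bgr):
    # crude gray-world white balance -> I_E (assumed method)
    means = eye_bgr.reshape(-1, 3).mean(axis=0)
    i_e = np.clip(eye_bgr * (means.mean() / means), 0, 255).astype(np.uint8)

    # simplified gray-scale image -> \hat{I_E}
    gray = cv2.cvtColor(i_e, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                          # histogram equalization
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    gray = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)  # gray opening
    simplified = cv2.GaussianBlur(gray, (9, 9), 0)         # Gaussian blurring
    return i_e, simplified
\end{verbatim}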
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width=4cm]{../Figures/visible_eye.png} &
\includegraphics[width=4cm]{../Figures/simplified.png} \\
(a) & (b)
\end{tabular}
\caption{(a) White-balanced eye image $I_E$. (b) Simplified eye image $\hat{I_E}$.}
\label{fig:preprocessing}
\end{figure}

\subsection{Eye Localization}
The purpose of eye localization is to find a stable feature that can represent the location of an eye with high intra-class similarity and low inter-class similarity. We define the stable feature point as a ``virtual center.'' The virtual center in our system is not the location of the pupil but the center of the region surrounded by the feature points on the eyelids and the left/right sides of the limbus.
%We find out that the eyelids can be used with left and right sides of limbus to define the virtual center.
The method of feature point detection is based on the Starburst algorithm \cite{Li:2005:SHA:1099539.1099986}, which was originally designed to detect the boundary of the pupil under infrared illumination. Because the pupil is not visible under visible light, we propose to change the detection target to the limbus and the eyelids. In order to make the Starburst algorithm robust enough to detect feature points under visible light and able to adapt to varying lighting conditions, two distinct features are developed and added.

The first added feature is the automatic seed point finding algorithm. In the original Starburst algorithm, the seed point is picked as the center point of a frame. In order to find the seed point automatically and in an illumination-invariant manner, a color model based on a 2D h-s (hue-saturation) histogram is constructed for the iris region. During initialization, the rough iris region is estimated by calculating the moment center of the binarized version of $\hat{I_E}$ followed by a region growing process. Within the rough iris region, the h-s histogram of $I_E$ is calculated as the initial iris color model. After the iris color model is established, for an input eye image $I_E$, the back projection of the iris color model is derived by replacing the value of each pixel $(i,j)$ with the corresponding normalized bin value in the h-s histogram. That is,
\begin{equation}
I_{BP}(i,j) = H(h_{i,j}, s_{i,j}),
\end{equation}
where a higher value means that the corresponding position has a higher probability of being part of the iris region, as shown in Fig.~\ref{fig:BPF}(b). The seed point is then generated with binarization and moment center calculation. In the following frames, in order to deal with various lighting conditions, the iris color model is updated every specified number of frames. To do so, a rough iris mask $M_{RI}$ is generated by finding the convex hull of the binarized $\hat{I_E}$, as shown in Fig.~\ref{fig:BPF}(c). A new h-s histogram of $I_E$ is calculated inside the mask $M_{RI}$. The back projection $I_{BP}$ is then generated, together with an index called the iris ratio, which is defined as
\begin{equation}
\mbox{Iris Ratio} = \frac{\eta}{\sigma},
\end{equation}
where $\eta$ is the sum of the iris-point probabilities inside the mask $M_{RI}$, and $\sigma$ is the sum of the iris-point probabilities outside the mask. A greater iris ratio indicates a more representative model, and we discard 2D h-s histogram models with a low iris ratio.
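As a concrete illustration of this first feature, the h-s histogram model and its back projection $I_{BP}$ can be sketched as follows. This is an illustrative Python/OpenCV version of the idea only; the bin counts and ranges are assumptions, and the actual system is implemented in C++.
\begin{verbatim}
# Illustrative sketch of the 2D hue-saturation model and its back projection.
import cv2

def iris_backprojection(i_e_bgr, rough_iris_mask):
    hsv = cv2.cvtColor(i_e_bgr, cv2.COLOR_BGR2HSV)
    # 2D hue-saturation histogram of the (rough) iris region
    hist = cv2.calcHist([hsv], [0, 1], rough_iris_mask, [30, 32],
                        [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    # replace every pixel by its normalized bin value -> I_BP
    i_bp = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)
    return i_bp
\end{verbatim}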
The second added feature is the iris mask, which is used to mask out the reflection in the iris region and other impossible feature points to enhance the Starburst algorithm. As shown in Figs.~\ref{fig:ova}(a)--(c), the iris region is estimated with projections of the simplified image $\hat{I_E}$. Eyelid fitting is then done by modeling the upper and lower eyelids as two parabolas. With the simplified image $\hat{I_E}$, the valley-peak field, Fig.~\ref{fig:ova}(f), is generated by calculating the convex hull of the extracted iris rectangle together with the biggest circle in it, as shown in Fig.~\ref{fig:ova}(e), and the pixels with a large horizontal gradient but a low vertical gradient, as shown in Fig.~\ref{fig:ova}(d). We then use the eyelid feature points outside the valley-peak field to fit upward and downward parabola functions, as shown in Fig.~\ref{fig:ova}(g). The intersection of the region inside the eyelids and the estimated iris region, as in Fig.~\ref{fig:ova}(c), forms the iris mask $M_I$ for our modified Starburst algorithm, described as Algorithm~\ref{alg:starburst}, and the feature points are then generated as shown in Fig.~\ref{fig:ova}(h), where the navy blue and light blue feature points represent the inliers of the upper and lower eyelid parabolas, respectively, and the green feature points represent the limbus feature points detected by our modified Starburst algorithm (Algorithm~\ref{alg:starburst}). All the feature points are then used to form a convex hull, as shown in Figs.~\ref{fig:ova}(i)--(j), which is how the accurate iris mask $M_{AI}$ is generated. Finally, the location of the virtual center $(X_e, Y_e)$ is derived from the moment center of the inverted version of $\hat{I_E}$ masked by $M_{AI}$, as shown in Figs.~\ref{fig:ova}(k)--(l) and the following equations.
\begin{align}
m_{i, j} &= \sum_{x,y}{(255-\hat{I_E}) \cdot M_{AI} \cdot x^i y^j},\\
X_e &= m_{1,0}/m_{0,0}, \quad Y_e = m_{0,1}/m_{0,0}
\end{align}

\begin{figure}[t]
\begin{minipage}[b]{0.32\linewidth}
  \centering
  \centerline{\includegraphics[width=2.5cm]{../Fig/IrisModelUpdating/ImageForValidTesting.jpg}}
%  \vspace{1.5cm}
  \centerline{(a)}\medskip
\end{minipage}
\hfill
\begin{minipage}[b]{0.32\linewidth}
  \centering
  \centerline{\includegraphics[width=2.5cm]{../Fig/IrisModelUpdating/BackProjection.jpg}}
%  \vspace{1.5cm}
  \centerline{(b)}\medskip
\end{minipage}
%
\hfill
\begin{minipage}[b]{0.32\linewidth}
  \centering
  \centerline{\includegraphics[width=2.5cm]{../Fig/IrisModelUpdating/IrisRegion.jpg}}
%  \vspace{1.5cm}
  \centerline{(c)}\medskip
\end{minipage}
%
\caption{Hue-saturation histogram model refreshment: (a) Input eye image. (b) Back projection $I_{BP}$.
(c) Rough iris mask $M_{RI}$.} \label{fig:BPF} \end{figure} \begin{figure}[t] \begin{minipage}[b]{.32\linewidth} \centering \centerline{\includegraphics[width=3cm]{../Fig/IrisRegionExtraction_GPF/Deriv_hy_result.png}} % \vspace{1.5cm} \centerline{(a)}\medskip \end{minipage} \hfill \begin{minipage}[b]{0.32\linewidth} \centering \centerline{\includegraphics[width=2.5cm]{../Fig/IrisRegionExtraction_GPF/Deriv_vx_result.png}} % \vspace{1.5cm} \centerline{(b)}\medskip \end{minipage} % \hfill \begin{minipage}[b]{0.32\linewidth} \centering \centerline{\includegraphics[width=2.5cm]{../Fig/IrisRegionExtraction_GPF/Resulted_irisRegionExtraction.jpg}} % \vspace{1.5cm} \centerline{(c)}\medskip \end{minipage} % \vfill \begin{minipage}[b]{0.32\linewidth} \centering \centerline{\includegraphics[width=2.5cm]{../Fig/EyelidDetection/RefinedGradX_EyeRegion.jpg}} % \vspace{1.5cm} \centerline{(d)}\medskip \end{minipage} \hfill \begin{minipage}[b]{0.32\linewidth} \centering \centerline{\includegraphics[width=2.5cm]{../Fig/EyelidDetection/RefinedGradX_EyeRegion_Plus_IrisCenter.jpg}} % \vspace{1.5cm} \centerline{(e)}\medskip \end{minipage} \hfill \begin{minipage}[b]{0.32\linewidth} \centering \centerline{\includegraphics[width=2.5cm]{../Fig/EyelidDetection/ValleyPeakField.jpg}} % \vspace{1.5cm} \centerline{(f)}\medskip \end{minipage} \hfill \begin{minipage}[b]{0.32\linewidth} \centering \centerline{\includegraphics[width=2.5cm]{../Fig/EyelidDetection/Eyelid_Result.jpg}} % \vspace{1.5cm} \centerline{(g)}\medskip \end{minipage} \hfill \begin{minipage}[b]{0.32\linewidth} \centering \centerline{\includegraphics[width=2.5cm]{../Fig/ClipFeaturePts/ClipFeaturePointsDisp.jpg}} % \vspace{1.5cm} \centerline{(h)}\medskip \end{minipage} \hfill \begin{minipage}[b]{0.32\linewidth} \centering \centerline{\includegraphics[width=2.5cm]{../Fig/ExactIrisCenter/RefinedFeaturePts.jpg}} % \vspace{1.5cm} \centerline{(i)}\medskip \end{minipage} \begin{minipage}[b]{0.32\linewidth} \centering \centerline{\includegraphics[width=2.5cm]{../Fig/ExactIrisCenter/FeaturePoints_LimbusFtPtsConvexHull.jpg}} % \vspace{1.5cm} \centerline{(j)}\medskip \end{minipage} \hfill \begin{minipage}[b]{0.32\linewidth} \centering \centerline{\includegraphics[width=2.5cm]{../Fig/ExactIrisCenter/MomentCenterByGrayLevel.jpg}} % \vspace{1.5cm} \centerline{(k)}\medskip \end{minipage} \hfill \begin{minipage}[b]{0.34\linewidth} \centering \centerline{\includegraphics[width=2.5cm]{../Fig/ExactIrisCenter/EyePosition_CenterResult.jpg}} % \vspace{1.5cm} \centerline{(l)}\medskip \end{minipage} \caption{(a) Input image and the horizontal projection. (b) Input image and the vertical projection. (c) Extracted iris rectangle. (d) High x-directed gradient but low y-directed gradient pixels. (e) Fig.~\ref{fig:ova}(d) plus extracted biggest circle. (f) Valley-Peak field. (g) Fitted parabolas of eyelids. (h) Detected inliers of both parabolas of eyelids and limbus feature points. (i) All the detected feature points. (j) Accurate iris mask $M_{AI}$. (k) Inverted iris masked simplifed eye image. 
(l) Virtual center.}
\label{fig:ova}
\end{figure}

\begin{algorithm}[t]
\KwIn{Preprocessed eye image $I_E$ and iris mask $M_{I}$}
\KwOut{Feature points surrounding the iris}
initialization;\\
\While{seed point does not converge}{
 clear F, the set of final feature points\;
 Stage 1:\\
 Emit rays radially from the seed point with angle ranging in $[0,2\pi]$\;
 \For{Each ray}{
  Move outward along the ray from the seed point\;
  Calculate the derivative of intensity at each pixel\;
  \If{Outside the iris mask $\wedge$ derivative $>$ 0}
  {
   Push feature point to F\;
  }
 }
 Stage 2:\\
 \For{Each candidate feature point detected in Stage 1}{
  Estimate the angle of the line from the feature point to the seed point, called $Ang_{fs}$\;
  Emit rays from the feature point back to the seed point with angle ranging in $[Ang_{fs} - \pi/12 , Ang_{fs} + \pi/12]$\;
  \For{Each ray}{
   Move backward along the ray from the feature point\;
   Calculate the derivative of intensity at each pixel\;
   \If{Outside the iris mask $\wedge$ derivative $>$ 0}{
    Push feature point to F\;
   }
  }
 }
 Seed point $\leftarrow$ Geometric center of feature points\;
}
\caption{Modified Starburst Algorithm}
\label{alg:starburst}
\end{algorithm}

\subsection{Parallelism}
The parallelism of the eye tracking system can be divided into two main categories: `parallel in block' and `parallel in channel', as shown in Fig.~\ref{fig:parallelism_tiling}(a)--(b). If the operation is identical for each pixel (such as Gaussian filtering, RGB-to-gray conversion and morphological operations in the pre-processing block; iris color model back projection and binarization in the iris color model updating block; and masking and simplified projection in the eye localization block), then we can divide one image frame into several blocks and use the `parallel in block' technique, as shown in Fig.~\ref{fig:parallelism_tiling}(a). The number of blocks can be set according to the number of CPU or GPU cores in the computer, for example, 4 CPU cores in Fig.~\ref{fig:parallelism_tiling}(a). Each block can be processed through the image processing pipeline individually until the next image processing stage needs information from other blocks. A longer pipeline sequence may yield better efficiency, since forking and joining threads incurs some overhead.

If we are dealing with color image frames, which have multiple channels, then we can further apply the `parallel in channel' technique when the operation on all channels is identical, such as automatic white balancing in the pre-processing block and RGB-to-HSV conversion in the iris color model updating block. As shown in Fig.~\ref{fig:parallelism_tiling}(b), for example, three channels can be propagated through the image processing pipeline individually until the next image processing stage needs information from other channels. Moreover, within one channel we can further deploy `parallel in block' when the operation on all pixels is identical, for example, the HSV histogram calculation in the iris color model updating block.

\begin{figure}[t]
\begin{minipage}[b]{0.8\linewidth}
  \centering
  \centerline{\includegraphics[width=8cm]{../Fig/Parallel_Tiling.png}}
%  \vspace{1.5cm}
  \centerline{(a)}\medskip
\end{minipage}
%
\vfill
\begin{minipage}[b]{0.8\linewidth}
  \centering
  \centerline{\includegraphics[width=8cm]{../Fig/Parallel_Tiling_Color.png}}
%  \vspace{1.5cm}
  \centerline{(b)}\medskip
\end{minipage}
\caption{(a) Parallel in block.
(b) Parallel in channel.}
\label{fig:parallelism_tiling}
\end{figure}

\section{Language selection}
C++ with the OpenCV library is chosen to implement this work because of its speed, its efficiency and its support for multiple parallel libraries. OpenMP and TBB with OpenCV will be used to fork and join parallel tasks across CPU cores. CUDA with OpenCV will also be considered to parallelize some tasks on GPU cores. C++ along with OpenCV will also be used to implement the I/O and GUI components.

\section{Related work}
%According to having infrared illumination or not, the method can be classified into two categories: limbus detection, and pupil detection.
Under infrared illumination, the boundary between the pupil and the cornea is very distinguishable, as shown in Fig.~\ref{fig:infrared}(a). However, without infrared illumination, the pupil is not visible in most situations, as in Fig.~\ref{fig:infrared}(b). The only noteworthy feature is then the boundary between the cornea and the sclera, in other words, the limbus. Ryan et al.\cite{Ryan:2008:LSW:1344471.1344487} switch between limbus detection and pupil detection depending on the brightness of the environment. They use pupil detection in bright light, and limbus detection in dim light. The Starburst algorithm\cite{Li:2005:SHA:1099539.1099986} and ellipse fitting with the RANSAC technique are used to find the pupil and cornea in their work.

Ebisawa\cite{5068882} uses the dark and bright pupil difference technique to detect the pupil, as shown in Fig.~\ref{fig:Difference of bright and dark pupil technique}. The bright pupil image is generated by turning on the ring-like infrared light sources attached around the aperture of the camera. The dark pupil image is generated by turning on other ring-like infrared light sources placed away from the aperture. After obtaining the difference image, applying a threshold and extracting the connected component gives the pupil contour\cite{Morimoto2000331}. The accuracy of this kind of work is in the range of 5--20 pixels.
\begin{figure}
\begin{minipage}[b]{1.0\linewidth}
  \centering
  \centerline{\includegraphics[scale=0.3]{../Fig/Dark_bright_pupil_difference_technique.png}}
\end{minipage}
\caption{Difference of bright and dark pupil technique\cite{5068882}}
\label{fig:Difference of bright and dark pupil technique}
\end{figure}

\section{Statement of expected results}
The current system is implemented in Visual C++ with the OpenCV library \cite{opencv_library} on a personal computer with a 4.0-GHz CPU and 24-GB RAM. The size of the monitor is 22 inches and the aspect ratio is 16:10 $\left(47.39\times29.61\;cm^2\right)$. The resolution of an eye image is 640$\times$480, and the resolution of the screen is 1680$\times$1050. The user sits 50 cm in front of the monitor, which limits the field of view to the size of the monitor. The visual angle ranges horizontally over $\left[-25.35, 25.35\right]$ degrees and vertically over $\left[-16.49, 16.49\right]$ degrees. The experiments conducted with the current system show that the processing speed of the whole system is 10--11 fps, and the average accuracy is about 10 pixels, which is comparable to systems using multiple infrared light sources and multiple cameras.
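The expected gains below come from applying the `parallel in block' scheme of the previous section. As a rough illustration of that scheme (a Python/multiprocessing stand-in for the planned C++ OpenMP/TBB/CUDA implementation, with an arbitrary four-block split), a frame could be processed as follows:
\begin{verbatim}
# Illustrative 'parallel in block' sketch: split a frame into horizontal
# strips and apply the same per-pixel filter to each strip in parallel.
# A real implementation would overlap the strips to handle filter borders.
import cv2
import numpy as np
from multiprocessing import Pool

def process_block(block):
    return cv2.GaussianBlur(block, (9, 9), 0)

def process_frame(frame, n_blocks=4):          # 4 blocks for a 4-core CPU
    blocks = np.array_split(frame, n_blocks, axis=0)
    with Pool(n_blocks) as pool:
        processed = pool.map(process_block, blocks)
    return np.vstack(processed)
\end{verbatim}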
After parallelization, a 1.6 times speedup is expected with 2 CPU cores, which is 16 fps, and a 3.2 times speedup with 4 CPU cores, which is around 30 fps, while the accuracy should remain almost the same, that is, about 10 pixels.

\section{A timetable}
The schedule is described in Table \ref{tab:schedule}.
\begin{table}[t]
\centering
\caption{Timetable}
\label{tab:schedule}
\begin{tabular}{ccc}
\hline
\hline
Item & Schedule & Comment\\
\hline
Experiment environment & 4/15-4/22 & error format (mean+std) \\
Profiling & 4/23-4/29 & cost of time \\
Implementation & 4/30-5/27 & OpenMP+TBB+CUDA \\
Experiments & 5/28-6/10 & OpenMP+TBB+CUDA \\
Presentation slides & 6/11-6/24 & presentation slides ready \\
\hline
\hline
\end{tabular}
\end{table}
%
% The next two lines define the bibliography style to be used, and the bibliography file.
\bibliographystyle{ACM-Reference-Format}
\bibliography{sample-base}

\end{document}
{ "alphanum_fraction": 0.7681669691, "avg_line_length": 60.9513274336, "ext": "tex", "hexsha": "bf611c54fb8f132589e37b2d1b56cf6c445f4ebf", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0b76be74426d25f9bd020f65d2051cf43bea7f40", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Coslate/Parallel_Programming", "max_forks_repo_path": "Final_Project/Proposal/Latex/proposal/samples/sample-sigconf.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0b76be74426d25f9bd020f65d2051cf43bea7f40", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Coslate/Parallel_Programming", "max_issues_repo_path": "Final_Project/Proposal/Latex/proposal/samples/sample-sigconf.tex", "max_line_length": 1469, "max_stars_count": null, "max_stars_repo_head_hexsha": "0b76be74426d25f9bd020f65d2051cf43bea7f40", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Coslate/Parallel_Programming", "max_stars_repo_path": "Final_Project/Proposal/Latex/proposal/samples/sample-sigconf.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7191, "size": 27550 }
\section{Question 2}
System:
$$ G_{1}(s) = \dfrac{\exp(-30s)}{(17s+1)(5s+1)} = \dfrac{\exp(-30s)}{85s^2+22s+1} $$
We use optimal PID design with the ITAE, ISE and IAE cost functions. In the program we use a 100-second horizon for the optimization but a 1000-second horizon for the simulation, because the optimization would take too long otherwise and, due to the long delay time, the system behavior cannot be seen within 100 seconds.
\newpage
\begin{itemize}
\item ITAE
$$ K_p = 0.6867, \quad K_i = 0.0347, \quad K_d = 15.0543 $$
\begin{figure}[H]
\caption{Step response with PID controller and ITAE cost function}
\centering
\includegraphics[width=11cm]{../Figure/Q2/ITAE.png}
\end{figure}
\item ISE
$$ K_p = 0.5399, \quad K_i = 0.0446, \quad K_d = 20.8391 $$
\begin{figure}[H]
\caption{Step response with PID controller and ISE cost function}
\centering
\includegraphics[width=11cm]{../Figure/Q2/ISE.png}
\end{figure}
\item IAE
$$ K_p = 0.6522, \quad K_i = 0.0393, \quad K_d = 17.5028 $$
\begin{figure}[H]
\caption{Step response with PID controller and IAE cost function}
\centering
\includegraphics[width=11cm]{../Figure/Q2/IAE.png}
\end{figure}
\end{itemize}
The PID controllers designed with the ITAE and IAE cost functions work better: the system is faster with lower overshoot, and with the ITAE cost function the system also shows a better undershoot.
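For reference, the three cost functions compared above can be evaluated directly from a simulated closed-loop error signal, as in the illustrative NumPy sketch below (the array names are placeholders; the actual optimization was carried out in a separate program):
\begin{verbatim}
# Given sample times t and the closed-loop error e(t) = r(t) - y(t),
# compute the IAE, ISE and ITAE costs by numerical integration.
import numpy as np

def control_costs(t, e):
    iae  = np.trapz(np.abs(e), t)        # integral of |e| dt
    ise  = np.trapz(e**2, t)             # integral of e^2 dt
    itae = np.trapz(t * np.abs(e), t)    # integral of t*|e| dt
    return iae, ise, itae
\end{verbatim}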
{ "alphanum_fraction": 0.6621331424, "avg_line_length": 37.7567567568, "ext": "tex", "hexsha": "4c9d2f78d9e370bb05fa7b811c5d9c1443ea979b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2a6285f627377a5e5edfb32c92e054ab213d311a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "alibaniasad1999/Principle-Of-Controller-Design", "max_forks_repo_path": "HW/HW VII/Report/Q2/Q2.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2a6285f627377a5e5edfb32c92e054ab213d311a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "alibaniasad1999/Principle-Of-Controller-Design", "max_issues_repo_path": "HW/HW VII/Report/Q2/Q2.tex", "max_line_length": 275, "max_stars_count": null, "max_stars_repo_head_hexsha": "2a6285f627377a5e5edfb32c92e054ab213d311a", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "alibaniasad1999/Principle-Of-Controller-Design", "max_stars_repo_path": "HW/HW VII/Report/Q2/Q2.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 465, "size": 1397 }
%---------------------------------------------------------------------- \chapter{Implementation} \label{chap:code-implementation} %---------------------------------------------------------------------- \section{Software overview} \label{sec:software-overview} The code of the ODE solvers software that I have developed is available on GitHub via this link: \href{https://github.com/FarmHJ/numerical-solver}{\underline{\emph{GitHub repository}}}. GitHub is a good platform for version control and collaborations. Software with version control can keep track of changes made to the code. It is very useful for large projects with collaborations. Aside from having version control, the software is fully documented. Details on its structure and functions are available at the link: \href{https://numerical-solver.readthedocs.io/en/latest/index.html}{\underline{\emph{software documentation}}}. Finally, the software is also full-tested to ensure the correctness of the code. More details of the testing infrastructure are given in later sections. To showcase the features of the code and to give a brief summary of the quality of the code, badges are displayed on the main page of the GitHub repository, as shown in Figure \ref{fig:badges}. These badges indicate that the code is tested to work using several python versions and operating systems. The badge `codecov' shows the code coverage of the software, which will be described in details in the Section \ref{sec:testing}, together with the testing infrastructure. Finally, there is a status badge `Doctest' to verify that the documentation of the software is built successfully. \begin{figure} \includegraphics[width=0.95\columnwidth]{badges} \caption{Badges on Github repository.} \label{fig:badges} \end{figure} \section{Implemented numerical methods} \label{sec:implemented-methods} The numerical methods implemented in this software are classified into three classes: one-step methods, predictor-corrector methods and adaptive methods. The methods included are: \begin{enumerate} \item One-step methods \begin{itemize} \item Euler's explicit method \item Euler's implicit method \item Trapezium rule method \item Four-stage Runge-Kutta method \end{itemize} \item Predictor-corrector method \begin{itemize} \item Euler-Trapezoidal method \end{itemize} \item Adaptive methods \begin{itemize} \item BS23 algorithm \item RKF45 algorithm \end{itemize} \end{enumerate} The main code containing these numerical methods can be found at the link: \href{https://github.com/FarmHJ/numerical-solver/blob/main/solver/methods.py}{\underline{\emph{methods code}}}. \section{Unit testing} \label{sec:testing} In the process of developing the software, a unit testing infrastructure was put in place. The purpose of the unit testing is to create a robust and sustainable software. All codes are fully tested whilst being written (this is known as test-driven development) to ensure correctness and making future maintenance of the code much easier. Code coverage is a measurement of the percentage of codes covered in the unit testing process. It is usual to aim for 100\% code coverage. However, a 100\% code coverage does not necessarily mean that the code is correct and free of errors. Nevertheless, it provides some confidence that the code is implemented correctly. Every method in this software is tested using the `unittest' Python library, a unit testing framework. The numerical solution produced by each method is tested to be the same solution as the manually calculated solution. 
The methods are mostly tested against the test problem Eqs.~\eqref{eqn:example_model}-\eqref{eqn:example-end}. \section{Testing initialisation of problem} \label{sec:test_init} The initialisations of each class are tested to ensure that variables are initialised correctly and input type satisfies the requirements. For example, \begin{lstlisting}[language=Python, caption= {initialisation testing}, title={Testing initialisation of problem}, label={code:test_init}] def test__init__(self): def func(x, y): return [-y[0]] x_min = 0 x_max = 1 initial_value = [1] mesh_points = 10 problem = solver.OneStepMethods( func, x_min, x_max, initial_value, mesh_points) # Test initialisation self.assertEqual(problem.x_min, 0) self.assertEqual(problem.mesh_points, 10) # Test raised error for callable function with self.assertRaises(TypeError): solver.OneStepMethods( x_min, x_min, x_max, initial_value, mesh_points) # Test raised error if initial_value not list with self.assertRaises(TypeError): solver.OneStepMethods( func, x_min, x_max, 1, mesh_points) \end{lstlisting} A simple model with known solution is first initialised. Required inputs were checked to make sure the problem is set up properly, in line 14 and 15 of the code snippet above. To ensure that the inputs to the function are of the desired data type, errors are raised whenever the user inputs a wrong data type. The unit testing also tests that these errors are raised appropriately (line 17 to 25) whenever the data type does not satisfy the requirements. \section{Testing function} \label{sec:test_func} After making sure the problem is properly initialised, we then test that the numerical methods are working correctly. Take the adaptive method, BS23 algorithm, as an example, the testing of a method is as follows: \begin{lstlisting}[language=Python, caption= {testing of function}, title={Testing execution of method}, label={code:test_func}] def test_ode23(self): def func(x, y): return [-y[0]] x_min = 0 x_max = 1 initial_value = [1] problem = solver.AdaptiveMethod( func, x_min, x_max, initial_value, initial_mesh=0.5) mesh, soln = problem.ode23() # Test end point of mesh self.assertGreaterEqual(mesh[-1], 1.0) # Test mesh point self.assertAlmostEqual(mesh[1], 0.3483788976565) # Test solution at first stepsize self.assertAlmostEqual(soln[1][0], 0.7052580305097) \end{lstlisting} The method is first tested that it executes the computations up to the maximum mesh value indicated. Then, it is checked that the first adaptive mesh point and its solution, obtained from the software, matches the value computed manually. \section{Conclusion} \label{sec:software-conclusion} The software can solve any initial value problem of the general form Eqs.~\eqref{eqn:initial_value_start}-\eqref{eqn:initial_value} using the implemented numerical methods as listed in Section \ref{sec:implemented-methods}. The software can also handle initial value problems of higher dimension. An example of the use of the software can be found at the link: \\ \href{https://nbviewer.jupyter.org/github/FarmHJ/numerical-solver/blob/main/examples/fitzhugh_nagumo.ipynb}{\underline{\emph{Example use of software}}}. All methods are tested, not only making sure all lines are tested, but also testing their functionalities. Moreover, the results from all notebooks (see Appendix \ref{chap:link}) behaves as expected by theory. The time efficiency of the software is not a major concern of the project, thus some methods might take long periods of time to run. 
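To complement the example notebook linked above, a minimal usage sketch is given below. It relies only on the interfaces that appear in the listings in this chapter (the \texttt{AdaptiveMethod} constructor and its \texttt{ode23} method) and reuses the same exponential-decay test problem; it is an illustration rather than part of the tested code base.

\begin{lstlisting}[language=Python, caption= {example usage}, title={Illustrative use of the adaptive solver}, label={code:example_usage}]
import solver


def func(x, y):
    # dy/dx = -y with y(0) = 1, solved on the interval [0, 1]
    return [-y[0]]


problem = solver.AdaptiveMethod(
    func, 0, 1, [1], initial_mesh=0.5)
mesh, soln = problem.ode23()

# print the adaptive mesh points and the numerical solution at each one
for x, y in zip(mesh, soln):
    print(x, y[0])
\end{lstlisting}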
The purpose of setting up software with version control, documentation and testing is to create robust and reusable software. These software engineering methods give confidence in the correctness of the code and aid its future maintenance. By applying these methods to this software, I will gain experience with the software engineering techniques that I will use throughout my D.Phil.
{ "alphanum_fraction": 0.753656659, "avg_line_length": 67.7739130435, "ext": "tex", "hexsha": "b093ce56442485b5336df48cbeb4b98b3466ea1f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8a9b823b0ca6eb3c714c055324f35c74d5af5263", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "FarmHJ/numerical-solver", "max_forks_repo_path": "report/code_implementation.tex", "max_issues_count": 18, "max_issues_repo_head_hexsha": "8a9b823b0ca6eb3c714c055324f35c74d5af5263", "max_issues_repo_issues_event_max_datetime": "2021-03-23T15:12:32.000Z", "max_issues_repo_issues_event_min_datetime": "2020-12-12T08:05:30.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "FarmHJ/numerical-solver", "max_issues_repo_path": "report/code_implementation.tex", "max_line_length": 859, "max_stars_count": null, "max_stars_repo_head_hexsha": "8a9b823b0ca6eb3c714c055324f35c74d5af5263", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "FarmHJ/numerical-solver", "max_stars_repo_path": "report/code_implementation.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1740, "size": 7794 }
\documentclass[11pt]{article} \usepackage{graphicx} \DeclareGraphicsExtensions{.png,.jpg} \graphicspath{{./figures/}} \usepackage{url} \begin{document} \title{Architecture of the Proposed Cloud Application Platform Monitor} \author{Hiranya Jayathilaka, Wei-Tsung Lin, Chandra Krintz, and Rich Wolski \\ Computer Science Dept., UC Santa Barbara \\ \\ Collaborators: Michael Xie and Ying Xiong \\ Huawei Technologies\\ \\ January 2016 } \date{} \maketitle \section{Introduction} Over the last decade Platform-as-a-Service (PaaS) has become a popular approach for deploying applications in the cloud. Many organizations, academic institutions, and hobbyists make use of public and/or private PaaS clouds to deploy their applications. PaaS clouds provide a high level of abstraction to the application developer that effectively hides all the infra\-structure-level details such as physical resource allocation (CPU, memory, disk etc), operating system, and network configuration. This enables application developers to focus solely on the programming aspects of their applications, without having to be concerned about deployment issues. PaaS clouds execute web-accessible (HTTP/s) applications, to which they provide high levels of scalability, availability, and execution management. PaaS clouds provide scalability by automatically allocating resources for applications on the fly (auto scaling), and provide availability through the execution of multiple instances of the application and/or the PaaS services they employ for their functionality. Consequently, viable PaaS technologies as well as PaaS-enabled applications continue to increase rapidly in number. This rapid growth in PaaS technology has intensified the need for new techniques to monitor applications deployed in a PaaS cloud. Application developers and users wish to monitor the availability of the deployed applications, track application performance, and detect application and system anomalies as they occur. To obtain this level of deep operational insight into PaaS-deployed applications, the PaaS clouds must be equipped with powerful instrumentation, data gathering and analysis capabilities that span the entire stack of the PaaS cloud. Moreover, PaaS clouds must provide comprehensive data visualization and notification mechanisms. However, most PaaS technologies available today either do not provide any application monitoring support, or only provide primitive monitoring features such as application-level logging. Hence, they are not capable of performing powerful predictive analyses or anomaly detection, which require much more fine-grained, low-level and full stack data collection and analytics. To address this limitation, we present the design of a comprehensive application platform monitor (APM) that can be easily integrated with a wide variety of PaaS technologies. The proposed APM is not an external system that monitors a PaaS cloud from the outside (as most APM systems today). Rather, it integrates with the PaaS cloud from within thereby extending and augmenting the existing components of the PaaS cloud to provide comprehensive full stack monitoring, analytics and visualization capabilities. 
We believe that this design decision is a key differentiator over existing PaaS and cloud application monitoring systems because (i) it is able to take advantage of the scaling, efficiency, deployment, fault tolerance, security, and control features that the PaaS offers, (ii) while providing low overhead end-to-end monitoring and analysis of cloud applications. This document details the architecture of the proposed APM, and how it integrates with a typical PaaS cloud. We describe individual components of the APM, their functions and how they interact with each other. Where appropriate, we also detail the concrete technologies (tools and products) that we plan to use to implement various components of the APM, and provide our rationale and intuition behind choosing these technologies. We start by describing the layered system organization typically seen in PaaS clouds. Then we describe the APM architecture, and show how it fits into the PaaS. \section{PaaS System Organization} \begin{figure} \centering \includegraphics[scale=0.5]{paas_architecture} \caption{PaaS system organization.} \label{fig:paas_architecture} \end{figure} Figure~\ref{fig:paas_architecture} shows the key system layers of a typical PaaS cloud. Arrows indicate the flow of data and control in response to application requests. At the lowest level of a PaaS cloud is an infrastructure that consists of the necessary compute, storage and networking resources. How this infrastructure is set up may vary from a simple cluster of physical machines to a comprehensive Infrastructure-as-a-Service (IaaS) solution. In large scale PaaS clouds, this layer typically consists of many virtual machines and/or containers with the ability to acquire more resources on the fly. On top of the infrastructure layer lies the PaaS kernel. This is a collection of managed, scalable services that high-level application developers can compose into their applications. The provided services may include database services, caching services, queuing services and much more. Some PaaS clouds provide a managed set of APIs (an SDK) for the application developer to access these fundamental services. In that case all interactions between the applications and the PaaS kernel must take place through the cloud provider specified APIs (e.g. Google App Engine). One level above the PaaS kernel we find the application servers that are used to deploy and run applications. Application servers provide the necessary integration (linkage) between application code and the underlying PaaS kernel, while sandboxing application code for secure, multi-tenant operation. On top of the application servers layer resides the fronted and load balancing layer. This layer is responsible for receiving all application requests, filtering them and routing them to an appropriate application server instance for further execution. As the fronted server, it is the entry point for PaaS-deployed applications for all application clients. \section{Cloud APM Architecture} \subsection{Key Functions} \begin{figure} \centering \includegraphics[scale=0.5]{apm_functions} \caption{Key functions of the APM.} \label{fig:apm_functions} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{apm_layout} \caption{Deployment view of the APM functions.} \label{fig:apm_layout} \end{figure} Like most system monitoring solutions, the proposed cloud APM must serve four major functions: Data collection, storage, processing (analytics) and visualization. 
Figure~\ref{fig:apm_functions} shows the logical organization of these functions in the APM, and various tasks that fall under each of them. Figure~\ref{fig:apm_layout} shows a physical deployment view of the said functions. Arrows indicate the flow of information through the APM. Data collection is performed by various sensors and agents that instrument the applications and the core components of the PaaS cloud. While sensors are very primitive in their capability to monitor a given component, an agent may intelligently adapt to changing conditions, making decisions on what information to capture and how often. Monitoring and instrumentation should be lightweight and as non-intrusive as possible so their existence does not impose additional overhead on the applications. Data storage components should be capable of dealing with potentially very high volumes of data. The data must be organized and indexed to facilitate efficient retrieval, and replicated to maintain reliability and high availability. Data processing components should also be capable of processing large volumes of data in near real-time, while supporting a wide range of data analytics features such as filters, projections and aggregations. They will employ various statistical and perhaps even machine learning methods to understand the data, detect anomalies and identify bottlenecks in the system. Data visualization layer mainly consists of graphical interfaces (dashboards) for displaying various metrics computed by the data processing components. Additionally it may also have APIs to export the calculated results and trigger alerts. \subsection{APM Architecture and Integration with PaaS} \begin{figure} \centering \includegraphics[scale=0.35]{apm_architecture} \caption{APM architecture.} \label{fig:apm_architecture} \end{figure} Figure~\ref{fig:apm_architecture} illustrates the overall architecture of the proposed APM, and how it fits into the PaaS cloud stack. APM components are shown in grey, with their interactions indicated by the black lines. The small grey boxes attached to the PaaS components represent the sensors and agents used to instrument the cloud platform for data collection purposes. Note that the APM collects data from all layers in the PaaS stack (i.e. full stack monitoring). From the front-end and load balancing layer we gather all information related to incoming application requests. A big part of this is scraping the HTTP server access logs, which indicate request timestamps, source and destination addressing information, response time (latency) and other HTTP message parameters. This information is readily available for harvesting in most technologies used as front-end servers (e.g. Apache HTTPD, Nginx). Additionally we may also collect information pertaining to active connections, invalid access attempts and other errors. From the application server layer we intend to collect basic application logs as well as any other logs and metrics that can be easily collected from the application runtime. This may include some process level metrics indicating the resource usage of the individual application instances. If deeper insight into the application execution becomes necessary, more intrusive instrumentation can be introduced to the application server (perhaps selectively or adaptively). At the PaaS kernel layer we employ instrumentation to record information regarding all kernel invocations made by the applications. 
This instrumentation must be applied carefully as to not introduce a noticeable overhead to the application execution. For each PaaS kernel invocation, we can capture the following parameters. \begin{itemize} \item Source application making the kernel invocation \item Timestamp \item Target kernel service and operation \item Execution time of the invocation \item Request size, hash and other parameters \end{itemize} Collecting this PaaS kernel invocation details enables tracing the execution of application requests, without the need for instrumenting application code, which we believe is a feature unique to PaaS clouds. Finally, at the lowest infrastructure level, we can collect information related to virtual machines, containers and their resource usage. We can also gather metrics on network usage by individual components which might be useful in a number of traffic engineering use cases. Where appropriate we can also scrape hypervisor and container manager logs to get an idea of how resources are allocated and released over time. To summarize, the types of services and resources that this APM will be able to monitor include the following. Moreover, our design of the data collection layer is abstract and thus easily extended to permit monitoring of new services and PaaS components as they become available in the future. \begin{itemize} \item Cloud Infrastructure: \begin{itemize} \item CPU, memory, disk, network \item Linux containers, virtual machines \end{itemize} \item PaaS Kernel (including PaaS cloud SDK) \begin{itemize} \item Task queues, security components (user/developer tracking and authentication and authorization), enterprise service bus \item Data caches (memcache), datastores (key value, NoSQL), databases (fixed schema, SQL). \end{itemize} \item Application servers \begin{itemize} \item Per-language runtime systems \item Our APM will target the Java language \end{itemize} \item Front-end components \begin{itemize} \item HTTP/s request serving \item Load balancing and rate limiting components \end{itemize} \end{itemize} \subsection{Cross-layer Data Correlation} Previous subsection details how the APM collects useful monitoring data at each layer of the cloud stack. To make most out of the gathered data, and use them to perform complex analyses, we must be able to correlate data records collected at different layers of the PaaS. For example consider the execution of a single application request. This single event results in following data records at different layers of the cloud, which will be collected and stored by the APM as separate entities. \begin{itemize} \item A front-end server access log entry \item An application server log entry \item Zero or more application log entries \item Zero or more PaaS kernel invocation records \end{itemize} We require a mechanism to tie these disparate records together, so the data processing layer can easily aggregate the related information. For instance, we must be able to retrieve via an aggregation query, all PaaS kernel invocations made by a specific application request. To facilitate this requirement we propose that front-end server tags all incoming application requests with unique identifiers. This request identifier can be attached to HTTP requests as a header which is visible to all components internal to the PaaS cloud. All data collecting agents can then be configured to record the request identifiers whenever recording an event. 
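As a minimal illustration of this tagging step, consider the hypothetical Python WSGI middleware below; in practice the identifier would typically be added by the front-end server itself (e.g. Apache HTTPD or Nginx), and the header name used here is only an example.
\begin{verbatim}
# Hypothetical sketch: attach a unique identifier to every incoming HTTP
# request so that all downstream agents can log it with each recorded event.
import uuid

class RequestIdMiddleware(object):
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # visible to the application server, PaaS kernel and their agents
        environ['HTTP_X_REQUEST_ID'] = str(uuid.uuid4())
        return self.app(environ, start_response)
\end{verbatim}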
At the data processing layer, the APM can then aggregate the data by request identifiers to efficiently group the related records. \section{Implementation} \begin{figure} \centering \includegraphics[scale=0.5]{apm_impl} \caption{APM implementation based on ElasticSearch.} \label{fig:apm_impl} \end{figure} In this section we outline some of the technologies and tools that we have chosen to implement the proposed APM architecture. After a thorough evaluation of numerous existing system monitoring tools and platforms, we have decided to implement our APM for PaaS clouds using ElasticSearch. More specifically, ElasticSearch will operate as the primary data storage component of the APM. ElasticSearch is ideal for storing large volumes of structured and semi-structured data. It supports scalability and high availability via sharding and replication. Perhaps what makes ElasticSearch an excellent choice for an APM is its comprehensive data indexing and query support. Using the tried and tested Apache Lucene technology, ElasticSearch continuously organizes and indexes data, making the information available for fast retrieval and efficient querying. Additionally, it provides powerful data filtering and aggregation features, which can greatly simplify the implementation of high-level data processing algorithms. Data can be stored directly in ElasticSearch via its REST API. This means most data collection agents can simply make HTTP calls to ElasticSearch to add new records. ElasticSearch also supports batch processing, thereby enabling agents to buffer collected data locally and store it in batches to avoid making too many HTTP calls. For scraping server logs and storing the extracted records in ElasticSearch, we can use the Logstash tool. Logstash supports scraping a wide range of standard log formats (e.g. Apache HTTPD access logs), and other custom log formats can be supported via a simple configuration. It also integrates naturally with ElasticSearch. For data visualization we are currently considering Kibana, a powerful web-based dashboarding tool that is specifically designed to operate in conjunction with ElasticSearch. Kibana provides a wide range of charting and tabulation capabilities, with particularly strong support for temporal data. Since ElasticSearch exposes all stored data via its REST API, it is also possible to bring other visualization tools into the mix easily. Figure~\ref{fig:apm_impl} shows the APM deployment view with ElasticSearch and other related technologies in place. Most of the data processing features are provided by ElasticSearch itself, and other more complex data analytics can be provided by a custom data processing system. \section{APM Use Cases} In this section we elaborate on some concrete use cases of the proposed APM. In particular, we discuss how the APM can be used to predict performance SLAs for web applications deployed in a PaaS cloud, as well as to detect performance anomalies. These use cases rely on the data collected by the APM, and on some of its data processing and visualization capabilities. Where appropriate we will extend the base design of the APM to incorporate new components and tools required to implement the features discussed here. \subsection{Static Topology Discovery and SLA Prediction} Our goal is to predict, in a scalable way, response time service level agreements (SLAs) whose correctness probabilities are specified by the cloud provider.
To allow PaaS administrators to determine what response time guarantees can be made regarding the deployed applications, we will take an approach that combines static analysis of the hosted web applications with runtime monitoring of the PaaS cloud. Also, since we want to provide the prediction to PaaS users when they are deploying their applications, such static analysis must be done before deploying or running an application on the PaaS cloud. A typical PaaS cloud exports many kinds of services, such as data storage, caching and queuing (PaaS kernel services). Application developers compose these services into their web applications. From experience, we know that most applications hosted on PaaS spend the majority of their execution time on PaaS service invocations, and they do not have many branches and loops. Therefore, in our design we use static analysis to identify the PaaS kernel service invocations that dominate the response time of web applications. By doing so, we also detect the topology of applications -- i.e. the service dependencies. Our APM design includes sensors/agents that monitor the performance of PaaS kernel services over time. This information can be recorded periodically to form a set of time series. This historical performance data can be aggregated and processed using a time series forecasting methodology to calculate statistical bounds on the response time of applications. These forecast values can be used as the basis of a performance SLA. Also, because service implementations and platform behavior under load change over time, the predicted SLAs may become invalid after a period of time. We will develop a statistical model to detect such SLA invalidations. When such invalidations occur, the SLA prediction can be reinvoked to establish new SLAs. To build a system that predicts response time SLAs using only static information, our design has three components: \begin{itemize} \item Static analysis tool \item Monitoring agent \item SLA predictor \end{itemize} \subsubsection{Static Analysis Tool} \begin{figure} \centering \includegraphics[scale=0.4]{cloud_app_model} \caption{Cloud Application Model.} \label{fig:cloud_app_model} \end{figure} This component analyzes the source code of the web application and extracts a sequence of PaaS service invocations. Figure~\ref{fig:cloud_app_model} illustrates the typical PaaS development and deployment model. Developers use the services exposed by the PaaS cloud (aka PaaS kernel services or PaaS SDK) to implement their applications. Applications, in turn, are exposed to end users via one or more web APIs. The end user could be a client application external to the cloud, or another application running in the same cloud environment. The underlying PaaS kernel service implementations are highly scalable, highly available (have SLAs associated with them), and automatically managed by the platform. Developers upload their finished applications to the cloud for deployment. An uploaded application typically consists of source code or some intermediate representation of it, along with one or more deployment descriptors (configurations, versioning information, crypto resources, etc.). When an application has been uploaded, the static analysis tool can analyze the source code or the application's intermediate representation (e.g. Java bytecode). It constructs the control flow graph (CFG) and performs a simple inter-procedural static analysis over it.
By performing a depth-first traversal of the CFG, it is possible to identify all possible paths of execution through the application code. This includes paths that occur due to branching (if-else constructs, switch statements, etc.), looping, as well as error handling (try-catch constructs). For each identified path, the static analysis tool extracts a sequence of PaaS service invocations. Since the applications need to be exposed to users through HTTP/s, the static analysis tool can begin the extraction by checking specific language classes or framework annotations, for example, Java's servlet classes or the classes marked with the JAX-RS Path annotation. For each application, the static analysis tool produces a list of annotated PaaS service invocation sequences -- one sequence per program path. It then prunes this list to eliminate duplicates. Duplicates occur when an application has multiple program paths with the same sequence of PaaS service invocations. Ideally, we can identify the PaaS kernel service calls by their namespace (in Java's case, the package name). Although loops are rare in this type of application, when they do occur they are used to iterate over a dataset returned from a database. The tool estimates the loop bounds if they are specified in the PaaS kernel service API (e.g. the maximum number of entities to return). Otherwise, we can ask users to provide an estimate of the size of their dataset. \subsubsection{Monitoring Agent} This agent monitors and records the response time of individual PaaS services within a running PaaS system. It can be built as a native PaaS feature, or as an independent application deployed on the PaaS. To avoid unnecessary performance overhead on other PaaS-hosted web applications, the monitoring agent runs in the background, separate from them. The agent periodically invokes services provided by the PaaS kernel and records the response time of each service. Also, the agent periodically reclaims old measurement data to eliminate unnecessary storage. In our design, these agents can be implemented as custom agents that report to ElasticSearch. The collected data is sent to ElasticSearch, where it awaits processing. \subsubsection{SLA Predictor} The SLA predictor uses the outputs of the other two components to predict an upper bound on the response time of the services. To make SLA predictions, we propose using Queue Bounds Estimation from Time Series (QBETS)~\cite{Nurmi:2007:QQB:1791551.1791556}, a non-parametric time series analysis method that we developed in prior work. We originally designed QBETS for predicting the scheduling delays of batch queue systems used in high-performance computing environments. We adapt it for use ``as-a-service'' in PaaS systems to predict the execution time of deployed applications. A QBETS analysis requires three inputs: \begin{enumerate} \item a time series of data generated by a continuous experiment, \item the percentile for which an upper bound should be predicted ($p \in [1..99]$), and \item the upper confidence level of the prediction ($c \in (0,1)$). \end{enumerate} QBETS uses this information to predict an upper bound for the $p$-th percentile of the input time series. The predicted value has a probability of $0.01p$ of being greater than or equal to the next data point that will be added to the time series by the continuous experiment. The upper confidence level $c$ serves as a conservative bound on the predictions.
That is, predictions made with an upper confidence level of $c$ will overestimate the true percentile with a probability of $1-c$. This confidence guarantee is necessary because QBETS does not determine the percentiles of the time series precisely, but only estimates them. To further clarify what QBETS does, assume a continuous experiment that periodically measures the response time of a system. This results in a time series of response time data. Suppose at time $t$, we run QBETS on the time series data collected so far with $p=95$ and $c=0.01$. The prediction returned by QBETS has a 95\% chance of being greater than or equal to the next response time value measured by our experiment after time $t$. Since $c=0.01$, the predicted value has a 99\% chance of overestimating the true 95th percentile of the time series. We find QBETS to be an ideal fit for our work due to several reasons. \begin{itemize} \item QBETS works with time series data. Since the response time of various PaaS kernel services can be easily represented as time series, they are highly amenable for QBETS analysis. \item QBETS makes predictions regarding the future outcomes of an experiment by looking at the past outcomes -- an idea that aligns with our goal of predicting future application response times from historical PaaS kernel service performance data. \item Response time SLAs of web applications should be specified with exact correctness probabilities and confidence levels for them to be useful to developers and PaaS administrators. QBETS meets these requirements. \item QBETS is simple, efficient and has been applied successfully to analyze a wide range of time series data, including correlated and uncorrelated data, in the past. \end{itemize} In our case, QBETS takes the response times for each PaaS kernel service we record in ElasticSearch. Notice that this data is collected continuously by the PaaS monitoring agent, so QBETS is able to automatically adapt to the changing conditions of the cloud. Given the percentile for which an upper bound should be predicted and the upper confidence level of the prediction, QBETS can generate a conservative prediction. Since an application may invoke multiple PaaS kernel services, the SLA predictor also needs to align and aggregate multiple time series together before engaging QBETS. For example, suppose an application makes 3 PaaS kernel service invocations. The static analysis component would detect the 3 target kernel services invoked by the application. The SLA predictor should then retrieve the response time data pertaining to those 3 PaaS kernel services from ElasticSearch. This information would be retrieved as 3 separate time series. SLA predictor then aligns the time series data (by timestamp), and aggregates them to form a single time series where each data point is an approximation of the total time spent by the application on invoking PaaS kernel services. This aggregate time series can be provided as the input to QBETS to make the response time predictions. Note that our static analysis tool produces multiple sequences of PaaS service invocations for each analyzed application. Multiple sequences occur due to the existence of branches, loops and error handling logic in the application code. The SLA predictor can make predictions for each of the paths identified by the static analysis tool. The largest predicted value can then be used as the basis for a response time SLA, thus covering all paths of the input applications. 
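The alignment and aggregation step described above can be sketched as follows. This is a simplified illustration under stated assumptions: the per-service measurements are assumed to share sampling timestamps, the service names and values are invented, and a plain empirical percentile stands in for the QBETS bound, which is computed quite differently.
\begin{verbatim}
# Simplified illustration: an empirical percentile stands in for QBETS,
# and the per-service time series are assumed to share timestamps.
def aggregate_path(series_by_service, services_on_path):
    """Sum the response times of the services invoked on one program path."""
    shared = sorted(set.intersection(
        *(set(series_by_service[s]) for s in services_on_path)))
    return [sum(series_by_service[s][t] for s in services_on_path)
            for t in shared]

def percentile_bound(samples, p=95):
    """Empirical p-th percentile, used here as a placeholder for QBETS."""
    ordered = sorted(samples)
    index = min(len(ordered) - 1, int(round(p / 100.0 * len(ordered))))
    return ordered[index]

# Maps each PaaS kernel service to {timestamp: response time in ms}.
series_by_service = {
    "datastore": {1: 12.0, 2: 15.0, 3: 11.0},
    "memcache":  {1: 1.2,  2: 0.9,  3: 1.1},
    "taskqueue": {1: 4.0,  2: 5.5,  3: 4.2},
}
# One invocation sequence per program path, as found by static analysis.
paths = [["datastore", "memcache"], ["datastore", "taskqueue"]]
sla_basis = max(percentile_bound(aggregate_path(series_by_service, path))
                for path in paths)  # largest per-path bound backs the SLA
\end{verbatim}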
The key assumption that makes our approach viable is that PaaS-hosted web applications spend most of their execution time on invoking PaaS kernel services. Previous studies~\cite{Jayathilaka:2015:RTS:2806777.2806842} have shown this to be true, with applications spending over 90\% of their execution time on PaaS kernel service invocations. \subsubsection{Workflow} \begin{figure} \centering \includegraphics[scale=0.35]{apm_flow} \caption{APM architecture and component interaction.} \label{fig:apm_flow} \end{figure} Figure~\ref{fig:apm_flow} illustrates how the components interact with each other during the prediction-making process. The SLA prediction can be invoked when a web application is deployed to the PaaS cloud, or at any time during the development process to give developers insight into the worst-case response time of their applications. When the predictor is invoked, it performs static analysis on all operations in the application. Next, it retrieves benchmarking data collected by the monitoring agent for all PaaS service invocations. Finally, the QBETS analysis is applied to the data with the desired percentile and confidence value. After the predictions are made, we can use the largest value across all application paths as the SLA prediction for a web application. \subsection{Performance Anomaly Detection} Numerous statistical models have been developed over time for detecting performance anomalies in running applications. However, prior work has mostly focused on simple stand-alone applications. A few efforts have extended this notion to web applications, but web applications in PaaS clouds remain, for the most part, uncharted territory. We intend to build on prior work regarding detecting performance anomalies in web applications, and to develop new mechanisms that can detect performance anomalies of PaaS-deployed applications. Such techniques must be able to detect a drop in the performance level of an application, and then determine whether it occurred due to a change in the workload or some system-level issue. This requires correlating performance data of an application (e.g. response time) with workload information (e.g. number of users). If a performance drop occurred due to a system-level issue, we must further analyze the performance data concerning PaaS kernel services. Note that the proposed APM collects such low-level information regarding the PaaS kernel service invocations made by applications. This information can be analyzed in relation to a detected performance anomaly to identify where the bottleneck is. The APM can also keep track of the sequences of PaaS kernel services invoked by a given application over time. Each unique sequence represents the execution of a different path through the application code. This information is useful for identifying the nature of the workload handled by a given application, and how it evolves with time. We can use novelty detection (a form of anomaly detection) to identify the execution of new, previously unseen paths, which by themselves may be a sign of an anomaly. \section{Conclusions} As PaaS has increased in popularity and use, the need for technologies to monitor and analyze the performance and behavior of deployed applications has also grown. However, most PaaS clouds available today do not provide adequate support for such analysis. Therefore, we propose an application platform monitoring system that is able to take advantage of PaaS cloud features, but that is portable across them.
To provide comprehensive full stack monitoring and analytics, the APM we propose provides four major functions: data collection, data storage, data processing, and data visualization. We describe the necessary organization of these functions and illustrate how they work as components of the system. Also, by presenting the architecture of a typical PaaS and of the proposed APM, we illustrate how these functions can be built as components, allowing the APM to be easily integrated with any PaaS. After investigating popular application performance data collection and analysis tools, we choose ElasticSearch for data management. ElasticSearch provides powerful, easy-to-use indexing features and scalability. We also choose to collect data via custom agents and Logstash. Logstash supports a variety of standard log formats, and its configuration can easily be customized to collect a variety of key data. \bibliography{references} \bibliographystyle{plain} \end{document}
{ "alphanum_fraction": 0.8182809094, "avg_line_length": 60.3571428571, "ext": "tex", "hexsha": "4e6ffe75fae16d5f022c7700c376e8f9dd0ec97f", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-05-25T02:59:15.000Z", "max_forks_repo_forks_event_min_datetime": "2020-05-25T02:59:15.000Z", "max_forks_repo_head_hexsha": "d58fe64bb867ef58af19c1d84a5e1ec68ecddd3d", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "UCSB-CS-RACELab/eager-appscale", "max_forks_repo_path": "Eager/elk-experiment/docs/design/design_document.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d58fe64bb867ef58af19c1d84a5e1ec68ecddd3d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "UCSB-CS-RACELab/eager-appscale", "max_issues_repo_path": "Eager/elk-experiment/docs/design/design_document.tex", "max_line_length": 228, "max_stars_count": 3, "max_stars_repo_head_hexsha": "d58fe64bb867ef58af19c1d84a5e1ec68ecddd3d", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "UCSB-CS-RACELab/eager-appscale", "max_stars_repo_path": "Eager/elk-experiment/docs/design/design_document.tex", "max_stars_repo_stars_event_max_datetime": "2018-07-16T18:20:23.000Z", "max_stars_repo_stars_event_min_datetime": "2016-06-12T01:18:49.000Z", "num_tokens": 6528, "size": 32110 }
\documentclass[12pt,english]{article} \usepackage[colorlinks=true, linkcolor=blue, citecolor=blue, plainpages=false, pdfpagelabels=true, urlcolor=blue]{hyperref} \usepackage[bottom]{footmisc} \usepackage{filecontents} % This creates a bib file called ref.bib in the same folder as the current tex file \begin{filecontents}{ref.bib} @article{becker_human_1986, title = {Human Capital and the Rise and Fall of Families}, volume = {4}, url = {http://ideas.repec.org/a/ucp/jlabec/v4y1986i3ps1-39.html}, pages = {S1--39}, number = {3}, journaltitle = {Journal of Labor Economics}, author = {Becker, Gary S and Tomes, Nigel}, urldate = {2012-02-15}, date = {1986} } @article{case_lasting_2005, title = {The lasting impact of childhood health and circumstance}, volume = {24}, issn = {0167-6296}, url = {http://www.ncbi.nlm.nih.gov/pubmed/15721050}, doi = {10.1016/j.jhealeco.2004.09.008}, pages = {365--389}, number = {2}, journaltitle = {Journal of health economics}, shortjournal = {J Health Econ}, author = {Case, Anne and Fertig, Angela and Paxson, Christina}, urldate = {2012-06-13}, date = {2005-03}, pmid = {15721050}, keywords = {Adult, child, Child Welfare, Cohort Studies, Great Britain, Health Status Indicators, Humans, Social Class} } @article{conti_understanding_2010, title = {Understanding the Early Origins of the Education–Health Gradient}, volume = {5}, issn = {1745-6916, 1745-6924}, url = {http://pps.sagepub.com/content/5/5/585.abstract}, doi = {10.1177/1745691610383502}, pages = {585--605}, number = {5}, journaltitle = {Perspectives on Psychological Science}, author = {Conti, Gabriella and Heckman, James J}, urldate = {2012-02-16}, date = {2010-09-01}, keywords = {health, education, genetics, treatment effects} } \end{filecontents} \usepackage[authordate, backend=bibtex, doi=false, isbn=false, sorting=nyt, maxbibnames=10, maxcitenames=3, sortcites=False]{biblatex-chicago} \bibliography{ref} \begin{document} \title{Basic Bibliography and Reference Testing} \author{\href{http://fanwangecon.github.io/}{Fan Wang} \thanks{See \href{https://fanwangecon.github.io/Tex4Econ/}{Tex4Econ} for more latex examples.}} \maketitle According to \textcite{becker_human_1986}, ipsum dolor sit amet, consectetur adipiscing elit. Integer placerat nunc orci, id pellentesque lacus ullamcorper at. Mauris venenatis gravida magna non dapibus. Nullam vel consequat purus, id luctus dui. Suspendisse vel auctor nulla. Proin ipsum felis, efficitur eu eleifend vitae, efficitur pellentesque mauris \autocite{case_lasting_2005, conti_understanding_2010}. \paragraph{\href{https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3140132}{Data}} Village closure information is taken from a village head survey, which was collected in conjunction with household surveys. Village heads were asked if the village currently had a primary school, and asked about the year of school closure if the village school had been closed. Based on the village heads survey, there are four categories of closure status. The first category includes 193 villages that did not have village schools in 2011 and experienced school closure between 1999 and 2010. In the second category, which included 22 villages, a school closure year between 1999 and 2010 was reported, but village heads also reported that the village currently had a school in 2011. 
In this case, it is plausible that new schools were built in these 22 villages after school closure.\footnote{Generally students went to schools in township centers after village school closure, but in these 22 villages, it is possible that a new consolidated school was built inside these villages.} \pagebreak \begingroup %\setstretch{1.1} \setlength\bibitemsep{0pt} \printbibliography \endgroup \pagebreak \end{document}
{ "alphanum_fraction": 0.7647368421, "avg_line_length": 43.6781609195, "ext": "tex", "hexsha": "68539eb4ed14c42725982f51a412591ca61e41bf", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7bdbfb29e956d31239bd592b6392574e4aec5c15", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "guohui-jiang/Tex4Econ", "max_forks_repo_path": "_support/reference/biblatex_basic/biblatex_test.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7bdbfb29e956d31239bd592b6392574e4aec5c15", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "guohui-jiang/Tex4Econ", "max_issues_repo_path": "_support/reference/biblatex_basic/biblatex_test.tex", "max_line_length": 986, "max_stars_count": null, "max_stars_repo_head_hexsha": "7bdbfb29e956d31239bd592b6392574e4aec5c15", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "guohui-jiang/Tex4Econ", "max_stars_repo_path": "_support/reference/biblatex_basic/biblatex_test.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1078, "size": 3800 }
% !TeX root = ../arara-manual.tex \chapter*{License} \label{chap:license} \epigraph{Anything that prevents you from being friendly, a good neighbour, is a terror tactic.}{\textsc{Richard Stallman}} \arara\ is licensed under the \href{http://www.opensource.org/licenses/bsd-license.php}{New BSD License}. It is important to observe that the New BSD License has been verified as a GPL-compatible free software license by the \href{http://www.fsf.org/}{Free Software Foundation}, and has been vetted as an open source license by the \href{http://www.opensource.org/}{Open Source Initiative}. \vfill \begin{messagebox}{New BSD License}{araracolour}{\icinfo}{white} \footnotesize \includegraphics[scale=0.25]{logos/logo1.pdf} Copyright \textcopyright\ 2012--2018, Paulo Roberto Massa Cereda\\ All rights reserved. \vspace{1em} Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: \begin{itemize} \item Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. \item Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. \end{itemize} This software is provided by the copyright holders and contributors ``as is'' and any express or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed. In no event shall the copyright holder or contributors be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and on any theory of liability, whether in contract, strict liability, or tort (including negligence or otherwise) arising in any way out of the use of this software, even if advised of the possibility of such damage. \end{messagebox}
{ "alphanum_fraction": 0.795754717, "avg_line_length": 70.6666666667, "ext": "tex", "hexsha": "94d8a508193c16ab82850cadedd3204ada24f9c4", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "cb875d1ebc9a09bb6252c5562fd13a2aed6386f9", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "GHResearch/arara", "max_forks_repo_path": "docs/chapters/license.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "cb875d1ebc9a09bb6252c5562fd13a2aed6386f9", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "GHResearch/arara", "max_issues_repo_path": "docs/chapters/license.tex", "max_line_length": 757, "max_stars_count": null, "max_stars_repo_head_hexsha": "cb875d1ebc9a09bb6252c5562fd13a2aed6386f9", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "GHResearch/arara", "max_stars_repo_path": "docs/chapters/license.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 464, "size": 2120 }
\section{Resiliency and Survivability} \label{sec:survivability} Being a platform for contractual money, Ergo should also support long-term contracts for a period of at least an average person's lifetime. However, even young existing smart contract platforms are experiencing issues with performance degradation and adaptability to external conditions. This leads to a situation where a cryptocurrency depends on a small group of developers to provide a fixing hard-fork, without which it will not survive. For example, the Ethereum network was started with a Proof-of-Work consensus algorithm with a plan to switch to Proof-of-Stake in the near future. However, delays in the Proof-of-Stake development have led to several fixing hard-forks~\cite{ethDifficultyBomb}, and the community is still forced to rely on core developers promising to implement the next hard-fork. The first common survivability issue is that, in pursuit of popularity, developers tend to implement ad-hoc solutions without proper preliminary research and testing. Such solutions inevitably lead to bugs, which then lead to hasty bug fixes, then to fixes of those bug fixes, and so on, making the network unreliable and even less secure. A notable example is the IOTA cryptocurrency, which implemented various scaling solutions, including its own hash function and DAG structure, that allowed it to achieve high popularity and market capitalization. However, a detailed analysis of these solutions revealed multiple serious problems, including practical attacks that enabled token theft~\cite{heilmancryptanalysis, de2018break}. A subsequent hard-fork~\cite{IOTAReport} then fixed these problems by switching to the well-known SHA3 hash function, thereby confirming the uselessness of such innovations. Ergo's approach here is to use stable, well-tested solutions, even if they lead to slower short-term innovation. Most of the solutions used in Ergo are formalized in papers presented at peer-reviewed conferences~\cite{reyzin2017improving,meshkov2017short,chepurnoy2018systematic,chepurnoy2018self,chepurnoy2018checking,duong2018multi} and have been widely discussed in the community. A second problem that decentralization (and thus survivability) faces is the lack of secure trustless light clients. Ergo tries to fix this problem of blockchain technology without creating new ones. Since Ergo is a PoW blockchain, it easily allows extraction of a small header from the block content. This header alone permits validation of the work done in the block, and a headers-only chain is enough for best-chain selection and synchronization with the network. A headers-only chain, although much smaller than the full blockchain, still grows linearly with time. Recent research on light clients provides a way for them to synchronize with the network by downloading an even smaller amount of data, thereby unlocking the ability to join the network using untrusted low-end hardware such as mobile phones~\cite{kiayias2017non,luuflyclient}. Ergo uses an authenticated state (Section~\ref{sec:utxo}), and for transactions included in a block, a client may download a proof of their correctness. Thus, regardless of the blockchain size, a regular user with a mobile phone can join the network and start using Ergo with the same security guarantees as a full node.
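To illustrate how a light client can check a transaction against an authenticated state digest without holding the state itself, the sketch below verifies a simple Merkle-style membership proof. This is a generic illustration only: Ergo's actual authenticated data structure and proof format differ, and Python is used here purely for brevity.
\begin{verbatim}
# Generic illustration of membership-proof checking by a light client.
# Ergo's actual authenticated data structure and proof format differ.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_membership(leaf: bytes, proof, state_digest: bytes) -> bool:
    """Recompute the root from a leaf and its sibling path, then compare
    it with the authenticated state digest taken from a block header."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == state_digest

# A tiny two-leaf example so the check can be exercised end to end.
leaf_a, leaf_b = b"box-a", b"box-b"
root = h(h(leaf_a) + h(leaf_b))
proof_for_b = [(h(leaf_a), True)]   # sibling hash and its position
assert verify_membership(leaf_b, proof_for_b, root)
\end{verbatim}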
Readers may notice a third potential problem: although support for light clients solves the problem for Ergo users, it does not solve the problem for Ergo miners, who still need to keep the whole state for efficient transaction validation. In existing blockchain systems, users can put arbitrary data into this state. This data, which lasts forever, creates a lot of dust, and its size increases endlessly over time~\cite{perez2019another}. A large state size leads to serious security issues because, when the state does not fit in random-access memory, an adversary can generate transactions whose validation becomes very slow due to the required random access to the miner's storage. This can lead to DoS attacks such as the one on Ethereum in 2016~\cite{ethDos2016}. Moreover, the community's fear of such attacks, along with the problem of ``state bloat'' without any compensation to the miners or users holding the state, may have prevented scaling solutions that otherwise could have been implemented (such as larger block sizes). To prevent this, Ergo has a storage rent component: if an output remains in the state for 4 years without being consumed, a miner may charge a small fee for every byte kept in the state. This idea, which is similar to regular cloud storage services, was only proposed quite recently for cryptocurrencies~\cite{chepurnoy2017space} and has several important consequences. Firstly, it ensures that Ergo mining will always be stable, unlike Bitcoin and other PoW currencies, where mining may become unstable after emission is done~\cite{carlsten2016instability}. Secondly, growth of the state's size becomes controllable and predictable, thereby helping Ergo miners to manage their hardware requirements. Thirdly, by collecting storage fees from outdated boxes, miners can return coins to circulation and thus prevent the steady decrease of the circulating supply due to lost keys~\cite{wsj2018}. All these effects should support Ergo's long-term survivability, both technically and economically. A fourth vital challenge to survivability is that of changes in the external environment and demands placed on the protocol. A protocol should adapt to the ever-changing hardware infrastructure, new ideas to improve security or scalability that emerge over time, the evolution of use-cases, and so on. If all the rules are fixed without any ability to change them in a decentralized manner, even a simple constant change can lead to heated debates and community splits. For instance, discussion of the block-size limit in Bitcoin led to its splitting into several independent coins. In contrast, the Ergo protocol is self-amendable and is able to adapt to the changing environment. In Ergo, parameters like the block size can be changed on-the-fly via miner voting. At the beginning of each 1024-block voting epoch, a miner proposes changes to up to two parameters (such as an increase in the block size and a decrease in the storage fee factor). During the rest of the epoch, miners vote to approve or reject the changes. If a majority of votes within the epoch supports the change, the new values are written into the extension section of the first block of the next epoch, and the network starts using the updated values for block mining and validation. To absorb more fundamental changes, Ergo follows the approach of {\em soft-forkability}, which allows changing the protocol significantly while keeping old nodes operational.
At the beginning of an epoch, a miner can also propose a vote for a more fundamental change~(e.g., adding a new instruction to ErgoScript), describing the affected validation rules. Voting for such breaking changes continues for 32,768 blocks and requires at least $90\%$ ``Yes'' votes for the change to be accepted. Once accepted, a 32,768-block activation period starts, giving outdated nodes time to update their software. If a node's software is still not updated after the activation period, it skips the specified checks but continues to validate all the known rules. A list of previous soft-fork changes is recorded in the extension, allowing light nodes of any software version to join the network and catch up to the current validation rules. The combination of soft-forkability with the voting protocol allows changing almost all the parameters of the network, except the PoW rules that are responsible for the voting itself.
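As a rough illustration of the tallying arithmetic described above, the following sketch applies the two thresholds: a simple majority of the 1024-block epoch for ordinary parameter changes, and at least $90\%$ support over the 32,768-block voting period for breaking changes. The constants and structure are simplified assumptions and do not reproduce Ergo's reference implementation.
\begin{verbatim}
# Simplified illustration of the two voting thresholds; constants and
# structure are assumptions, not Ergo's reference implementation.
PARAMETER_EPOCH_BLOCKS = 1024        # ordinary parameter voting epoch
FOUNDATIONAL_EPOCH_BLOCKS = 32768    # voting period for breaking changes

def parameter_change_accepted(votes_for: int) -> bool:
    """A proposed parameter change needs a simple majority of the epoch."""
    return votes_for > PARAMETER_EPOCH_BLOCKS // 2

def foundational_change_accepted(votes_for: int) -> bool:
    """A breaking (soft-forkable) change needs at least 90% support."""
    return votes_for >= 0.9 * FOUNDATIONAL_EPOCH_BLOCKS

print(parameter_change_accepted(600))       # True: 600 of 1024 blocks voted yes
print(foundational_change_accepted(29000))  # False: below the 90% threshold
\end{verbatim}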
{ "alphanum_fraction": 0.8185103911, "avg_line_length": 117.3787878788, "ext": "tex", "hexsha": "702cd161d858d4b9a4090afe05534ae1ad41f27e", "lang": "TeX", "max_forks_count": 131, "max_forks_repo_forks_event_max_datetime": "2022-03-22T01:08:16.000Z", "max_forks_repo_forks_event_min_datetime": "2017-07-19T12:46:49.000Z", "max_forks_repo_head_hexsha": "9964f415526f491a4837774d80b59792e1e2b8bb", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "scasplte2/ergo", "max_forks_repo_path": "papers/whitepaper/survivability.tex", "max_issues_count": 886, "max_issues_repo_head_hexsha": "9964f415526f491a4837774d80b59792e1e2b8bb", "max_issues_repo_issues_event_max_datetime": "2022-03-31T10:21:25.000Z", "max_issues_repo_issues_event_min_datetime": "2017-07-20T21:59:30.000Z", "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "scasplte2/ergo", "max_issues_repo_path": "papers/whitepaper/survivability.tex", "max_line_length": 323, "max_stars_count": 424, "max_stars_repo_head_hexsha": "9964f415526f491a4837774d80b59792e1e2b8bb", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "scasplte2/ergo", "max_stars_repo_path": "papers/whitepaper/survivability.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-29T13:33:57.000Z", "max_stars_repo_stars_event_min_datetime": "2017-07-17T12:33:06.000Z", "num_tokens": 1584, "size": 7747 }
\section{Resources, Process, Work Product} The goal of the \gadf effort is to enable efficient collaboration on gamma-ray data formats and codes. To this end, we have set up the following resources that are open to anyone interested in the topic: \begin{itemize} \item{} A mailing list (currently 75 members, including people from all major gamma-ray collaborations) with this official description: ``This group is organized for the discussion of software and data formats for the gamma-ray astronomy community. If you are interested in open and common data and software formats for space- and ground-based instruments you are encouraged to join.'': \\ \ogralist \item{}A Github organisation for online collaboration on data format specifications via issues and pull requests:\\ \gadfgithub \item{}Our main work product, the data format specifications, is available online at:\\ \gadfrtd \item{}We hold monthly teleconferences and plan to hold roughly bi-yearly face-to-face meetings. The first one (Meudon, France, in April 2016) was focused on IACT DL3; future meetings will be a bit broader in scope: \ogrameudon \end{itemize} Our main work product will be a set of data format specifications for gamma-ray data. Each format usually specifies the names and semantics of data and metadata (a.k.a. ``header'') fields. The scope, status, ongoing discussions, and plans for the data format specifications are presented in the next section. The development of open-source tools and libraries, as well as the export of existing gamma-ray data to these proposed formats, is highly encouraged. However, that work is mainly done by members of the collaborations and software projects mentioned in Figure~\ref{fig:purpose}, who then make suggestions for additions or improvements to the existing specifications. Currently, the process of specification writing is informal, and the specifications written so far should be seen as proposals, not final standards. We are following the ``release early and often'' philosophy, hoping for feedback and contributions from the larger gamma-ray astronomy community. This approach was motivated by the lack of progress in the past five years on IACT DL3 formats. Although work has begun within CTA on the development of a DL3 format, CTA does not produce DL3 data yet. Meanwhile, current IACTs have started to export their data to FITS format and analyze it with the current science tools, and many slightly different ways to store the same information in FITS files have appeared. Our hope is that this more open format development, making adoption and contributions easy (sending a comment to the mailing list, or making an issue or pull request on Github), will help accelerate the process. Achieving format stability and dealing with ``requests for enhancement'' after a first stable version of the format specifications is released will be discussed at future meetings.
{ "alphanum_fraction": 0.8024954352, "avg_line_length": 142.8695652174, "ext": "tex", "hexsha": "0940ac3bce9bf1ee929f8aea5040373b9890ffe1", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "184f8ff3513a0cb13697e2d12305128cfebc4e1e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "open-gamma-ray-astro/gamma2016-poster", "max_forks_repo_path": "proceeding/text/process.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "184f8ff3513a0cb13697e2d12305128cfebc4e1e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "open-gamma-ray-astro/gamma2016-poster", "max_issues_repo_path": "proceeding/text/process.tex", "max_line_length": 1107, "max_stars_count": null, "max_stars_repo_head_hexsha": "184f8ff3513a0cb13697e2d12305128cfebc4e1e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "open-gamma-ray-astro/gamma2016-poster", "max_stars_repo_path": "proceeding/text/process.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 689, "size": 3286 }
\documentclass[10pt,letterpaper]{article} \providecommand{\main}{.} \usepackage{cogsci} \usepackage{pslatex} \usepackage{graphicx} \usepackage{multicol} \usepackage{blindtext} \usepackage{tablefootnote} \usepackage{hyperref} \usepackage{tipa} \usepackage{booktabs} \usepackage[table,usenames,dvipsnames]{xcolor} \usepackage[style=apa]{biblatex} \DeclareNameAlias{sortname}{family-given} \addbibresource{qp1.bib} \usepackage{gb4e} \graphicspath{{\main/figures/}{figures}} \title{`Sally the Congressperson': The Role of Individual Ideology on the Processing and Production of English Gender-Neutral Role Nouns} \author{{\large \bf Brandon Papineau ([email protected])} \\\\ {\large \bf Robert J. Podesva ([email protected])} \\\\ {\large \bf Judith Degen ([email protected])} \\ Department of Linguistics, Margaret Jacks Hall, Bldg. 460\\ Stanford, CA, 94305 USA} \begin{document} \maketitle \begin{abstract} Language and gender are inextricably linked; we make reference to the real-world gender identities of the people around us every day. Moreover, psycholinguistic investigation has demonstrated that we make assumptions about individuals' gender based on the language used to describe them, and that the biases underpinning these assumptions in turn influence the ways in which we describe and refer to others. What has been left underinvestigated is the role that individual, rather than societally-held, ideologies about gender play in this aspect of the linguistic system. In two web-based studies, we investigate the processing and production of gender-neutral role nouns such as \textit{congressperson} as a function of individual gender ideology and political alignment. Our results indicate an asymmetry between the processing and production of gender-neutral role nouns: while individuals' gender ideologies do not modulate the processing of these terms, gender ideologies do interact with political party in production tasks, such that Democrat-identified participants with more progressive gender ideologies produce more gender-neutral role nouns. We appeal to the notion of \textit{indexicality}, arguing that Democrats draw on these forms as semiotic resources for constructing progressive personae in interaction, while Republicans and political non-partisans do not.\\ \linebreak \textbf{Keywords:} language and gender; language processing; language production; language and politics; morphology \end{abstract} \section{Introduction} English contains a subset of lexical entries which identify the semantic or real-world gender identity of the individuals they pick out, consisting primarily of pronouns, kinship terms, and a limited set of other role nouns. While generally common in discourse, these terms are often ideologically, socially, and politically charged or contested. Consider the famous and contemporary case of English pronouns. While psycholinguistic investigations have indicated that there is a processing advantage found in singular \textit{they} when it is paired with gender-underspecified referents \parencite{foertsch1997search,doherty2017gender,ackerman}, its usage continues to be debated on the battlefields of style guides, op-eds, and popular discourse, especially as it relates to its use as a pronoun used by non-binary or gender non-conforming individuals. \par More conventionally-gendered pronouns have also been the subject of psycholinguistic analysis as they relate to real-world referent gender. 
\textcite{von2020implicit} found that, in the context of the United States 2016 presidential election, participant beliefs about whether or not Hillary Clinton would win the presidency had no effect on the production of \textit{she} as a coreferent pronoun with \textit{the future president}, and that \textit{she} induced a processing penalty when read in a context in which it was coreferent with \textit{the future president}. In fact, it was coreferential \textit{they} which increased in produced frequency as belief in Clinton's victory increased. However, in the context of the 2017 British General Election, \textit{she} was produced more frequently than \textit{he} when coreferential with \textit{the future Prime Minister}, when the incumbent Prime Minister was female (Theresa May). On the other hand, there was no such processing bonus for \textit{she} over \textit{he} until after the results of the election, indicating lingering sexist beliefs in the realm of language processing. These findings are reminiscent of previous work examining the relationship between societal expectations and reading times on gender-anomalous coreferents. For instance, coreferential pronouns are harder to process when they do not align with the stereotypical gender of the role noun in question, such as \textit{he} for \textit{nurse} or \textit{she} for \textit{electrician} \parencite{foertsch1997search,duffy2004violating}. These findings, taken together, suggest that our biases about who performs a particular social role inform the ways we produce and process the pronouns which refer to them.\par Beyond the realm of pronominal reference, \textcite{pozniak2021failures} found that respondents who believed that female candidates would win in the 2020 Parisian and Marseille municipal elections were more likely to produce feminine-marked titles (as well as pronouns) to refer to the future politicians, but that masculine-marked forms were still dominant in both locales. Corpus data similarly indicates that referent gender indication is more prevalent when the gender of the referent runs counter to stereotypical assumptions. For example, the Corpus of Contemporary American English \parencite{coca} contains 165 tokens of \textit{male nurse}, compared to 53 of \textit{female nurse}. These biases are in turn learned by large language models trained on natural language corpora, raising concerns about the perpetuation of societal biases in the realm of automation and language \parencite{caliskan2017semantics,bender2021dangers,sutton2018biased}. Such findings further underscore the role of societally-held beliefs about the genders of particular social roles play in the language we use to describe those who fill these roles.\par While the aforementioned studies have investigated the role of group and societal-level biases and ideologies (interactional systems of biases and expectations about the world) in the processing and production of gendered language, this high-level focus leaves room for a more granular investigation of the role of \textit{individual} ideologies on this facet of the linguistic system. We can couch this question in the notion of surprisal, which measures processing difficulty as proportional to the relative surprisal (1) of a particular word occurring given previous input (\textit{w}\textsubscript{1},...,\textit{w}\textsubscript{\textit{i}-1}) and any extralinguistic or extrasentential content (\textit{C}) \parencite{levy2008expectation}. 
\begin{exe} \ex processing difficulty $\propto$ $-$log\textit{P}(\textit{w}\textsubscript{\textit{i}}$|$\textit{w}\textsubscript{1},...,\textit{w}\textsubscript{\textit{i}-1},\textit{C}) \end{exe} In the aforementioned examples, we can explain the processing difficulties incurred by co-referring pronouns that do not concord with stereotypical associations of particular occupations by positing that these pronouns are relatively more surprising than a stereotype-concordant pronoun would be, as a result of prior beliefs about gender roles. However, it is reasonable to assume that not every individual will find these co-referential terms equally surprising. For example, we might expect that an individual with a particularly open-minded attitude towards gender roles will be socially progressive and consume media that reflects these values; as a result, they may be exposed to more gender-neutral language, and in turn be less surprised by its use in context. Alternatively, the ideologies themselves might be at play in the calculation of difficulty, in the form of extrasentential context. \par While it may be difficult to tease apart exactly where ideology is implemented in processing calculations, we set out to investigate the \textit{extent} to which such individually-held ideologies might influence the processing and production of gender-neutral language, and how they might manifest. We conducted two web-based experiments centered around the domain of `role nouns', which describe individuals' social and professional positions in the world \parencite{misersky2014norms}. These include both compound forms (n=14), which make a ternary distinction between male, female, and gender-neutral forms, and affixed forms (n=6), which make only a binary distinction. Examples of these forms are provided in (2) and (3), respectively. \begin{exe} \ex \textbf{Compound ternary distinctions} \begin{xlist} \ex \textit{congressman, congresswoman, congressperson}; \textit{policeman, policewoman, police officer} \end{xlist} \ex \textbf{Affixed binary distinctions} \begin{xlist} \ex \textit{actor, actress}; \textit{villain, villainess} \end{xlist} \end{exe} We report on the experimental design and results of two studies: Experiment 1 examines whether the processing of gender-neutral nouns is modulated by individuals' gender ideology, while Experiment 2 examines whether this same ideology affects the production of these terms. We conclude with a discussion of how these findings contribute to our understanding of the gender-language relationship.\footnote{It is important to note that many of the assumptions in our designs, such as the decision to use `male' and `female' names, implicitly endorse or perpetuate the notion of gender as a binary. We would like to highlight that these decisions in no way reflect the beliefs or values of the authors.}\par \section{Experiment One: Self-Paced Reading} In an experiment similar to the processing experiment of \textcite{von2020implicit}, our first investigation concerned the role that individuals' ideologies about gender play in their processing of gender-neutral role nouns.
\subsection{Methods} \subsubsection{Participants} 298 participants were recruited through the online recruitment platform \textcite{prolific}\footnote{200 participants were initially recruited, and an additional 98 Republican participants were subsequently recruited after the original sample revealed a heavy skew towards Democrat-identifying participants.}, excluding any participants who failed to correctly respond to at least 85\% of attention check questions (n=19). All participants additionally self-identified as L1 English speakers and as having been born in and currently residing in the United States. None of the participants had participated in the pilot study or in any other study related to the present project. The demographic breakdown of the participants whose data was included in Experiment One is provided in \hyperref[exp1-sample-table]{Table 1}.\par \begin{table}[!ht] \begin{center} \caption{Experiment One Participant Demographics} \label{exp1-sample-table} \vskip 0.12in \begin{tabular}{llll} \hline & Democrat & Republican & Non-Partisan\tablefootnote{In both studies, `Non-Partisan' participants were recruited as either Democrats or Republicans, but reported a centrist identity in the post-experimental questionnaire.} \\ \hline Female & 64 & 41 & 34 \\ Male & 46 & 59 & 25 \\ Other & 3 & 0 & 0 \\ Decline to state & 0 & 3 & 1 \\ \hline \end{tabular} \end{center} \end{table} \subsubsection{Stimuli \& Procedure} In a web-based implementation of a self-paced reading task, participants saw a series of 20 sentence sets of the form ``[NAME] is a(n) [TITLE] from [STATE]. S/he likes [ACTIVITY]", where ``[TITLE]" stands in for the critical item, a gendered role noun. The states and activities were randomized at the stimulus creation stage so that they remained constant for all participants. On the other hand, names varied such that each participant saw 10 vignettes with male-coded names and 10 with female-coded names. Role nouns were then distributed so that 5 of the female names co-occurred with female-marked forms and the other 5 with neutral forms; the same was true for the male names, but with male-marked forms\footnote{We intentionally avoided gender-incongruent forms such as `David is a congresswoman', for fear that doing so would bring too much attention to the research question regarding gender.}. The resulting combinations are presented in (4) through (7); participants saw each of these combinations five times, followed by activity preferences, for a total of twenty trials. Each name and title occurred only once, so that no participant saw both `congressman' and `congresswoman' or `congressperson', for example. \begin{exe} \ex \textbf{\textit{Female congruent }} \begin{xlist} \ex Sally is a congresswoman from Kansas. \end{xlist} \ex \textbf{\textit{Female neutral}} \begin{xlist} \ex Sally is a congressperson from Kansas. \end{xlist} \ex \textbf{\textit{Male congruent }} \begin{xlist} \ex David is a congressman from Kansas. \end{xlist} \ex \textbf{\textit{Male neutral}} \begin{xlist} \ex David is a congressperson from Kansas. \end{xlist} \end{exe} In order to attain sufficiently gendered names, the twenty most popular male and female names were selected from the lists of most popular names for boys and girls in 1998 according to the United States Social Security \textcite{socialsecurity}. Names which appeared within the top 100 entries on both lists (e.g. Taylor, Ryan) were excluded.
\par Participants proceeded through these sentences one word at a time by pressing the spacebar. This resulted in the previous word disappearing and the subsequent one being revealed on the screen; measurements of reading time were taken for each word in the sentence as a proxy for processing difficulty or effort, as has been standardized in the field \parencite{forster2009maze}. At the end of each vignette, participants were asked about properties of the character described, providing a `yes' or `no' answer to questions about their home state (\textit{Is Sally from Kansas?}) or about their preferred activities (\textit{Does David enjoy skiing?}); these questions served both to distract from the principal question under investigation, and as attention checks, since each of these questions had a vignette-internally correct answer. Participants were provided with an example that did not mark gender before proceeding to the main set of 20 vignettes. \subsubsection{Post-Experimental Survey} Upon completing the reading task, participants proceeded to the post-experimental survey. \par In order to assess the participants' ideologies towards gender, we employ the Social Roles Questionnaire developed by \textcite{baber2006social}. This survey consists of 13 questions which are designed to elicit both implicit and explicit ideologies about gender, including the notions of gender as an immutable fact vs gender as a social construct (what Baber and Tucker term `gender transcendence'), as well as about the societal roles performed by the (binary) genders (`gender linking').\par Each of the 13 questionnaire items was presented alongside a sliding scale from `strongly disagree' to `strongly agree', which corresponded to numerical values of 0 and 100, respectively. The questions related to `gender linking' were inversely coded and then converted to the same scaling as the `gender transcendence' subscale. Participants were then assigned a gender ideology score from 0 to 100 by taking the mean of their individual responses; the closer to 0 a participant is, the more open-minded their approach to gender, and the closer to 100, the more conservative or traditional their view of gender. These opposite poles of the spectrum were termed `gender progressive' and `gender conservative', respectively.\par Finally, participants filled out an optional post-experimental demographic survey, including questions about their own gender, political affiliations, and age. Participants who declined to indicate their age or political orientation were excluded from analysis. \subsubsection{Unigram Surprisal} In order to account for effects of word frequency and surprisal, frequency values for each of the twenty critical items' neutral forms were taken from the `Spoken' (news media) section of COCA \parencite{coca}. These were then converted into unigram surprisal values by taking the negative log of their relative probabilities in the corpus. The decision to use unigram, contextless surprisal values was due to the difficulty in using large language models to obtain surprisal values for very infrequent terms, such as \textit{foreperson}. \subsection{Results} \subsubsection{Exclusions} In addition to the aforementioned participant exclusions, 238 trials were excluded for being more than 2.5 standard deviations away from that lexical item's mean reading time, resulting in a final count of 5,342 observations for analysis (4.2\% exclusion rate). 
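A minimal sketch of the two preprocessing steps just described, namely converting corpus counts to unigram surprisal and trimming trials more than 2.5 standard deviations from each lexeme's mean reading time, is given below. The column names, toy values, and use of Python are illustrative assumptions and do not describe the analysis scripts used for this study.
\begin{verbatim}
# Illustrative sketch; column names, toy values and the pandas-based
# workflow are assumptions, not the analysis scripts used here.
import numpy as np
import pandas as pd

trials = pd.DataFrame({
    "lexeme": ["congressperson", "congressperson",
               "foreperson", "foreperson"],
    "rt":     [420.0, 1650.0, 510.0, 530.0],  # reading times in ms
    "count":  [120, 120, 15, 15],             # corpus count per lexeme
})
corpus_size = 1_000_000                       # hypothetical corpus size

# Unigram surprisal: negative log of the word's relative frequency.
trials["surprisal"] = -np.log(trials["count"] / corpus_size)

# Exclude trials more than 2.5 SDs from that lexeme's mean reading time.
by_lexeme = trials.groupby("lexeme")["rt"]
z = (trials["rt"] - by_lexeme.transform("mean")) / by_lexeme.transform("std")
kept = trials[z.abs() <= 2.5]
\end{verbatim}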
\subsubsection{Model Structure}
We fit a linear mixed-effects model which predicted length-residualized reading time from fixed effects of political party (ternary, reference level: ``Democrat"), referent gender (binary, reference level: ``female"), participant age, gender ideology, and unigram surprisal. Random intercepts were included for participant and lexeme. Interactions were included between: ideology and age; surprisal and party; age and surprisal; age and party; ideology and party; and the three-way interaction between age, surprisal, and party. These interactions were included as a result of initial investigations which revealed a significant modulation of frequency effects by age (Figure 3).
\subsubsection{Gender Ideology}
In examining the role of gender ideology in the processing of gender-neutral terms, we find no effect of gender ideology for Democrats ($\beta$ = -0.00, \textit{SE} = 0.00, \textit{t} = -0.11, \textit{p} $>$ 0.5), or in the higher-level interactions for Republicans ($\beta$ = -0.00, \textit{SE} = 0.00, \textit{t} = -1.51, \textit{p} $>$ 0.1) or Non-Partisans ($\beta$ = -0.00, \textit{SE} = 0.00, \textit{t} = -1.34, \textit{p} $>$ 0.1). This suggests that prior beliefs about gender and its binary social roles do not modulate the processing of gender-neutral language. \begin{figure}[h] \centering \includegraphics[scale=0.115]{sprt-neutral-ideo.png} \caption{Residualised reading time on neutral forms (e.g. \textit{congressperson}) as a function of gender ideology} \end{figure}
\subsubsection{Political Affiliation}
At the party level, we find that Democrats are significantly faster than their Non-Partisan counterparts in their reading of gender-neutral terms ($\beta$ = 0.04, \textit{SE} = 0.02, \textit{t} = 2.354, \textit{p} = 0.02), but this difference is not found between Democrats and Republicans ($\beta$ = 0.00, \textit{SE} = 0.02, \textit{t} = 0.83, \textit{p} $>$ 0.1). However, we observe the same difference between Democrats and Non-Partisans in the sentence prefixes leading up to the critical items, as shown in the first three points of Figure 2. As a result, we interpret this as a spurious result unrelated to ideology and its effect on processing times. \begin{figure}[h] \centering \includegraphics[scale=0.115]{sprt-neutral-all-regions-poli-party.png} \caption{Residualised reading time by sentence location. ``[TITLE]" indicates the location of the critical items.} \end{figure}
\subsubsection{Unigram Surprisal}
Finally, we find only a marginal effect of word surprisal ($\beta$ = -0.02, \textit{SE} = 0.01, \textit{t} = -1.825, \textit{p} = 0.07). Despite this, we do find that there is a significant three-way interaction in the Democratic party ($\beta$ = 0.00, \textit{SE} = 0.00, \textit{t} = 2.378, \textit{p} = 0.02) and that Republicans do not significantly differ from them ($\beta$ = -0.00, \textit{SE} = 0.0, \textit{t} = -0.78, \textit{p} = 0.43), such that older participants show a greater degree of sensitivity to word surprisal in the expected direction: more surprising words are processed more slowly. This effect is weaker in younger Republicans, however, and entirely absent in younger Democrats (Figure 3). This may indicate that the frequency values obtained from COCA are not representative of the linguistic input experienced by younger Americans. \begin{figure}[h!] \centering \includegraphics[scale=0.115]{proc-freq-party.png} \caption{Residualised reading time on critical items by unigram surprisal. Each point indicates a lexeme.
Age was demarcated at 40 years old.} \end{figure}
\section{Experiment Two: Forced-Choice Production}
Having investigated the potential link between gender ideology and the processing of neutral role nouns, we find that individual gender ideology does not significantly impact the processing of gender-neutral role nouns, but that there are interactions between participant party and age as they relate to unigram word surprisal. We turn now to the issue of ideology and the production of such terms, in an effort to further our understanding of the gender-language relationship by identifying the potential (a)symmetry between processing and production. In a forced-choice task, participants selected the form of the lexeme they felt best completed the vignettes from Experiment One.
\subsection{Methods}
\subsubsection{Participants}
301 participants were recruited using Prolific, with the same criteria as Experiment One\footnote{100 Democrats and 100 Republicans were recruited initially, in order to maintain a political balance. An additional 100 male-identifying participants were subsequently recruited due to a significant gender imbalance in the initial participant population (13.4\% male-identifying participants in the original population), as a result of an influx of female participants after Prolific went viral on social media app TikTok \parencite{charalambides2021}.}. Participants who failed to correctly respond to 80\% of attention checks were excluded (n=25). The final gender-political distribution is provided in \hyperref[exp2-sample-table]{Table 2}.\par
\begin{table}[!ht] \begin{center} \caption{Experiment Two Participant Demographics} \label{exp2-sample-table} \vskip 0.12in \begin{tabular}{llll} \hline & Democrat & Republican & Non-Partisan \\ \hline Female & 82 & 62 & 25 \\ Male & 42 & 46 & 10 \\ Other & 4 & 0 & 0 \\ Decline to state & 1 & 0 & 1 \\ \hline \end{tabular} \end{center} \end{table}
\subsubsection{Stimuli \& Procedure}
All items in the experiment consisted of a complete sentence missing a single word, using the same stimuli sentence frames and critical items from Experiment One. Participants were then provided with either two or three words which could complete the sentence by filling in the blank, and were asked to select the word which best completed it. There were a total of 80 trials, with 20 critical items and 60 filler items.\par Filler items took one of two forms: semantic fillers and grammatical fillers. Semantic fillers had no prescriptively correct answer, as in (8)-(10). \begin{exe} \ex That's the cutest (horse/Lusitano/equine) I have ever seen! \ex The (customer/parent/child) is always right. \ex Revati is a (writer/journalist/author) from India. \end{exe} Grammatical fillers, on the other hand, had prescriptively correct answers, and employed grammatical processes such as demonstrative selection (11), verb agreement (12), or preposition selection (13), among others. These items served a secondary purpose as attention check questions. \begin{exe} \ex She is typing on (\textbf{the}/these/those) computer. \ex Katherine (\textbf{sang}/song/sing) that song beautifully. \ex They are eating their soup (between/\textbf{with}/at) a spoon. \end{exe} All response possibilities, regardless of type (filler or critical), were shuffled between participants. Similarly, all 80 trials were randomized between participants, after which they moved to the post-experimental phase of the study.
\subsubsection{Post-Experiment Questionnaire}
All participants completed the same post-experiment questionnaire as that of Experiment One.
\subsubsection{Neutral Probability}
We calculated a probability value for each of the neutral terms given its co-occurrence with a particular name gender. Because participants were presented with both gendered and gender-neutral options, traditional surprisal values are insufficient to capture the probability of a particular form being selected. As a result, probabilities were calculated as the log proportion of neutral forms over the competing gendered form, given a particular gendered name, with words per million values taken from the `spoken' portion of COCA \parencite{coca} (14). For example, in the sentence `Sally is a congress[person/woman/man]', the probability of `congressperson' is calculated as the words per million occurrences of `congressperson' divided by the same metric for `congresswoman'. \begin{exe} \ex probability = log($\frac{\textrm{neutral wpm}}{\textrm{gendered wpm}}$) \end{exe}
\subsection{Results}
\begin{table*}[h!] \centering \caption{Model outputs for each fixed effect (rows) for each of the political macrocategories.} \vskip 0.12in \begin{tabular}{l r r l r r l r r l }
% \begin{tabular}{l {p{2cm} r p{2cm} r p{2cm} l p{2cm} r p{2cm} r p{2cm} l p{2cm} r p{2cm} r p{2cm} l }
\toprule & \multicolumn{3}{c}{Democrats} & \multicolumn{3}{c}{Non-Partisans} & \multicolumn{3}{c}{Republicans} \\ \cmidrule(r){2-4} \cmidrule(r){5-7} \cmidrule(r){8-10} & \multicolumn{1}{c}{$\beta$} & \multicolumn{1}{c}{SE} & \multicolumn{1}{c}{p} & \multicolumn{1}{c}{$\beta$} & \multicolumn{1}{c}{SE} & \multicolumn{1}{c}{p} & \multicolumn{1}{c}{$\beta$} & \multicolumn{1}{c}{SE} & \multicolumn{1}{c}{p}\\ \midrule trial gender & 0.862 & 0.124 & \cellcolor{lightgray} $<$0.001 & 1.041 & 0.223 & \cellcolor{lightgray} $<$0.001 & 1.272 & 0.143 & \cellcolor{lightgray} $<$0.001\\ ideology & -0.034 & 0.007 & \cellcolor{lightgray} $<$0.001 & -0.005 & 0.015 & 0.714 & 0.001 & 0.005 & 0.886 \\ neutral probability & 9.234 & 2.222 & \cellcolor{lightgray} $<$0.001 & 15.242 & 4.623 & \cellcolor{lightgray} $<$0.001 & 14.596 & 2.315 & \cellcolor{lightgray} $<$0.001\\ \bottomrule \end{tabular}
%\caption{lmer(suspectconvictionJustified ~ generation * condition + (1|storyreproduction), data=dfmodel); high correlation of fixed effects}
\label{tab:exp2results} \end{table*}
\subsubsection{Model Structure}
For each of the political parties, we fit a generalized linear mixed effects model that predicted neutral responses (binary, reference level: ``gendered") as a function of the interaction between gender ideology (centered) and referent gender (binary, centered, scaled, reference level: ``male"), with an additional main effect of neutral probability. We also included random intercepts for participant and lexical item. None of the interactions reached significance; the outputs for the fixed effects are provided in Table 3.
\subsubsection{Exclusions}
An additional 241 responses were excluded from model analysis for being incongruent with the names that appeared in the vignettes, such as `David is a congresswoman' or `Sally is a congressman'. These responses are, however, included in Figure 4.
\subsubsection{Gender Ideology}
We did not observe a main effect of gender ideology on the proportion of gender-neutral responses selected across all three political macrocategories (Table 3, Row 2).
Rather, only Democrats show an effect of this predictor, such that more gender-progressive Democrats produced higher proportions of gender-neutral role nouns than their less progressive counterparts. Republicans and Non-Partisans show no such modulation by gender ideology. This indicates that Democrats have recruited gender-neutral role nouns as a semiotic resource with which to construct progressive personae. \par We additionally find that Democrats have a higher base production rate of gender-neutral role nouns than their Non-Partisan or Republican counterparts. While Democrats selected the gender-neutral forms 59.6\% of the time, Republicans selected them only 45.1\% of the time. The political Non-Partisans performed in the middle, selecting the neutral forms 53\% of the time. This underscores the use of gender-neutral language as a marker of progressive gender ideology. \begin{figure}[h] \centering \includegraphics[scale=0.12]{prod-3x2x3.png} \caption{Proportion of responses by gender produced in Experiment Two, according to gender of the name in the stimulus sentence (x-axis facet) and participant political alignment (y-axis facet)} \end{figure}
\subsubsection{Referent Gender}
With regard to trial gender, or the gender of the name presented in the vignette, we observed a main effect on production rates of gender-neutral titles: participants of all three political macrocategories were more likely to produce gender-neutral forms when forced to pick a role title that coreferred with a male name. Across all observations, gender-neutral forms were produced 57\% of the time with male names, compared to only 48.7\% of the time with female names. These trends are presented in Row 1 of Table 3, and shown in the cross-gender differences of Figure 4. Possible reasons for this are discussed below.
\subsubsection{Neutral Probability}
Finally, we observe a main effect of probability in the expected direction, such that a higher log probability of the neutral occurrence predicts a neutral response selection. This effect was found across all three political macrocategories, as seen in the third row of Table 3.
\section{General Discussion}
In our processing study, we observed no significant effect of gender ideology on the processing of gender-neutral role titles when they coreferred with gendered names. This is reminiscent of the findings of \textcite{von2020implicit}, wherein presidentially-coreferential \textit{she} incurred a significant processing penalty despite societal expectations that Hillary Clinton would win the 2016 election. Our data similarly indicate that individually-held beliefs about gender do not modulate the processing of gender-neutral role nouns. However, we do see a difference in processing as a function of age and word surprisal, such that young participants show less sensitivity to effects of word frequency than do older participants, potentially indicating differences in exposure to these terms in media.\par However, when forced to make a gendered selection of a role noun which corefers with a gendered name, we observe that gender-progressive Democrats are more likely to select the gender-neutral version than their more conservative counterparts. This is true both group-internally (i.e. progressive Democrats use neutral terms more than conservative Democrats) and group-externally (i.e. Democrats use more neutral terms than Republicans or Non-Partisans).
\par To explain this discrepancy, we argue that gender-neutral forms of morphologically-gendered items are a semiotic resource upon which users of English can draw to \textit{index} \parencite{eckert2008variation} their relative progressiveness with regard to gender. As a result, Democrats (who have generally higher scores on our scale of gender-progressiveness than Republicans) are more likely to use gender-neutral compounds in their creation of gender-progressive personae. On the other hand, Republicans are less likely to use these forms, as doing so would index an ideology about gender that they may not have. \par External commentary indicates that the production of gender-neutral forms has come to index a stance of gender progressiveness, and that such indexical associations form part of a larger language ideology (in the sense of Gal \& Irvine 1995) wherein differences in gendered language are mapped onto social categories such as `Democrats' or `Republicans'. For example, former Acting Director of National Intelligence Richard Grenell tweeted an image of a cookie with an accompanying display-case card that read ``Gingerbread Person". Alongside this was Grenell's caption: `Stop voting for Democrats.' \parencite{Grenell}. Grenell explicitly draws on language ideology to implicitly assert that elected Democrats are responsible for the proliferation of politically-correct language regarding gender. \par We also observe that male names are more likely to elicit neutral role titles in the production task than female names. This may be because the lexical items under investigation are overwhelmingly male-associated, with only \textit{flight attendant} being rated as `likely a woman' in our norming study. As a result, these roles being filled by women are societally `marked', in that they run counter to our expectations. Participants may then be more likely to pick the `marked' form of the lexeme, which in most cases is the female form, either by morphology (\textit{actor} vs. \textit{actr-ess}, where \textit{actress} is morphologically more complex) or by frequency. While none of the participants in our norming study reported being unfamiliar with any of the female terms, terms such as \textit{firewoman} and \textit{villainess} are rare in the corpus, if they occur at all (\textit{firewoman}, for example, does not). As such, it may be the case that participants are selecting marked linguistic forms to pick out marked real-world referents.\par Alternatively, as is apparent in Figure 4, the answer may partially lie in the fact that respondents are willing to assign female referents masculine titles at a much greater rate than they are to do the opposite. While such productions were not included in the analysis, their presence as options in the task at hand may have inadvertently skewed the proportions of productions. Future work is planned to investigate these `gender incongruent' productions.\par In sum, we believe that these results further our understanding of the relationship between gender and language by highlighting an incongruity in the processing and production of gender-neutral role nouns. Moreover, this incongruity is found at the individual level, calling for a greater degree of granularity of our investigations of biases in the linguistic system. The examination of such biases is critical in the development of fair and inclusive language, and we hope that the work herein will encourage researchers to pursue such work with the individual and their experiences in mind. 
\newpage \nocite{gal1995boundaries} \printbibliography \end{document}
\documentclass[12pt,a4paper]{article}
\usepackage[T1]{fontenc}
\usepackage[charter]{mathdesign}
\usepackage{amsmath,amsthm,enumitem,titlesec,xcolor}
\usepackage{microtype}
\usepackage[a4paper,margin=25mm]{geometry}
\usepackage[unicode]{hyperref}
\hypersetup{ hidelinks, pdftitle={Distributed Algorithms}, pdfauthor={Jukka Suomela}, }
\definecolor{titlecolor}{HTML}{0088cc}
\definecolor{hlcolor}{HTML}{f26924}
\newcommand{\q}[2]{\paragraph{\mbox{Question #1: }#2.}}
\newcommand{\sep}{{\centering \raisebox{-3mm}[0mm][0mm]{$*\quad*\quad*$}\par}}
\newcommand{\hl}[1]{\textbf{\emph{#1}}}
\newcommand{\cemph}[1]{\textcolor{hlcolor}{\textbf{\emph{\boldmath #1}}}}
\DeclareMathOperator{\diam}{diam}
\setitemize{noitemsep,leftmargin=3ex}
\titleformat{\paragraph}[runin] {\normalfont\normalsize\bfseries\color{titlecolor}}{\theparagraph}{1em}{}
\begin{document}
\noindent \emph{CS-E4510 Distributed Algorithms / Jukka Suomela\\ exam, 16 December 2021}
\paragraph{Instructions.} There are three questions; please \cemph{try to answer something in each of them}. If you cannot solve a problem entirely, please at least explain what you tried and what went wrong. Do not spend too much time on one problem; the problems are not listed in order of difficulty and they do not depend on each other. All questions refer to problems that we studied in the previous exam; you can find the old exam here: \begin{center} \url{https://jukkasuomela.fi/da2020/exam-2021-10-27.pdf} \end{center} You are free to look at any source material (this includes lecture notes, textbooks, and anything you can find with Google), but you are not allowed to collaborate with anyone else or ask for anyone's help (this includes collaboration with other students and asking for help in online forums). You are free to use any results from the lecture notes directly without repeating the details. Please note that we are looking for mathematical proofs here. The proof can be brief and a bit sketchy, but the proof idea has to be solid. Please give enough details so that a friendly, cooperative reader can understand your proof idea correctly and see why it makes sense. Illustrations are probably going to be very helpful.
\q{1}{PN} Recall question 1 in the previous exam. There you were allowed to label edges with arbitrary \cemph{integers}, and you showed that the problem was solvable with a deterministic algorithm in the PN model. Now let us make the problem slightly more challenging: the edge labels must be integers from the set \cemph{$\{-1, 0, +1\}$}; everything else remains the same (adjacent edges have different labels, and the sum of the labels is $0$). Prove that the new problem \hl{cannot} be solved with any deterministic algorithm in the PN model. \medskip \noindent\hl{Hint:} You are expected to use an argument based on covering maps.
\q{2}{LOCAL} Recall question 2 in the previous exam. There we specified a graph problem, and you designed an algorithm that solves the problem in \cemph{$o(n)$} rounds. Prove that the same problem \hl{cannot} be solved with any deterministic algorithm in the LOCAL model in \cemph{$O(1)$} rounds.
\q{3}{CONGEST} Recall question 3 in the previous exam. There we specified a graph problem, and you designed a deterministic CONGEST model algorithm that solves the problem in \cemph{$O(\diam(G))$} rounds. Prove that the same problem \hl{cannot} be solved with any deterministic algorithm in the CONGEST model in \cemph{$0$} rounds (i.e., without any communication).
Prove that this holds even if the unique identifiers are numbers from $\{1,2,\dotsc,n^2\}$, and the nodes get the value of $n$ as input. \end{document}
\filetitle{isempty}{True if system priors object is empty}{systempriors/isempty} \paragraph{Syntax}\label{syntax} \begin{verbatim} Flag = isempty(S) \end{verbatim} \paragraph{Input arguments}\label{input-arguments} \begin{itemize} \itemsep1pt\parskip0pt\parsep0pt \item \texttt{S} {[} systempriors {]} - System priors, \href{systempriors/Contents}{\texttt{systempriors}}, object. \end{itemize} \paragraph{Output arguments}\label{output-arguments} \begin{itemize} \itemsep1pt\parskip0pt\parsep0pt \item \texttt{Flag} {[} true \textbar{} false {]} - True if the system priors object, \texttt{S}, is empty, false otherwise. \end{itemize} \paragraph{Description}\label{description} \paragraph{Example}\label{example}
\documentclass[acmsmall, 9pt]{article}
\usepackage[a4paper, total={6in, 10in}]{geometry}
\input{../tex/packages}
\input{../tex/layout-tweaks}
\input{../tex/macros}
\input{../tex/listings}
\setlist[itemize]{leftmargin=4mm}
\setlist[enumerate]{leftmargin=4mm}
\addbibresource{../bibliography.bib}
\begin{document}
\pagestyle{empty}
\title{Higher-Order Polymorphic Typed Lambda Calculus: Approaches to Formalising Grammars with Rich Kinds}
\maketitle
\noindent System F$_\omega$ \cite{cambridge-lambda-calc, pierce2002types}, also known as higher-order polymorphic lambda calculus, extends System F with richer kinds $K$, namely the kind $K_1 \rightarrow K_2$ of type constructors, and adds type-level lambda-abstraction $\lambda \alpha^K. \, A$ and application $A\,B$. \begin{align*} \text{Kinds} \quad K &::= * \; | \; K_1 \rightarrow K_2\\ \text{Types} \quad A, B &::= \iota \; | \; A \rightarrow B \; | \; \forall \alpha^K . \, A\; | \; \alpha \; | \; \lambda \alpha^K. \, A \; | \; A \, B \end{align*} We are also free to extend the set of kinds $K$ with other arbitrary kind constants, which means that the set of types ranged over by $A, B$ includes non-value types of kind other than $*$. We must therefore consider various details when organizing the type calculus, such as which syntactic classes of type metavariables there are, what the kinding rules for our types are, which kinds of types can be quantified over, and which kinds of types can appear in type application. There are a number of options for how to formalize a grammar with rich kinds. Let's consider adding row types $R$ to our grammar, where a row is an unordered collection of labels $\ell$. We say that rows have kind $\mathsf{Row}$ and labels have kind $\mathsf{Label}$. From rows, we can then form variants (sums) $\tyAngle{R}$ and records (products) $\tyBrace{R}$, which are value types of kind $*$.
\subsubsection{Type Indiscriminative Method}
\label{sssec:tau-method}
The most general way is to use a single type metavariable $T$ which captures all types of different kinds $K$. We note that this relies on the kinding rules to delineate which types are well-formed. For example: \begin{align*} &\text{Kinds} \quad &K &::= * \; | \; K_1 \rightarrow K_2 \; | \; \mathsf{Row} \; | \; \mathsf{Label} \span\span \\ &\text{Types} \quad &T &::= c \; | \; \tyFun{T_1}{T_2} \; | \; \forall \alpha^K . \, T\; | \; \alpha \; | \; \lambda \alpha^K. \, T \;| \; T_1 \; T_2 \; \span\span\\ &&&\quad \; | \; l \; | \; l ; T \; | \; \cdot \; | \;\tyBrace{T} \; | \; \tyAngle{T} \\ \end{align*} Here we have type constants $c$ which capture value types such as $\texttt{Bool}$. Functions are written $\tyFun{T_1}{T_2}$. Universal quantification $\forall \alpha^K. \, T$ can quantify over types of any kind $K$ to produce some type $T$ of kind $K'$. Type variables $\alpha$ can be inhabited by types of any of the kinds that $T$ ranges over. Type abstraction $\lambda \alpha^K. \, T$ has a higher kind $K_1 \rightarrow K_2$. Type application $T_1 \, T_2$ can then allow types of kinds other than $*$ and $* \rightarrow *$ to be applied to each other. Lastly, we have labels $l$, rows extended with labels $l; T$, empty rows $\cdot$, records $\tyBrace{T}$, and variants $\tyAngle{T}$.
% but for this to be well-formed under the kinding rules, $T_1$, $T_2$ and $T_1 \rightarrow T_2$ must be of kind $*$.
This grammar permits functions, type application, type abstraction, and universal quantification to work over types of any kind.
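\par As an informal illustration of how a kinding judgement (such as the one given below) does the work that this permissive grammar cannot, the following sketch encodes the types above as tagged tuples together with a kind-checking function. The sketch is written in Python purely for illustration; the encoding, constructor names, and error handling are our own choices and are not part of any of the cited formalisations.
\begin{verbatim}
# Kinds are "*", "Row", "Label", or ("->", k1, k2); types are tagged tuples.
STAR, ROW, LABEL = "*", "Row", "Label"

def kind_of(ty, delta=()):
    """Return the kind of `ty` under the kind context `delta`, a tuple of
    (type-variable, kind) pairs; raise TypeError if `ty` is ill-kinded."""
    tag = ty[0]
    if tag == "const":                      # c : *
        return STAR
    if tag == "fun":                        # T1 -> T2 : *  (both parts of kind *)
        _, t1, t2 = ty
        if kind_of(t1, delta) == STAR and kind_of(t2, delta) == STAR:
            return STAR
        raise TypeError("function types take and return types of kind *")
    if tag == "forall":                     # forall a^K. T : *  when T : *
        _, a, k, body = ty
        if kind_of(body, delta + ((a, k),)) == STAR:
            return STAR
        raise TypeError("the body of a forall must have kind *")
    if tag == "var":                        # a : K  when (a : K) is in delta
        for name, k in delta:
            if name == ty[1]:
                return k
        raise TypeError("unbound type variable " + ty[1])
    if tag == "lam":                        # lambda a^K1. T : K1 -> K2
        _, a, k1, body = ty
        return ("->", k1, kind_of(body, delta + ((a, k1),)))
    if tag == "app":                        # T1 T2 : K2  when T1 : K1 -> K2, T2 : K1
        _, t1, t2 = ty
        k = kind_of(t1, delta)
        if isinstance(k, tuple) and k[0] == "->" and kind_of(t2, delta) == k[1]:
            return k[2]
        raise TypeError("ill-kinded type application")
    if tag == "label":                      # l : Label
        return LABEL
    if tag == "rowempty":                   # (empty row) : Row
        return ROW
    if tag == "rowext":                     # l ; T : Row
        _, lab, rest = ty
        if kind_of(lab, delta) == LABEL and kind_of(rest, delta) == ROW:
            return ROW
        raise TypeError("row extension needs a Label and a Row")
    if tag in ("record", "variant"):        # { R } and < R > : *
        if kind_of(ty[1], delta) == ROW:
            return STAR
        raise TypeError("records and variants are built from a Row")
    raise TypeError("unknown type form: " + repr(tag))

# The syntactically permitted but ill-formed type  l -> l  is rejected:
# kind_of(("fun", ("label", "x"), ("label", "x")))   raises TypeError
\end{verbatim}
For instance, the syntactically permitted type $l \rightarrow l$ discussed below is rejected by this function, since function types require both argument and result to have kind $*$.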
We note that this grammar on its own cannot dictate what types are well-formed or ill-formed; it is therefore important to have kinding rules to express what is allowed. For example, this type syntax says that a function $l \rightarrow l$ between two types of kind $\mathsf{Label}$ is possible; however, this isn't well-formed, as we cannot pass or return labels as values. \begin{figure}[H] \flushleft \shadebox{$\Delta \vdash T : K$} \begin{smathpar} \inferrule*[lab={\ruleName{constant}}] { } { \Delta \vdash c : * } \and \inferrule*[lab={\ruleName{function}}] { \Delta \vdash T_1 : * \\ \Delta \vdash T_2 : * } { \Delta \vdash \tyFun{T_1}{T_2} : * } \and \inferrule*[lab={\ruleName{forall}}] { \Delta \concat (\alpha : K) \vdash T : * } { \Delta \vdash \forall \alpha^K. \, T : * } \and \inferrule*[lab={\ruleName{type variable}}] { \alpha : K \in \Delta } { \Delta \vdash \alpha : K } \and \inferrule*[lab={\ruleName{type constructor}}] { \Delta \concat (\alpha : K_1) \vdash T : K_2 } { \Delta \vdash \lambda \alpha^{K_1} . \, T : \tyFun{K_1}{K_2} } \and \inferrule*[lab={\ruleName{type constructor application}}] { \Delta \vdash T_1 : \tyFun{K_1}{K_2} \\ \Delta \vdash T_2 : K_1 } { \Delta \vdash T_1 \, T_2 : K_2 } \and \inferrule*[lab={\ruleName{label}}] { } { \Delta \vdash l : \mathsf{Label} } \and \inferrule*[lab={\ruleName{row-extend}}] { \Delta \vdash l : \mathsf{Label} \\ \Delta \vdash T : \mathsf{Row} } { \Delta \vdash l; T : \mathsf{Row} } \and \inferrule*[lab={\ruleName{row-empty}}] { } { \Delta \vdash \cdot : \mathsf{Row} } \and \inferrule*[lab={\ruleName{record}}] { \Delta \vdash T : \mathsf{Row} } { \Delta \vdash \tyBrace{T} : * } \and \inferrule*[lab={\ruleName{variant}}] { \Delta \vdash T : \mathsf{Row} } { \Delta \vdash \tyAngle{T} : * } \end{smathpar} \caption{Kinding Rules} \end{figure}
\subsubsection{Type Categorization Method \cite{hillerstrom2016liberating}}
\label{sssec:type-categorization-method}
Another method is to distinguish between type metavariables which produce types of different kinds. For example: \begin{align*} &\text{Kinds} \quad &K &::= * \; | \; K_1 \rightarrow K_2 \; | \; \mathsf{Row} \; | \; \mathsf{Label}\\ &\text{Value Types} \quad &A, B &::= c \; | \; A \rightarrow B \; | \; \forall \alpha^K . \, A\; | \; \alpha \; | \; \lambda \alpha^K. \, A \; | \; A \, B \; | \; \{ R \} \; | \; \tyAngle{R}\\ &\text{Row Types} \quad &R &::= l; R \; | \; \cdot \; | \; \rho \\ &\text{Label Types} \quad &l &::= l_1 \; | \; l_2 \; | \; \ldots \end{align*} \textbf{Value types} $A,B$ are any types which produce a kind $*$, except for type variables $\alpha$. This includes type constants $c$ and functions $A \rightarrow B$. Universally quantified types $\forall \alpha^K . \; A$ are able to quantify over types of any kind $K$, but the type variable $\alpha^K$ must always be used to produce a type $A$ of kind $*$. We also consider type constructors $\lambda \alpha^K. \; A$ of kind $K_1 \rightarrow K_2$ as value types, which take as input a type of rich kind $K_1$ but must eventually produce a type of kind $*$ in $K_2$. Type application $A\;B$ applies a type constructor $A$ to a value type $B$. Lastly, record types $\{ R \}$ and variant types $\tyAngle{R}$ are, like functions $A \rightarrow B$, universally quantified types $\forall \alpha^K . \; A$, and type applications $A\;B$, all of kind $*$. Note that although we can abstract over types of kind $K$ in type constructors $\lambda \alpha^K.
\, A$, type constructor application $A\,B$ does not allow us to apply type constructors to types of kind other than $*$; this means types such as record constructors $\{ \, \_ \, \}$ cannot exist on their own, only records $\{ R \}$ which are already applied to a row type $R$ to yield a kind $*$. \lbreak \textbf{Non-value} types are types which produce kinds other than $*$. This includes row types $R$ and label types $l$. We note that row types $R$ also have their own row type variable $\rho$, which allows $\rho$ to be used wherever $R$ can occur, and hence row types can be defined polymorphically. The fact that universal quantification is only defined in value types $A, B$ means that row types must be used in the context of a value type in which $\rho$ is quantified over by $\forall \alpha^\mathsf{Row} . \; A$, unifying $\rho$ with $\alpha$. \lbreak This approach more clearly delineates types of different kinds, and restricts type application, type abstraction, and universal quantification to types which produce a kind $*$. This is generally desirable, as it enforces stronger well-formedness guarantees within the grammar itself, e.g. that values can only have types $A, B$ of kind $*$ at the term level, and that types which use type constructors to take a richly-kinded type as input must already be fully applied to have kind $*$. A disadvantage of this is that type constructor application $A\,B$ does not allow us to apply type constructors to types of kind other than $*$.
\subsubsection{Explicitly Kinded Type Indiscriminative Method \cite{leijen2005extensible}}
When we have a system with rich kinds, we can refine the notion of using a single metavariable $T$ by annotating it with a kind $K$, written $T^K$. This lets us capture types with kinds other than $*$ whilst being explicit about which types are well-formed. \lbreak For each kind $K$ we have a collection of types $T^K$; this includes type constants $c^K$, polymorphic types $\forall \alpha^K . \, T^{K'}$, type variables $\alpha^K$, and type application $T_1^{K_2 \rightarrow K} \; T_2^{K_2}$. The kind signatures for type constants $c^K$ are given explicitly in the grammar, where we use wildcards ``$\_$'' to represent arguments of type constants. \begin{align*} &\text{Kinds} \quad &K &::= * \; | \; K_1 \rightarrow K_2 \; | \; \mathsf{Row} \; | \; \mathsf{Label} \\ &\text{Types} \quad &T^K &::= c^K \; | \; \forall \alpha^{K_1} . \, T^{K_2}\; | \; \alpha^K \; | \; \lambda \alpha^{K_1}. \, T^{K_2} \; | \; T_1^{K_2 \rightarrow K} \; T_2^{K_2} \span\span\\ &\text{Type constants} \quad &c^K &::= \texttt{()}, \texttt{bool}, \texttt{int} &::& \; *\\ &&&| \quad \_ \rightarrow \_ &::& \; * \rightarrow * \rightarrow * \\ &&&| \quad l &::& \; \mathsf{Label} \\ &&&| \quad \_ ; \_ &::& \; \mathsf{Label} \rightarrow \mathsf{Row} \rightarrow \mathsf{Row}\\ &&&| \quad \cdot &::& \; \mathsf{Row}\\ &&&| \quad \tyBrace{\_} &::& \; \mathsf{Row} \rightarrow *\\ &&&| \quad \tyAngle{\_} &::& \; \mathsf{Row} \rightarrow *\\ \end{align*} \noindent We can then let a choice of metavariables range over types of different kinds, e.g. let $\rho \doteq T^{\mathsf{Row}}$, and similarly with type variables, e.g. let $\beta \doteq \alpha^{\mathsf{Row}}$.
\printbibliography
\end{document}
% Copyright 2019 by Till Tantau
%
% This file may be distributed and/or modified
%
% 1. under the LaTeX Project Public License and/or
% 2. under the GNU Free Documentation License.
%
% See the file doc/generic/pgf/licenses/LICENSE for more details.
\section{Three Dimensional Drawing Library}
\begin{tikzlibrary}{3d}
This package provides some styles and options for drawing three dimensional shapes.
\end{tikzlibrary}
\subsection{Coordinate Systems}
\begin{coordinatesystem}{xyz cylindrical}
The |xyz cylindrical| coordinate system allows you to specify a point in terms of cylindrical coordinates, sometimes also referred to as cylindrical polar coordinates or polar cylindrical coordinates. It is very similar to the |canvas polar| and |xy polar| coordinate systems with the difference that you provide an elevation over the $xy$-plane using the |z| key.
%
\begin{key}{/tikz/cs/angle=\meta{degrees} (initially 0)}
The angle of the coordinate interpreted in the ellipse whose axes are the $x$-vector and the $y$-vector.
\end{key}
%
\begin{key}{/tikz/cs/radius=\meta{number} (initially 0)}
A factor by which the $x$-vector and $y$-vector are multiplied prior to forming the ellipse.
\end{key}
%
\begin{key}{/tikz/cs/z=\meta{number} (initially 0)}
Factor by which the $z$-vector is multiplied.
\end{key}
%
\begin{codeexample}[preamble={\usetikzlibrary{3d}}]
\begin{tikzpicture}[->]
\draw (0,0,0) -- (xyz cylindrical cs:radius=1);
\draw (0,0,0) -- (xyz cylindrical cs:radius=1,angle=90);
\draw (0,0,0) -- (xyz cylindrical cs:z=1);
\end{tikzpicture}
\end{codeexample}
%
\end{coordinatesystem}
\begin{coordinatesystem}{xyz spherical}
The |xyz spherical| coordinate system allows you to specify a point in terms of spherical coordinates.
%
\begin{key}{/tikz/cs/radius=\meta{number} (initially 0)}
Factor by which the $x$-, $y$-, and $z$-vector are multiplied.
\end{key}
%
\begin{key}{/tikz/cs/latitude=\meta{degrees} (initially 0)}
Angle of the coordinate between the $y$- and $z$-vector, measured from the $y$-vector.
\end{key}
%
\begin{key}{/tikz/cs/longitude=\meta{degrees} (initially 0)}
Angle of the coordinate between the $x$- and $y$-vector, measured from the $y$-vector.
\end{key}
%
\begin{key}{/tikz/cs/angle=\meta{degrees} (initially 0)}
Same as |longitude|.
\end{key}
%
\begin{codeexample}[preamble={\usetikzlibrary{3d}}]
\begin{tikzpicture}[->]
\draw (0,0,0) -- (xyz spherical cs:radius=1);
\draw (0,0,0) -- (xyz spherical cs:radius=1,latitude=90);
\draw (0,0,0) -- (xyz spherical cs:radius=1,longitude=90);
\end{tikzpicture}
\end{codeexample}
%
\end{coordinatesystem}
\subsection{Coordinate Planes}
Sometimes drawing with full three dimensional coordinates is not necessary and it suffices to draw in two dimensions but in a different coordinate plane. The following options help you to switch to a different plane.
\subsubsection{Switching to an arbitrary plane}
\begin{key}{/tikz/plane origin=\meta{point} (initially {(0,0)})}
Origin of the plane.
\end{key}
\begin{key}{/tikz/plane x=\meta{point} (initially {(1,0)})}
Unit vector of the $x$-direction in the new plane.
\end{key}
\begin{key}{/tikz/plane y=\meta{point} (initially {(0,1)})}
Unit vector of the $y$-direction in the new plane.
\end{key}
\begin{key}{/tikz/canvas is plane}
Perform the transformation into the new canvas plane using the units above. Note that you have to set the units \emph{before} calling |canvas is plane|.
% \begin{codeexample}[preamble={\usetikzlibrary{3d}}] \begin{tikzpicture}[ ->, plane x={(0.707,-0.707)}, plane y={(0.707,0.707)}, canvas is plane, ] \draw (0,0) -- (1,0); \draw (0,0) -- (0,1); \end{tikzpicture} \end{codeexample} % \end{key} \subsubsection{Predefined planes} \begin{key}{/tikz/canvas is xy plane at z=\meta{dimension}} A plane with % \begin{itemize} \item |plane origin={(0,0,|\meta{dimension}|)}|, \item |plane x={(1,0,|\meta{dimension}|)}|, and \item |plane y={(0,1,|\meta{dimension}|)}|. \end{itemize} \end{key} \begin{key}{/tikz/canvas is yx plane at z=\meta{dimension}} A plane with % \begin{itemize} \item |plane origin={(0,0,|\meta{dimension}|)}|, \item |plane x={(0,1,|\meta{dimension}|)}|, and \item |plane y={(1,0,|\meta{dimension}|)}|. \end{itemize} \end{key} \begin{key}{/tikz/canvas is xz plane at y=\meta{dimension}} A plane with % \begin{itemize} \item |plane origin={(0,|\meta{dimension}|,0)}|, \item |plane x={(1,|\meta{dimension}|,0)}|, and \item |plane y={(0,|\meta{dimension}|,1)}|. \end{itemize} \end{key} \begin{key}{/tikz/canvas is zx plane at y=\meta{dimension}} A plane with % \begin{itemize} \item |plane origin={(0,|\meta{dimension}|,0)}|, \item |plane x={(0,|\meta{dimension}|,1)}|, and \item |plane y={(1,|\meta{dimension}|,0)}|. \end{itemize} \end{key} \begin{key}{/tikz/canvas is yz plane at x=\meta{dimension}} A plane with % \begin{itemize} \item |plane origin={(|\meta{dimension}|,0,0)}|, \item |plane x={(|\meta{dimension}|,1,0)}|, and \item |plane y={(|\meta{dimension}|,0,1)}|. \end{itemize} \end{key} \begin{key}{/tikz/canvas is zy plane at x=\meta{dimension}} A plane with % \begin{itemize} \item |plane origin={(|\meta{dimension}|,0,0)}|, \item |plane x={(|\meta{dimension}|,0,1)}|, and \item |plane y={(|\meta{dimension}|,1,0)}|. \end{itemize} \end{key} \subsection{Examples} \begin{codeexample}[preamble={\usetikzlibrary{3d}}] \begin{tikzpicture}[z={(10:10mm)},x={(-45:5mm)}] \def\wave{ \draw[fill,thick,fill opacity=.2] (0,0) sin (1,1) cos (2,0) sin (3,-1) cos (4,0) sin (5,1) cos (6,0) sin (7,-1) cos (8,0) sin (9,1) cos (10,0)sin (11,-1)cos (12,0); \foreach \shift in {0,4,8} { \begin{scope}[xshift=\shift cm,thin] \draw (.5,0) -- (0.5,0 |- 45:1cm); \draw (1,0) -- (1,1); \draw (1.5,0) -- (1.5,0 |- 45:1cm); \draw (2.5,0) -- (2.5,0 |- -45:1cm); \draw (3,0) -- (3,-1); \draw (3.5,0) -- (3.5,0 |- -45:1cm); \end{scope} } } \begin{scope}[canvas is zy plane at x=0,fill=blue] \wave \node at (6,-1.5) [transform shape] {magnetic field}; \end{scope} \begin{scope}[canvas is zx plane at y=0,fill=red] \draw[help lines] (0,-2) grid (12,2); \wave \node at (6,1.5) [rotate=180,xscale=-1,transform shape] {electric field}; \end{scope} \end{tikzpicture} \end{codeexample} \begin{codeexample}[preamble={\usetikzlibrary{3d}}] \begin{tikzpicture} \begin{scope}[canvas is zy plane at x=0] \draw (0,0) circle (1cm); \draw (-1,0) -- (1,0) (0,-1) -- (0,1); \end{scope} \begin{scope}[canvas is zx plane at y=0] \draw (0,0) circle (1cm); \draw (-1,0) -- (1,0) (0,-1) -- (0,1); \end{scope} \begin{scope}[canvas is xy plane at z=0] \draw (0,0) circle (1cm); \draw (-1,0) -- (1,0) (0,-1) -- (0,1); \end{scope} \end{tikzpicture} \end{codeexample} %%% Local Variables: %%% mode: latex %%% TeX-master: "pgfmanual-pdftex-version" %%% End:
\documentclass{beamer} \usepackage{pdfpages} \beamertemplatenavigationsymbolsempty{} %TODO use subfiles package? \begin{document} \setbeamercolor{background canvas}{bg=} \section{Exercises} \includepdf[pages=-]{00_ex_intro/00_ex_intro.pdf} \section{Motivation}\includepdf[pages=-]{00_motivation/00_motivation} \section{System Theory}\includepdf[pages=-]{01_system-theory/01_system-theory} \section{Image Processing}\includepdf[pages=-]{02_image_processing/02_image_processing.pdf} \section{Endoscopy}\includepdf[pages=-]{03_endoscopy/03_endoscopy} \section{Microscopy}\includepdf[pages=-]{04_microscopy/04_microscopy} \section{MR I}\includepdf[pages=-]{05_mr/05_mr} \section{MR II}\includepdf[pages=-]{05_mr/05_mr2} \section{X-ray}\includepdf[pages=-]{06_x-ray/06_x-ray} \section{CT}\includepdf[pages=-]{07_ct/07_ct} \section{Spectral CT}\includepdf[pages=-]{07_ct/07_spectral_ct} \section{Phase Contrast X-ray}\includepdf[pages=-]{08_phase-contrast_x-ray/08_phase-contrast_x-ray.pdf} \section{Nuclear Medicine}\includepdf[pages=-]{09_emission_tomography/09_emission_tomography.pdf} \section{Ultra Sound}\includepdf[pages=-]{10_ultrasound/10_ultrasound.pdf} \section{OCT}\includepdf[pages=-]{11_oct/11_oct.pdf} \end{document}
\section{The fundamental theorem of algebra}
\begin{outcome} \begin{enumerate} \item Find the complex roots of a quadratic polynomial. \item In special cases, find the complex roots of a polynomial of degree 3 or more. \item Factor a polynomial into linear factors. \end{enumerate} \end{outcome}
The complex numbers were invented so that equations such as $z^2+1=0$ would have solutions. In fact, this equation has two complex solutions, namely $z=i$ and $z=-i$. However, something much more general (and surprising) is true: {\em every} non-constant polynomial equation has a solution in the complex numbers. To understand this statement, recall that a \textbf{polynomial}%
\index{polynomial}
is an expression of the form \begin{equation*} p(z) = a_nz^n + a_{n-1}z^{n-1} + \ldots + a_1z + a_0. \end{equation*} The constants $a_0,\ldots,a_n$ are called the \textbf{coefficients}%
\index{coefficient!of a polynomial}%
\index{polynomial!coefficient}
of the polynomial. If $a_n$ is the highest-order non-zero coefficient, we say that the polynomial has \textbf{degree}%
\index{degree!of a polynomial}%
\index{polynomial!degree}
$n$. A polynomial of degree $0$ is of the form $p(z) = a_0$, and is also called a \textbf{constant polynomial}%
\index{polynomial!constant}%
\index{constant polynomial}.
Recall that a \textbf{root}%
\index{root!of a polynomial}%
\index{polynomial!root}
of a polynomial is a number $z$ such that $p(z)=0$. The fundamental theorem of algebra is the following: \begin{theorem}{Fundamental theorem of algebra}{fundamental-algebra} Every non-constant polynomial $p(z)$ with real or complex coefficients has a complex root. \end{theorem} The proof of this theorem is beyond the scope of this book. Note that the theorem does not say that the roots are always easy to find. To find the roots of a polynomial of degree 2, we can use the quadratic formula. However, if the degree is greater than 2, we may sometimes have to use fancier methods, such as Newton's method from calculus, or even a computer algebra system, to locate the roots. We give some examples.
\begin{example}{Roots of a quadratic polynomial}{complex-root} Find the roots of the polynomial $p(z) = z^2 - 2z + 2$. \end{example} \begin{solution} The quadratic formula gives \begin{equation*} z = \frac{2 \pm \sqrt{-4}}{2}. \end{equation*} Of course, in the real numbers, the square root of $-4$ does not exist, so $p(z)$ has no roots in the real numbers. However, in the complex numbers, the square root of $-4$ exists and is equal to $\pm2i$. Thus, the roots of $p(z)$ are: \begin{equation*} z = \frac{2 \pm 2i}{2} = 1\pm i. \end{equation*} Indeed, we can double-check that $1+i$ and $1-i$ are in fact roots: \begin{equation*} \begin{array}{ll} p(1+i) = (1+i)^2 - 2(1+i) + 2 = (1 + 2i + (-1)) - 2 - 2i + 2 = 0, \\ p(1-i) = (1-i)^2 - 2(1-i) + 2 = (1 - 2i + (-1)) - 2 + 2i + 2 = 0. \\ \end{array} \end{equation*} \vspace{-2ex} \end{solution}
\begin{example}{Roots of a cubic polynomial}{complex-root2} Find the roots of the polynomial $p(z) = z^3 - 4z^2 + 9z - 10$. \end{example} \begin{solution} By the intermediate value theorem of calculus, we know that a cubic polynomial with real coefficients always has at least one real root. This is because $p(z)$ goes to $-\infty$ when $z\to-\infty$ and to $\infty$ when $z\to\infty$. By trial and error, we find that $z=2$ is a root of this polynomial. We can therefore factor out $(z-2)$ from this polynomial: \begin{equation*} p(z) = z^3 - 4z^2 + 9z - 10 = (z-2)(z^2 - 2z + 5).
\end{equation*} Now we can use the quadratic formula to find the roots of $z^2 - 2z + 5$. We find \begin{equation*} z = \frac{2\pm\sqrt{-16}}{2} = \frac{2\pm 4i}{2} = 1\pm 2i. \end{equation*} Thus, the three complex roots of $p(z)$ are $z=2$, $z=1+2i$, and $z=1-2i$. \end{solution} The following proposition is an important and useful consequence of the fundamental theorem of algebra: \begin{proposition}{Factoring a polynomial}{complex-factoring} Let $p(z)$ be a polynomial of degree $n$ with real or complex coefficients. Then $p(z)$ can be factored into $n$ linear factors over the complex numbers, i.e., $p(z)$ can be written in the form \begin{equation*} p(z) = a(z-b_1)(z-b_2)\cdots(z-b_n), \end{equation*} where $b_1,\ldots,b_n$ are (not necessarily distinct) roots of $p(z)$. \end{proposition} \begin{proof} If $n=0$, then $p(z)=a$ and there is nothing to show. Otherwise, by the fundamental theorem of algebra, $p(z)$ has at least one complex root, say $b_1$. From calculus, we know that we can factor out $(z-b_1)$ from $p(z)$, i.e., we can find a polynomial $q(z)$ of degree $n-1$ such that \begin{equation*} p(z) = (z-b_1) q(z), \end{equation*} We can repeatedly apply the same procedure to $q(z)$ until $p(z)$ has been factored into linear factors. \end{proof} \begin{example}{Factoring a polynomial}{complex-factoring} Factor $p(z) = z^3 - 4z^2 + 9z - 10$ into linear factors. \end{example} \begin{solution} From Example~\ref{exa:complex-root2}, we know that $p(z)$ has three distinct roots $b_1=2$, $b_2=1+2i$, and $b_3=1-2i$. We can therefore write \begin{equation*} p(z) = a(z-b_1)(z-b_2)(z-b_3). \end{equation*} Since the leading term is $z^3$, we find that $a=1$. Therefore \begin{equation*} p(z) = (z-2)\,(z-1-2i)\,(z-1+2i). \end{equation*} \vspace{-2ex} \end{solution}
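For readers who want a quick numerical cross-check of a factorization like the one above, the roots of a polynomial can also be computed from its coefficients with standard software. The following snippet is only an illustration (it uses Python with the NumPy library) and is not part of the algebraic development.
\begin{verbatim}
import numpy as np

# Coefficients of p(z) = z^3 - 4z^2 + 9z - 10, highest degree first.
p = [1, -4, 9, -10]

roots = np.roots(p)
print(np.sort_complex(roots))   # approximately [1.-2.j, 1.+2.j, 2.+0.j]

# Multiplying the linear factors back out recovers the coefficients
# (up to floating-point error):
print(np.poly(roots))           # approximately [1, -4, 9, -10]
\end{verbatim}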
\documentclass{article} \usepackage{afterpage} \usepackage{float} \usepackage{longtable} \usepackage{graphicx} \usepackage{pdflscape} \usepackage[numbers,sort&compress]{natbib} \usepackage{psfrag} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{graphicx} \usepackage{nicefrac} \usepackage{graphicx} \usepackage{caption} % \usepackage{subcaption} \usepackage{subfigure} % \usepackage{algorithm} % \usepackage{paralist} % % \usepackage[geometry]{ifsym} \usepackage{rotating} % \newcommand{\uu}[1]{\boldsymbol #1} \usepackage{listings} \usepackage{xcolor} \lstset{language=C++, keywordstyle=\color{blue}, stringstyle=\color{red}, commentstyle=\color{green}, morecomment=[l][\color{magenta}]{\#} } \begin{document} \section{MHD - smooth Neumann conditions} \end{document}
\documentclass[twocolumn]{aastex631} \usepackage{xspace} \usepackage{xcolor, fontawesome} \definecolor{twitterblue}{RGB}{64,153,255} \newcommand{\twitter}[1]{\href{https://twitter.com/#1}{\textcolor{twitterblue}{\faTwitter}\,\tt \textcolor{twitterblue}{@#1}}} \newcommand{\github}[1]{\href{https://github.com/#1}{\textcolor{black}{\faGithub}\,\tt \textcolor{black}{#1}}} \newcommand{\githubicon}{{\color{black}\faGithub}} \newcommand{\tess}{\textit{TESS}} \newcommand{\sname}{V1298~Tau\xspace} \newcommand{\allplanets}{V1298~Tau~bcde\xspace} \newcommand{\planetb}{V1298~Tau~b\xspace} \newcommand{\planetc}{V1298~Tau~c\xspace} \newcommand{\planetd}{V1298~Tau~d\xspace} \newcommand{\planete}{V1298~Tau~e\xspace} \newcommand{\planetknown}{V1298~Tau~bcd\xspace} \newcommand{\rearth}{$R_\oplus$\xspace} \newcommand{\exoplanet}{\texttt{exoplanet}\xspace} \submitjournal{ApJL} \shorttitle{V1298 Tau with \tess} \shortauthors{Feinstein et al.} \begin{document} \title{V1298~Tau with TESS: Updated Ephemerides, Radii, and Period Constraints from a Second Transit of V1298~Tau~e} \author[0000-0002-9464-8101]{Adina~D.~Feinstein} \altaffiliation{NSF Graduate Research Fellow} \affiliation{Department of Astronomy and Astrophysics, University of Chicago, Chicago, IL 60637, USA} \author[0000-0001-6534-6246]{Trevor J.\ David} \affiliation{Center for Computational Astrophysics, Flatiron Institute, New York, NY 10010, USA} \affiliation{Department of Astrophysics, American Museum of Natural History, New York, NY 10024, USA} \author[0000-0001-7516-8308]{Benjamin~T.~Montet} \affiliation{School of Physics, University of New South Wales, Sydney, NSW 2052, Australia} \affiliation{UNSW Data Science Hub, University of New South Wales, Sydney, NSW 2052, Australia} \author[0000-0002-9328-5652]{Daniel Foreman-Mackey} \affiliation{Center for Computational Astrophysics, Flatiron Institute, New York, NY 10010, USA} \author[0000-0002-4881-3620]{John~H.~Livingston} \affiliation{Department of Astronomy, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan} \author[0000-0003-3654-1602]{Andrew~W.~Mann} \affiliation{1Department of Physics and Astronomy, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA} \correspondingauthor{Adina~D.~Feinstein;\\ \twitter{afeinstein20}; \github{afeinstein20};} \email{[email protected]} \begin{abstract} \sname is a young (20--30~Myr) solar-mass K star hosting four transiting exoplanets with sizes between $0.5 - 0.9 R_J$. Given the system's youth, it provides a unique opportunity to understand the evolution of planetary radii at different separations in the same stellar environment. \sname was originally observed 6 years ago during \textit{K2} Campaign 4. Now, \sname has been re-observed during the extended mission of NASA's Transiting Exoplanet Survey Satellite (\tess). Here, we present new photometric observations of \sname from the 10-minute \tess\ Full-Frame Images. We use the \tess\ data to update the ephemerides for \allplanets as well as compare newly observed radii to those measured from the \textit{K2} light curve, finding shallower transits for \planetknown in the redder \tess\ bandpass at the $1-2\sigma$ level. We suspect the difference in radii is due to starspot-crossing events or contamination from nearby faint stars on the same pixels as \sname. Additionally, we catch a second transit of \planete and present a new method for deriving the marginalized posterior probability of a planet's period from two transits observed years apart. 
We find the highest probability period for \planete to be in a near 2:1 mean motion resonance with \planetb which, if confirmed, could make \allplanets a 4 planet resonant chain. \sname is the target of several ongoing and future observations. These updated ephemerides will be crucial for scheduling future transit observations and interpreting future Doppler tomographic or transmission spectroscopy signals. \end{abstract} %%%%%%%%%%%%%%%%%%%% \keywords{Exoplanets (498) --- Pre-main sequence (1289) --- Starspots (1572) --- Stellar activity (1580)} %%%%%%%%%%%%%%%%%%%% \section{Introduction} \label{sec:intro} Planetary radii are expected to evolve over time, due to a variety of endogenous and exogenous physical processes, such as gravitational contraction, atmospheric heating and mass-loss, and core-envelope interactions \citep[e.g.][]{OwenWu2013, Lopez2013, Jin2014, ChenRogers2016, Ginzburg2018}. The most dramatic changes are believed to occur at early stages, when planets are still contracting and radiating away the energy from their formation, and when host stars are heating planetary atmospheres with high levels of X-ray and ultraviolet radiation. Since the size evolution of any individual planet is believed to be slow relative to typical observational baselines, the best way to make inferences about the size evolution of exoplanets is by measuring the sizes of large numbers of planets across a range of ages. NASA's Transiting Exoplanet Survey Satellite \citep[\tess;][]{Ricker2015} has made significant inroads toward this objective. \tess's observations of $\sim 90 \%$ of the sky have allowed for exoplanet transit searches around stars ranging from the pre-main sequence to the giant branch. It is through targeted surveys of young stars such as the THYME \citep[e.g.][]{Newton2019}, PATHOS \citep[e.g.][]{Nardiello2020}, and CDIPS \citep[e.g.][]{Bouma2020} surveys, along with case studies of individual systems \citep[e.g.][]{benatti19, Plavchan2020, Hedges2021, Zhou2021} that the timeline for planetary radii evolution can be pieced together. The \sname planetary system is one particularly valuable benchmark for understanding the size evolution of exoplanets. \sname is a pre-main sequence, approximately solar-mass star that was observed in 2015 by NASA's \textit{K2} mission \citep{Howell2014}. Analysis of the \textit{K2} data revealed the presence of four transiting planets, all with sizes between that of Neptune and Jupiter \citep{David2019a, David2019b}. There are no other known examples of exoplanetary systems with so many planets larger than Neptune interior to 0.5~au, despite the high completeness of the \textit{Kepler} survey to large ($>5$~\rearth), close-in planets. This observation raises the possibility of a causal connection between the extreme youth of \sname and the uncommonly large sizes of its planets. The youth of \sname was initially established on the basis of its strong X-ray emission \citep{Wichmann1996}, high photospheric lithium abundance \citep{Wichmann2000}, and proper motion measurements \citep{frink1997}. An additional recent blind search for co-moving stars using Gaia DR1 astrometry data found \sname was co-moving with 8 other stars \citep[Group 29 in][]{Oh2017}. \cite{Luhman2018} conducted a kinematic study of the Taurus star-forming region using Gaia DR2 and found new members of this group. With this new sample, they derived an age of $\sim$~40~Myr. 
However, more recent analyses based on Gaia EDR3 astrometry suggests \sname may belong to either the D2 or D3 subgroups of Taurus, both of which have estimated ages $\lesssim$10~Myr \citep{gaidos21, Krolikowski2021}. Other studies focused specifically on the \sname system have estimated its age to be 23$\pm$4~Myr from comparison with empirical and theoretical isochrones \citep{David2019b}, or 28$\pm$4~Myr from isochrone fitting to the \citet{Luhman2018} Group 29 membership list given Gaia EDR3 data \citep{johnson21}. While the precise age of \sname remains uncertain, most estimates fall in the 10--40~Myr range and we adopt $t \approx$~20--30~Myr. Given the system's youth and potential to reveal information about the initial conditions of close-in planetary systems \citep[e.g.][]{Owen2020,Poppenhaeger2021}, \sname has been the target for extensive follow-up observations. These include efforts to constrain planet masses with radial velocities \citep{Beichman2019,suarez21}, measure the spin-orbit alignments of planet c \citep{Feinstein21} and planet b \citep{johnson21, gaidos21}, measure or constrain atmospheric mass-loss rates for the innermost planets \citep{Schlawin21, Vissapragada21}, and an approved program to study the planetary atmospheres using the James Webb Space Telescope \citep[JWST;][]{Desert2021}. Here we report on newly acquired \tess\ observations of \sname which help to refine the orbital ephemerides of the transiting planets and enable comparison of the planet sizes inferred from two different telescopes with different bandpasses (\tess\ and \textit{Kepler}). The goal of this letter is to provide a quick analysis of the new \tess\ data to help improve the transit timings for follow-up observations being performed by the community. We describe the observations and light curve extraction in Section~\ref{sec:observations}. In Section~\ref{sec:analysis}, we present our light curve modeling and method for computing the marginalized posterior probability of a transiting planet's period from two transits observed with a large time gap. In Section~\ref{sec:radii}, we discuss the differences in measured transit parameters between \textit{K2} and \tess\ data. We conclude in Section~\ref{sec:conclusions}. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{figure}[t!] \begin{center} \includegraphics[width=0.46\textwidth,trim={0.25cm 0 0 0}]{static/TESSaperture.pdf} \caption{The \tess\ \texttt{tica} FFI target pixel file (TPF) overlaid with a sky image of \sname taken with the Digitized Sky Survey (DSS) r-band. \sname is highlighted by the white circle; nearby sources with \tess\ magnitudes $< 14$ are marked with x's. The two stars with white x's (TICs 15756226 and 15756240) were simultaneously fit during our PSF-modeling. The one star with black x (TIC 15756236) is a bright nearby source that was not included in our PSF-modeling. While aperture photometry would be feasible for this system (yellow square), we found fitting three point-spread functions to the brightest stars extracted the cleanest light curve for \sname. \href{https://github.com/afeinstein20/v1298tau\_tess/blob/main/src/figures/tpf.py}{\githubicon}} \label{fig:tpf} \end{center} \end{figure} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{TESS Observations} \label{sec:observations} During its Extended Mission Cycle 4, \tess\ is re-observing many of the previous \textit{K2} fields. 
\sname (TIC 15756231) was observed by \tess\ in Sectors 43 (UT 16 Sep 2021 to UT 12 Oct 2021) and 44 (UT 12 Oct 2021 to UT 06 Nov 2021). For Sectors 43 and 44, we used the 2-minute light curve created by the Science Processing Operations Center pipeline \citep[SPOC;][]{jenkinsSPOC2016} and binned the data down to 10-minute cadence. We also used the \texttt{tica} \citep{fausnaugh20} software to download calibrated Full-Frame Images (FFIs), as these FFIs are available quickly after the data are downlinked. We compared these new light curves to our original FFI light curves.

We created our initial light curves from the \texttt{tica}-processed FFIs by modeling the point-spread function (PSF) of \sname and the two nearby bright sources (white x's in Figure~\ref{fig:tpf}), following the PSF modeling routine in \cite{feinstein19}.\footnote{Our PSF-modeled light curves are available \href{https://github.com/afeinstein20/v1298tau_tess/tree/main/lightcurves}{here}.} In summary, we calculated and maximized the likelihood value of seven parameters for each Gaussian: the $x$ and $y$ widths, the 2D position, the amplitude, a rotational term, and a background term. The Gaussian fits are allowed to vary at each time step. Aperture photometry (an example square aperture is shown in Figure~\ref{fig:tpf}) provided a light curve with more systematics and scatter. We found that modeling the three brightest stars simultaneously, including \sname, each with a 2D Gaussian, produced the least contaminated light curve. Our extracted light curve is shown in the top row of Figure~\ref{fig:transits}.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure*}[hbtp]
\begin{center}
\includegraphics[width=0.9\textwidth,trim={0.25cm 0 0 0}]{static/transits.pdf}
\caption{\sname extracted light curve from the SPOC-processed data for Sectors 43 and 44, with transits of \allplanets highlighted by color. Each subplot contains the raw, normalized \tess\ flux (top) and the flux with the GP model removed (de-trended flux; bottom). Top row: extracted light curve overplotted with our best-fit GP model for stellar variability (black). Bottom three rows: zoomed-in regions around the transits present in the \tess\ data. The GP best-fit model is over-plotted on the raw, normalized flux. The best-fit transit models are over-plotted on the de-trended flux. For overlapping transits (sub-panel ``Planets b, c, \& e''), the sum of the transits is plotted in black. \href{https://github.com/afeinstein20/v1298tau\_tess/blob/main/src/figures/transits.py}{\githubicon}}
\label{fig:transits}
\end{center}
\end{figure*}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Analysis}
\label{sec:analysis}

% planet fits
We simultaneously modeled the transits of \allplanets and the stellar variability using the open-source packages \exoplanet \citep{exoplanet2019, exoplanet2021} and \texttt{PyMC3} \citep{Salvatier16}. Transit timings were originally identified using updated ephemerides from \textit{Spitzer} (Livingston et al. in prep), and we account for potential transit timing variations (TTVs) in our model using \texttt{exoplanet}'s \texttt{TTVOrbit} function. This functionality allows us to fit the individual time of each transit while all other orbital and planetary parameters remain the same. \texttt{exoplanet.TTVOrbit} worked well when fitting \planetknown, due to there being multiple transits.
However, with \planete being a single-transit event, we found the GP model with TTVs optimized our hyperparameters to accommodate for transits where there were none; as such, we used a model without accounting for TTVs to fit the parameters for \planete. All other transit parameters (presented in Table~\ref{tab:table}) were initialized using values from \cite{David2019a}. We assumed a quadratic limb darkening law, following the reparameterization described by \cite{kipping13}; this method allows for an efficient and uniform sampling of limb-darkened transit models. % light curve fits Since the \texttt{tica} FFIs do not provide an error estimate, we fit for flux errors within our Gaussian Process (GP) model. We define the flux error as \begin{equation} \sigma_y = e^{ln(\sigma_l)} + y^2 e^{2 ln(\sigma_j)} \end{equation} where $y$ is the normalized flux array about zero, and $\sigma_l$ and $\sigma_j$ are used to define the light curve noise and in-transit jitter, which is designed to capture the added noise produced by starspot crossing events. $\sigma_l$ and $\sigma_j$ are also used as the first and second terms in our rotation model, which we defined as a stochastically-driven, damped harmonic oscillator, defined by the \texttt{SHOTerm} in \texttt{celerite2} \citep{dfm17}. We modeled the background within our GP. We defined a quadratic trend with respect to time for varying the background flux, where each polynomial coefficient was drawn from a normal distribution. Then, we generated a Vandermonde matrix ($A$) of time. This is a way of introducing a polynomial least-squares regression with respect to time. The final background flux was calculated by taking $bkg = A \cdot trend$. We performed an MCMC sampling fit to each parameter.\footnote{A Jupyter notebook detailing our model can be found \href{https://github.com/afeinstein20/v1298tau\_tess/blob/main/notebooks/TESS\_V1298Tau.ipynb}{here}.} We ran 3 chains with 500 tuning steps and 5000 draws. We discarded the tuning samples from the posterior chains before calculating our best-fit parameters. Our results are presented in Table~\ref{tab:table}, along with our selected priors for each parameter we fit. These results are consistent with our original \texttt{tica}-processed point-spread function modeled light curves. We verified our chains converged via visual inspection and following the diagnostic provided by \cite{Geweke92}. We present our final GP model for stellar variability, planet transits, and best-fit transit models in Figure~\ref{fig:transits}. There is a $\sim 1\%$ flare at \tess\ BKJD $\approx$ 4659.18 that we do not fit. \subsection{Constraining \planete's Period} \sname (EPIC~210818897) was observed during Campaign 4 of the \textit{K2} mission. There was a single transit of \planete in the original \textit{K2} data, which occurred roughly in the middle of the campaign. Since no other transits were detected, this provides a lower period limit of 36~days. Additionally, there was only 1 transit of \planete between the two \tess\ sectors, which provides a new lower limit of 42.7~days. Using the original transit timing from \textit{K2} and this new transit timing from \tess, we developed a new method for constraining the period of \planete. For this analysis, we used the \texttt{EVEREST 2.0} \citep{luger18} version of the \textit{K2} light curve. 
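As a concrete illustration of the bookkeeping used in the method described below, the sketch that follows enumerates the candidate periods implied by two widely separated transit midpoints. The midpoint values are placeholders rather than our measured times; only the 42.7-day lower limit and the 38--56~day search window quoted in the text are taken from the analysis, and the real sampling is carried out inside the \texttt{PyMC3} model rather than with this standalone script.
\begin{verbatim}
import numpy as np

# Placeholder transit midpoints (days, on a common time system); the real
# values come from the K2 and TESS light curve fits described in the text.
t_mid_k2, t_mid_tess = 2234.05, 4648.80

dt = t_mid_tess - t_mid_k2   # elapsed time between the two observed transits
p_min = 42.7                 # lower period limit from the TESS sector coverage

# Each integer harmonic q maps to one candidate period P = dt / q.
q_values = np.arange(1, int(dt // p_min) + 1)
periods = dt / q_values

# Keep only harmonics consistent with the lower limit and the 38-56 day
# search window used in the text.
keep = (periods >= p_min) & (periods <= 56.0)
for q, p in zip(q_values[keep], periods[keep]):
    print(f"q = {q:3d} -> P = {p:7.3f} days")
\end{verbatim}
Each surviving harmonic is then assigned a posterior probability by the sampler, rather than being ruled in or out by this simple enumeration.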
Determining the period of a planet from two transits with a significant time gap between surveys has previously been constrained by fitting for orbital periods using MCMC sampling, phase-folding all available transits on the derived transit times and periods, and computing a reduced-$\chi^2$ fit to a flat line \citep{becker19}. Orbital periods providing a match to a flat line with a likelihood exceeding some threshold are then ruled out. In our new method, we fit transit models of many discrete periods at each step of the MCMC sampler, rather than post-processing from our posterior. First, we de-trended a localized 1-day region around the transit midpoint of \planete in the \textit{K2} and \tess\ light curves, assuming a constant transit depth and allowing the other transit parameters, $\theta$ to vary. Then, we fit a discrete period model, allowing all other transit parameters to vary. We set the GP model to sample over discrete periods ranging from $38 - 56$~days. We fit for $\theta$ assuming a constant transit depth between the \textit{K2} and \tess\ observations. We assumed there is no correlation between the other transit parameters and the period we are fitting for. For each step in our MCMC fit, we compute all possible periods \begin{equation}\label{eq:period} P = \frac{1}{q} \left(T_{mid,TESS} - T_{mid, K2}\right) \end{equation} where $T_{mid}$ are the transit midpoints from \textit{K2} and \tess\ and $q$ is an integer representing a specific harmonic. We assume a uniform prior, i.e. we have no prior preference for a specific harmonic. At each step of the sampling process, we compute a new light curve with different orbital periods, given by Equation~\ref{eq:period}. The log likelihood of the new light curves models are calculated as \begin{equation} \textrm{log} \mathcal{L}_q = \left[ \textrm{log}\, p \left( X | \theta^k, q^k = n \right) \right]_n \end{equation} where $X$ is the \tess\ light curve and $n$ is the period being tested. We additionally calculate the sum of all log likelihoods for each period \begin{equation} \textrm{log} \mathcal{L} = \textrm{log}\, \Sigma_q\, p(q)\, p(X|\theta^k, q) \end{equation} The summation of all log likelihoods is used to compute the posterior likelihood for each sampled value of $q$. This analysis assumes a circular orbit for planet e and uses stellar density constraints via priors on the stellar mass and radius.\footnote{A Jupyter notebook detailing our model for constraining the period for \planete can be found \href{https://github.com/afeinstein20/v1298tau\_tess/blob/main/notebooks/V1298Tau\_e.ipynb}{here}.} We ran 3 chains with 500 tuning steps and 5000 draws. We discarded the tuning steps before our analysis. Our results are presented in Figure~\ref{fig:period_e}, where we plot the median period for each tested harmonic against the posterior probability of each harmonic. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{figure}[h] \begin{center} \includegraphics[width=0.44\textwidth,trim={0.25cm 0 0cm 0}]{static/periode.pdf} \caption{Our calculated posterior probability to constrain the period of \planete using transit timings from \textit{K2} and \tess. We tested discrete periods from $q=38-56$\,days and find the most likely period to be 44.17~days. The 2:1 resonance (48.28~days) with \planetb is plotted as the dashed vertical line. 
\href{https://github.com/afeinstein20/v1298tau\_tess/blob/main/src/figures/period\_e.py}{\githubicon}} \label{fig:period_e} \end{center} \end{figure} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% We find the most likely period of \planete to be 44.17~days. We provide all tested periods and posterior probabilities in Table~\ref{tab:e}. Our period estimate is at a 4-$\sigma$ disagreement with the period measured in a potential radial velocity signal for \planete presented in \cite{suarez21}. This derived period estimate suggests that \planete is in a near 2:1 mean motion resonance with \planetb. If the period of \planete is confirmed to be within the presented range, this could indicate that \allplanets are in a nearly 4-planet resonant chain. Independent ground-based monitoring of the system may be able to observe another transit of the outermost known planet in this system. Using the Transit Service Query Form on the NASA Exoplanet Archive \citep{Akeson2013}, we provide several potential transit midpoint events for all fitted periods in Table~\ref{tab:e}. \section{Differences in Measured Radii} \label{sec:radii} We compare the differences in transit $R_p/R_\star$ between the \textit{K2} and \tess\ data in Figure~\ref{fig:compare}. We masked regions in the light curve where transits overlapped. The residuals of the \tess\ light curve with our model (color) are plotted as well. For \planetknown, the transit radii are smaller in the \tess\ data, while only the measured radius for \planete is larger (Figure~\ref{fig:compare}, bottom panel). The error bars from our MCMC fit on the radii of the planets are smaller than that provided by \cite{David2019b}. We initialized our MCMC to fit the transit depths with a Gaussian distribution around the fitted values from \cite{David2019b} with a standard deviation of 0.1 (Table~\ref{tab:table}). The smaller errors could be due to the higher cadence of the \tess\ data (10-minutes vs. 30-minutes) or due to larger spot-crossing events in \textit{K2}. Larger spot-crossings would result in a greater uncertainty of the transit depth, and this is potentially evident in comparing the transit depth and shape for \planete (Figure~\ref{fig:compare}). \subsection{Radii of \planetknown} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{figure*}[!htb] \begin{center} \includegraphics[width=\textwidth,trim={0.25cm 0 0 0}]{static/compare_together.pdf} \caption{Left: Phase-folded \tess\ data (gray) with the new best-fit model (color) compared to the original \textit{K2} data (black). The residuals between the \tess\ data and each fit are shown underneath. Top right: The percent difference in measured $R_p/R_\star$ from \textit{K2} vs. \tess\ $R_p/R_\star$. A dashed line is shown at 0\% to help visually differentiate between measured increases and decreases in planetary radii. The transit depths for \planetknown are shallower in the new \tess\ data, while the transit depth for \planete is deeper. Bottom right: The change in $R_p/R_\star$ as a function of pixel for \planetb (left) and \planete (right) overlaid with a sky image of \sname taken with the Digitized Sky Survey (DSS) r-band. We speculate the variation in transit depths could be due to contamination from nearby bright stars or starspot crossing events. 
\href{https://github.com/afeinstein20/v1298tau_tess/blob/main/src/figures/dilution_check.py}{\githubicon}}
\label{fig:compare}
\end{center}
\end{figure*}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

A shallower transit depth at redder wavelengths is supported by the dusty outflow model presented in \cite{wang19}, while transits in the optical probe lower atmospheric pressures, resulting in larger transit depths \citep{gao20}. Therefore, the difference in transit depths could be due to the difference in bandpass wavelength coverage between \textit{Kepler} \citep[400-900~nm;][]{Howell2014} and \tess\ \citep[600-1000~nm;][]{Ricker2015}. The ability of a planet to retain hazes/transition hazes is negatively correlated with its equilibrium temperature, $T_{eq}$, and internal temperature, $T_{int}$, and positively correlated with its mass, $M_{core}$. \allplanets have calculated $T_{eq} < 1000$\,K, assuming an albedo~=~0 \citep{David2019a}. Young planets are believed to have high $T_{int}$ due to ongoing gravitational contraction \citep{gu04}. The combination of these two parameters makes these planets more likely to host extended high-altitude hazes in their atmospheres, while outflow winds lead to the formation of transition hazes \citep{gao20}.

However, it is more likely we are seeing either contamination/dilution from another nearby star or the presence of starspots. As highlighted by a black x in Figure~\ref{fig:tpf}, there is a bright star (\tess\ magnitude $<$ 14) just next to \sname. Since we did not include this source in our light curve PSF-modeling, it is possible there is some light from this source in our data, making the transits of \planetknown appear $\sim 10$\% shallower in our \tess\ data than in the original \textit{K2} measurements. The \tess\ Input Catalog \citep{stassun18} lists a contamination value of 0.315 for \sname, which could be sufficient to produce the decrease in transit depths presented here. We check for signs of dilution by creating light curves for individual pixels around \sname and measuring the transit depths of \planetb and \planete (Figure~\ref{fig:compare}). We localized a 1-day window around each transit and computed a $\chi^2$ fit using a transit model computed with \texttt{batman} \citep{Kreidberg15} and an underlying 2\textsuperscript{nd} order polynomial. We find that the measured transit depths decrease on pixels away from those centered on \sname. While fitting the \tess\ PSF with a 2D Gaussian function is a reasonable approximation, it is not a perfect model. It is possible that this light curve is diluted by nearby stars, including TIC~15756226, which is the closest (separation $= 49.15\arcsec$) star, with $T_\textrm{mag} = 13.09$.

A change in transit depth could additionally be due to starspots, either from a nearby source or from the surface of \sname. In the context of starspots on \sname, either starspot/active-region crossing events, where the planets directly transit over these inhomogeneities, or asymmetric starspot/active-region distributions located off the transit chord could lead to differences in transit depths and shapes. In the case of starspot crossing events, we would expect to see added variability in the transit shape. Assuming we are not underestimating our error bars, this is readily seen in the transit of \planetb, both in the \tess\ and \textit{K2} data (Figure~\ref{fig:compare}).
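A minimal sketch of the per-pixel dilution check is given below. The ephemeris, geometry, and limb-darkening numbers are placeholders and the function name is arbitrary; only the overall recipe, a \texttt{batman} transit model multiplied by a 2\textsuperscript{nd}-order polynomial baseline and compared to a single-pixel light curve through $\chi^2$, follows the procedure described above.
\begin{verbatim}
import numpy as np
import batman

def pixel_chi2(time, flux, flux_err, t0, period, rp_over_rstar):
    """Chi^2 of a batman transit model times a quadratic baseline.

    All ephemeris values passed in are illustrative placeholders; the text
    fits this model to a 1-day window of each single-pixel light curve.
    """
    params = batman.TransitParams()
    params.t0 = t0                # transit midpoint [days]
    params.per = period           # orbital period [days]
    params.rp = rp_over_rstar     # planet-to-star radius ratio
    params.a = 15.0               # scaled semi-major axis (placeholder)
    params.inc = 89.0             # inclination [deg] (placeholder)
    params.ecc = 0.0
    params.w = 90.0
    params.u = [0.3, 0.2]         # quadratic limb darkening (placeholder)
    params.limb_dark = "quadratic"

    transit = batman.TransitModel(params, time).light_curve(params)

    # Quadratic (2nd-order) baseline fit to the out-of-transit trend.
    baseline = np.polyval(np.polyfit(time, flux / transit, 2), time)
    model = transit * baseline
    return np.sum(((flux - model) / flux_err) ** 2)
\end{verbatim}
In practice one would minimize this quantity over the radius ratio for every pixel and map the resulting depths, as in the bottom right panels of Figure~\ref{fig:compare}.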
\subsection{Radius of \planete}

In contrast to \planetknown, we measure a transit radius for \planete that is $\sim 3\sigma$ larger in the new \tess\ data than what was found in the original \textit{K2} data \citep{David2019a}. The difference in radii could be consistent with a large scale height, low mean-molecular-weight atmosphere around \planete \citep{deMooij12}. It is also possible the atmosphere of \planete is dominated by species with stronger absorption features at longer wavelengths, such as CO, H$_2$O, and CH$_4$ \citep{carter09}.

We find it is more likely that the original single transit of \planete was filled in by spot-crossing events, making it appear shallower in the \textit{K2} observations. Young stars are known to have very high spot coverage, anywhere from 30-80\% \citep[][]{grankin99, gully17, feinstein20}. It is therefore likely the surface of \sname is dominated by stellar inhomogeneities. This hypothesis is further strengthened by comparing the transit shape between the two data sets (Figure~\ref{fig:compare}). The center of the \textit{K2} transit is deeper than the edges and is consistent with the most recently observed transit depth. The lower contrast of starspot signals at longer wavelengths could potentially explain why the transit depth is consistently deeper in the \tess\ observations. Additionally, there is in-transit noise in the \tess\ data that could potentially be more starspot crossing events. Future \tess\ 20-second and 2-minute data may have the temporal resolution needed to yield insight into whether there is evidence of starspot crossing events. Our transit of \planete from the FFIs shows some structure. At the 10-minute cadence, it is hard to rule out noise as the source of this structure. However, if there is such evidence of starspot crossings in the higher-cadence \tess\ observations, it would be interesting if any of the events dilutes the transit enough to result in a transit depth similar to that which was seen in \textit{K2}.

\section{Conclusions}
\label{sec:conclusions}

We present updated ephemerides for all four known planets in the \sname system. Our GP model accounts for TTVs for \planetknown. The transit timings for \planetc deviate from a linear ephemeris by -30 to 30 minutes, and those for \planetd deviate by -5 to 5 minutes. Additionally, we note the transits of \planetknown occur 1.92 hours later, and 5.83 and 4.72 hours earlier, respectively, than what is expected if we extrapolate forward the ephemerides from \cite{David2019b}. We detected a second transit of \planete; this new transit time, in combination with the transit observed with \textit{K2}, allowed us to place tighter constraints on the period of the outermost planet. Our revised radius for planet e makes it the largest planet in the system and extends an intriguing size--separation correlation in \sname such that planet size increases monotonically with separation. We find the transit depths of \planetknown as observed by \tess\ are shallower than those observed by \textit{K2} by $1-2\sigma$, with the exception of \planete, which is $\sim 3\sigma$ larger. While this could possibly be due to ongoing dusty outflows that make the transit depth appear shallower, it is more likely the differences are due to starspot crossing events, asymmetric starspots off the transit chord, or contamination from nearby faint stars on the same \tess\ pixels as \sname. Modeling potential starspot crossing events could be accomplished using the 2-minute and 20-second cadence light curves, which will be available in the coming months.
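The comparison against the extrapolated ephemerides quoted above amounts to the following arithmetic; the reference epoch, period, and observed midpoint below are constructed placeholders (chosen only so that the script prints a $+1.92$~hour offset) and are not the values from \cite{David2019b} or Table~\ref{tab:table}.
\begin{verbatim}
# Offset of an observed transit from a linear ephemeris (observed - computed).
# All numbers are placeholders for illustration only.
t0_ref = 4141.0000   # reference transit midpoint [BKJD]
period = 24.1315     # assumed orbital period [days]
t_obs  = 4647.8415   # newly observed transit midpoint [BKJD]

n = round((t_obs - t0_ref) / period)   # transit epoch number
t_pred = t0_ref + n * period           # midpoint predicted by the ephemeris
offset_hours = (t_obs - t_pred) * 24.0
print(f"epoch {n}: observed - predicted = {offset_hours:+.2f} hours")
\end{verbatim}
Upcoming transit windows for scheduling follow-up observations follow from the same relation evaluated at future epochs.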
The youth of these planets could additionally favor hosting haze-dominated atmospheres. However, without mass estimates for \allplanets, it is difficult to determine if this is the cause of the different transit depths measured between the \textit{K2} data and our presented work. Radial velocity mass measurements are challenging for young planets due to underlying stellar activity. Through an intensive radial velocity campaign, \cite{suarez21} presented a new mass detection for \planetb and \planete. Our updated radius estimate for \planetb in combination with the mass estimate provided by \cite{suarez21} would yield a density of $1.29$~g/cm\textsuperscript{3}, which is slightly higher than what was originally reported. Our updated radius for \planete would yield a lower density of $2.04$~g/cm\textsuperscript{3}, which is still within a 1-$\sigma$ agreement with \cite{suarez21}. However, the period estimate for \planete is at a 4-$\sigma$ disagreement for the period estimate provided in this study and at a 2-$\sigma$ disagreement with the minimum period constraint provided by this new \tess\ data. A more promising approach to measuring the masses would be through TTVs \citep{agol18}. A full analysis of system parameters and TTVs from both the \textit{K2} and \tess\ light curves, and additional \textit{Spitzer} transit photometry will be presented in Livingston et al. (in prep). Additional transits at longer wavelengths or simultaneous multi-band photometry or spectroscopy could corroborate the potential of constraining the properties of these young atmospheres. %\begin{acknowledgments} \vspace{0.5mm} We thank Rodrigo Luger for developing \texttt{showyourwork!} \citep{luger21} and helping us debug this letter. We thank Chas Beichman, Sarah Blunt, Jacob Bean, and Darryl Seligman for helpful comments on our \tess\ proposal (DDT 036) and thoughtful conversations. We thank our anonymous referee for their thoughtful insights which improved the quality of this manuscript. ADF acknowledges support from the National Science Foundation Graduate Research Fellowship Program under Grant No. (DGE-1746045). This research has made use of NASA's Astrophysics Data System Bibliographic Services. This research made use of Lightkurve, a Python package for \textit{Kepler} and \tess\ data analysis \citep{lightkurve}. %\end{acknowledgments} \begin{deluxetable*}{l r r r r}[hbtp] \tabletypesize{\footnotesize} \tablecaption{\sname light curve fitting results and predicted ground-based transit midpoint events for \planete. 
\label{tab:table}} \tablehead{\\ \hline\ \textit{Star} & \textit{Value} & \textit{Prior} & & \\ \hline $R_\star [R_\odot]$ & $1.33_{-0.03}^{+0.04}$ & $\mathcal{G}(1.305, 0.07)$ & & \\ $M_\star [M_\odot]$ & $1.095_{-0.047}^{+0.049}$ & $\mathcal{G}(1.10, 0.05)$& & \\ $u_1$ & $0.32_{-0.19}^{+0.20}$ & $\mathcal{U}[0, 1]$ in $q_1$ & & \\ $u_2$ & $0.16_{-0.29}^{+0.31}$ & $\mathcal{U}[0, 1]$ in $q_2$ & & \\ $P_{rot}$ [days] & $2.97_{-0.04}^{+0.03}$ & $\mathcal{G}(\textrm{ln} 2.87, 2)$ & & \\ ln($Q_0$) & $0.72_{-0.21}^{+0.24}$ & $\mathcal{H}(\sigma=2)$ & & \\ $\Delta Q_0$ & $4.09 \pm 1.01$ & $\mathcal{G}(0, 2)$ & & \\ f [ppt] & $0.85_{-0.19}^{+0.11}$ & $\mathcal{U}[0.1, 1]$ & & \\ \hline\ \textit{Light Curve} & \textit{Value} & \textit{Prior} & & \\ \hline $\mu$ & $-1.66_{-9.28}^{+9.42}$& $\mathcal{G}(0, 10)$ & &\\ ln($\sigma_l$) & $-2.499_{-4.463}^{+4.444}$ & $\mathcal{G}(ln(0.1\sigma_\textrm{flux}), 10)$ & & \\ ln($\sigma_j$) & $-1.36_{-5.06}^{+4.96}$ & $\mathcal{G}(ln(0.1\sigma_\textrm{flux}), 10)$ & & \\ \hline\ \textit{Planets} & \textit{c} & \textit{d} & \textit{b} & \textit{e}\\ \hline $T_0$ [BKJD - 2454833] & $4648.16636_{-0.00339}^{+0.00269}$ & $4645.41494_{-0.00157}^{+0.00172}$ & $4648.09023_{-0.00132}^{+0.00129}$ & $4648.79668_{-0.00114}^{+0.00121}$ \\ $P$ [days] & $8.2438_{-0.0020}^{+0.0024}$ & $12.3960_{-0.0020}^{+0.0019}$ & $24.1315_{-0.0034}^{+0.0033}$ & 44.1699 \\ $R_p/R_\star$ & $0.0337 \pm 0.0009$ & $0.0409_{-0.0015}^{+0.0014}$ & $0.0636 \pm 0.0018$ & $0.0664_{-0.0021}^{+0.0025}$ \\ Impact parameter, $b$ & $0.14_{-0.10}^{+0.14}$ & $0.19_{-0.13}^{+0.12}$ & $0.45_{-0.04}^{+0.05}$ & $0.48_{-0.07}^{+0.06}$ \\ T$_{14}$ [hours] & $4.66_{-0.43}^{+0.49}$ & $5.59_{-0.53}^{+0.57}$ & $6.42_{-0.61}^{+0.66}$ & $7.44_{-0.71}^{+0.79}$ \\ $R_p [R_\oplus]$ & $5.05 \pm 0.14$ & $6.13 \pm 0.28$ & $9.53 \pm 0.32$ & $9.94 \pm 0.39$\\ $R_p [R_J]$ & $0.45 \pm 0.01$ & $0.55 \pm 0.03$ & $0.85 \pm 0.03$ & $0.89 \pm 0.04$\\ TTVs [minutes] & $-0.41 \pm 25.38$ & $-0.12 \pm 4.08$ & --- & --- \\ \hline\ \textit{Priors} & \textit{c} & \textit{d} & \textit{b} & \textit{e}\\ \hline $T_0$ [BKJD - 2454833] & $\mathcal{G}(4648.53,0.1)$ & $\mathcal{G}(4645.4,0.1)$ & $\mathcal{G}(4648.1,0.1)$ & $\mathcal{G}(4648.8,0.1)$ \\ log(P [\textrm{days}]) & $\mathcal{G}(\textrm{ln} 8.25, 1)$ & $\mathcal{G}(\textrm{ln} 12.40, 1)$ & $\mathcal{G}(\textrm{ln} 24.14, 1)$ & $\mathcal{G}(\textrm{ln} 36.70, 1)$\\ log(depth [ppt]) & $\mathcal{G}(\textrm{ln} 1.45, 0.1)$ & $\mathcal{G}(\textrm{ln} 1.90, 0.1)$ & $\mathcal{G}(\textrm{ln} 4.90, 0.1)$ & $\mathcal{G}(\textrm{ln} 3.73, 0.1)$ \\ Impact parameter, $b$ & $\mathcal{U}[0, 1]$ & $\mathcal{U}[0, 1]$ & $\mathcal{U}[0, 1]$ & $\mathcal{U}[0, 1]$ \\ T$_{14}$ [days] & $\mathcal{G}(\textrm{ln} 0.19, 1)$ & $\mathcal{G}(\textrm{ln} 0.23, 1)$ & $\mathcal{G}(\textrm{ln} 0.27, 1)$ & $\mathcal{G}(\textrm{ln} 0.31, 1)$ \\ TTVs [days] & $\mathcal{G}(T_{0,c}, 0.1)$ & $\mathcal{G}(T_{0,d}, 0.1)$ & $\mathcal{G}(T_{0,b}, 0.1)$ & --- } \startdata \enddata \tablecomments{$u_1$ and $u_2$ are the limb-darkening parameters sampled following the reparameterization described by \cite{kipping13}; $P_{rot}$ is the rotation period of \sname; ln($Q_0$) is the quality factor for the secondary oscillation used to fit the stellar variability; $\Delta Q_0$ is the difference between the quality factors of the first and second modes; f is the fractional amplitude of the second mode compared the first; $\mu$ is the mean of the light curve; $\sigma_i$ and $\sigma_j$ are the light curve noise and in-transit jitter. 
Priors are noted for parameters that were directly sampled. The distributions are as follows -- $\mathcal{G}$: Gaussian; $\mathcal{H}$: Half-normal; $\mathcal{U}$: Uniform. $\sigma_\textrm{flux}$ is the standard deviation of the light curve.} \end{deluxetable*} \begin{deluxetable*}{l r r r r}[hbtp] \tabletypesize{\footnotesize} \tablecaption{Predicted transit midpoint events for \planete.} \label{tab:e} \tablehead{\colhead{P [days]} & \colhead{Posterior Prob.} & \colhead{Observable Dates UT} & & \\ \hline 44.1699 $\pm$ 0.0001 & 0.071 & 21/12/2021 15:17 & 03/02/2022 19:22 & 19/03/2022 23:27 \\ \hline 45.0033 $\pm$ 0.0001 & 0.069 & 23/12/2021 07:17 & 06/02/2022 07:22 & 23/03/2022 07:27 \\ \hline 45.8687 $\pm$ 0.0001 & 0.066 & 25/12/2021 00:50 & 08/02/2022 21:41 & 26/03/2022 18:32 \\ \hline 46.7681 $\pm$ 0.0001 & 0.064 & 26/12/2021 20:00 & 11/02/2022 14:26 & 30/03/2022 08:52 \\ \hline 47.7035 $\pm$ 0.0001 & 0.062 & 28/12/2021 16:54 & 14/02/2022 09:47 & 03/04/2022 02:40 \\ \hline 48.6770 $\pm$ 0.0001 & 0.061 & 30/12/2021 15:38 & 17/02/2022 07:53 & 07/04/2022 00:07 \\ \hline 49.6911 $\pm$ 0.0001 & 0.059 & 01/01/2022 16:18 & 20/02/2022 08:54 & 11/04/2022 01:29 \\ \hline 50.7484 $\pm$ 0.0001 & 0.057 & 03/01/2022 19:03 & 23/02/2022 13:01 & 15/04/2022 06:59 \\ \hline 51.8516 $\pm$ 0.0001 & 0.055 & 06/01/2022 00:01 & 26/02/2022 20:27 & 19/04/2022 16:53 \\ \hline 53.0039 $\pm$ 0.0001 & 0.053 & 08/01/2022 07:19 & 02/03/2022 07:25 & 24/04/2022 07:30 \\ \hline 54.2085 $\pm$ 0.0001 & 0.051 & 10/01/2022 17:08 & 05/03/2022 22:09 & 29/04/2022 03:09 \\ \hline 55.4692 $\pm$ 0.0001 & 0.049 & 13/01/2022 05:39 & 09/03/2022 16:55 & --- \\ \hline 56.7899 $\pm$ 0.0001 & 0.047 & 15/01/2022 21:03 & 13/03/2022 16:00 & --- \\ \hline 58.1750 $\pm$ 0.0001 & 0.045 & 18/01/2022 15:32 & 17/03/2022 19:44 & --- \\ \hline 59.6294 $\pm$ 0.0001 & 0.042 & 21/01/2022 13:21 & 22/03/2022 04:27 & --- \\ \hline 61.1583 $\pm$ 0.0001 & 0.039 & 24/01/2022 14:44 & 26/03/2022 18:32 & --- \\ \hline 62.7678 $\pm$ 0.0001 & 0.037 & 27/01/2022 19:59 & 31/03/2022 14:25 & --- } \startdata \enddata \tablecomments{Transit dates were calculated using the Transit Service Query Form on the NASA Exoplanet Archive \citep{Akeson2013}. We queried observable transits between December 3, 2021 through April 30, 2022. Dates presented in DD/MM/YYYY format. A machine-readable version of this table can be found here.} \end{deluxetable*} %\vspace{5mm} \facilities{\tess\ \citep{Ricker2015}, \textit{Kepler} \citep{Howell2014}} \software{\texttt{exoplanet} \citep{exoplanet2021}, \texttt{EVEREST 2.0} \citep{luger18}, \texttt{lightkurve} \citep{lightkurve}, \texttt{matplotlib} \citep{matplotlib}, \texttt{PyMC3} \citep{Salvatier16}, \texttt{starry} \citep{luger19}, \texttt{theano} \citep{theano}, \texttt{tica} \citep{fausnaugh20}, \texttt{showyourwork!} \citep{luger21}, \texttt{astropy} \citep{astropy:2013, astropy18}, \texttt{astroquery}\citep{astroquery19}, \texttt{numpy} \citep{numpy}, \texttt{celerite2} \citep{dfm17}, \texttt{batman} \citep{Kreidberg15} } \bibliography{main} \bibliographystyle{aasjournal} \end{document}
\section{Building an Optical Touch-table}
\begin{frame}%\frametitle{}
\begin{block}{Building an Optical Touch-table}
\begin{minipage}{1.0\linewidth}
\begin{enumerate}
\item Build the touch-table (MTMini)\\
\includegraphics[scale=.19]{images/mesa1.png}\\
\url{https://goo.gl/RvtIIv}\\
\url{https://goo.gl/3FO6jB}
\item Add a display to the multi-touch table (see-through LCD screen)\\
\includegraphics[scale=.19]{images/mesa2.jpg}\\
\url{https://goo.gl/RtLLIr}
%\item Required software\\
%\includegraphics[scale=.09]{images/CCV.png}$\;$\url{https://goo.gl/eDPr9z}
%\includegraphics[scale=.08]{images/reactivision04.png}$\;$\url{https://goo.gl/YjS2hy}
\end{enumerate}
\end{minipage}
\end{block}
\end{frame}
\documentclass{article} \usepackage{graphicx} \usepackage{wrapfig} \usepackage{subcaption} \usepackage[margin=1in]{geometry} \usepackage{amsmath} % or simply amstext \usepackage{siunitx} \usepackage{booktabs} \usepackage[export]{adjustbox} \newcommand{\angstrom}{\textup{\AA}} \usepackage{cleveref} \usepackage{booktabs} \usepackage{gensymb} \usepackage{float} \title{Understanding the nanoscale structure of hexagonal phase lyotropic liquid crystal membranes} \author{Benjamin J. Coscia \and Douglas L. Gin \and Richard D. Noble \and Joe Yelk \and Matthew Glaser \and Xunda Feng \and Michael R. Shirts} \begin{document} \bibliographystyle{ieeetr} \graphicspath{{./figures/}} \maketitle \section*{Introduction} Nanostructured membrane materials have become increasingly popular for aqueous separations applications such as desalination and biorefinement because they offer the ability to control membrane architecture at the atomic scale allowing the design of solute-specific separation membranes. \cite{humplik_nanostructured_2011} \begin{itemize} \item Most membrane-based aqueous separations of small molecules can be achieved using reverse osmosis (RO) or nanofiltration (NF) \cite{van_der_bruggen_review_2003} \end{itemize} While RO and NF have seen many advances in the past few decades, they are far from perfect separation technologies. \begin{itemize} \item \textit{RO membranes} \item Inconsistent performance : Current state-of-the-art RO membranes are unstructured with tortuous and polydisperse diffusion pathways which leads to inconsistent performance \cite{song_nano_2011} \item High energy requirements : Necessarily high feed pressures drive up energy requirements which strains developing regions and contributes strongly to CO\textsubscript{2} emissions. \cite{mcginnis_global_2008} \item Separation based on differences in solubility and diffusivity: Moreover, designing RO membranes to achieve targeted separations of specific solutes is nearly impossible because various solutes dissolve into and diffuse through the polymer matrix at different rates. \cite{wijmans_solution-diffusion_1995} \item At best, one can exploit these differences to create a functional selective barrier. \item \textit{NF membranes} \item NF was introduced as an intermediate between RO and ultrafiltration, having the ability to separate organic matter and salts on the order of one nanometer in size. \item Larger and well-defined pores drive down energy requirements while still affording separation of solutes as small as ions to some degree \cite{van_der_bruggen_review_2003} \item NF is often used as a precursor to reverse osmosis \item Unfortunately, NF membranes, like RO, possess a pore size distribution which limits their ability to perform precise separations \cite{bowen_modelling_2002} \end{itemize} Nanostructured membranes can bypass many of the performance issues which plague traditional NF and RO membranes. \begin{itemize} \item Tune size and functionality of building blocks to control pore size and shape: One can accomplish targeted separations with high selectivity by tuning shape, size and functionality of the molecular building blocks which form these materials. % BJC: "these materials" --> "nanostructured membranes", or is that redundant? \item As a result, solute rejecting pores can have their sizes tuned uniformly, resulting in strict size cut-offs. \item Entirely different mechanisms may govern transport in a given nanostructured material which can inspire novel separation techniques. 
\end{itemize} Development of nanostructured materials has been limited by the ability to synthesize and scale various fundamentally sound technologies. \begin{itemize} \item Graphene sheets are atomically thick which results in excellent permeability but defects during manufacturing severely impact selectivity. \cite{cohen-tanugi_multilayer_2016} \item Molecular dynamics simulations of carbon nanotubes show promise \cite{humplik_nanostructured_2011} but synthetic techniques are unable to achieve scalable alignment and pore monodispersity.\cite{hata_water-assisted_2004,maruyama_growth_2005} \item Zeolites have sub-nm pores with a narrow pore size distribution and MD simulations exhibit complete rejection of solvated ions, \cite{murad_molecular_1998} however, experimental rejection was low and attributed to interstitial defects formed during membrane synthesis \cite{li_desalination_2004} \item There is a need for a scalable nanostructured membrane \end{itemize} Self assembling lyotropic liquid crystals (LLCs) are a suitable candidate for aqueous separation applications. \begin{itemize} \item LLCs share the characteristic ability of nanostructured membrane materials to create highly ordered structures with the added benefits of low cost and synthetic techniques feasible for large scale production \cite{feng_scalable_2014} \item Neat liquid crystal monomer forms the thermotropic, Col\textsubscript{h} phase. The presence of small amounts of water results in the H\textsubscript{II} phase. \item In both cases, monomers assemble into mesophases made of hexagonally packed, uniform size, cylinders with hydrophilic groups oriented inward towards the pore center and hydrophobic groups facing outward. \item H\textsubscript{II} and Col\textsubscript{h} phase systems created by the monomer named Na-GA3C11 has been extensively studied experimentally \cite{smith_ordered_1997, %BJC: IUPAC chemical name here? zhou_supported_2005,resel_h2-phase_2000,feng_scalable_2014,feng_thin_2016}. \item Until recently, mesophases formed by Na-GA3C11 could not be macroscopically aligned, resulting in a low flux membrane, slowing research in the field. \item In 2014, Feng et al. showed that the mesophases could be aligned using a magnetic field with subsequent crosslinking to lock the structure in place \cite{feng_scalable_2014} \item In 2016, Feng et al. showed that the same result could be obtained using a technique termed soft confinement \cite{feng_thin_2016}. \item Following this breakthrough, research into LLC membranes has been reinvigorated \end{itemize} A molecular level understanding of LLC membrane structure, enabled by molecular dynamics simulations, will provide guidelines to reduce the large chemical space available to design monomers for creation of separation-specific membranes. \begin{itemize} % \item LLCs are versatile and controllable with a \textbf{large chemical design % space} available for membrane design %BJC: redundant \item Over the past 20 years, HII phase LLC membrane studies have been limited primarily to Na-GA3C11 with some characterization done after minor structural modifications \cite{resel_structural_2000}. \item Rejection studies show that this membrane can not perform separations of solutes less than 1.2 nm in diameter because the pores are too large \cite{zhou_supported_2005}. \item We do not yet understand how to reduce the effective pore size or how to tune the chemical environment in the nanopores for effective water desalination or small organic molecule separations. 
\item It will be challenging to efficiently narrow down the large design space in a laboratory setting without a robust model. \item The only source of predictive modeling has been macroscopic models which likely do not adequately describe transport at these length scales. % BJC: Reference. I think it is just w.r.t. bicontinuous cubic \item Choice of head group may play a role in the rejection of charged or uncharged solutes. \item Choice of counterion may influence the establishment of a Donnan potential affecting the degree to which the membrane can exclude charged species. \item A good molecular model should incorporate a detailed picture of the nanoscopic pore structure which will be crucial to understanding the role of monomer structure in membrane design. \item Molecular dynamics will have the required level of detail \end{itemize} Our approach to constructing a general model will follow the development of a model of a specific LLC membrane with sufficient experimental characterization. \begin{itemize} \item We have chosen to focus on assemblies formed by Na-GA3C11 \item We have also narrowed our scope to the development of a model of the Col\textsubscript{h} phase membrane. \item Compared to the H\textsubscript{II} phase, the Col\textsubscript{h} phase is a simpler starting point, due to the absence of water, and has equivalent experimental structural data. \item Despite having structural data, there is still information which experiment cannot definitively answer. \end{itemize} %BJC: the following just seems like filler now that I've reorganized % A clear picture of the nanoscopic LLC membrane structure, gained by building % a molecular model will provide evidence to answer existing and newly proposed % questions. % \begin{itemize} % \item Despite having structural data, there is still information which % experiment cannot definitively answer % \end{itemize} %MRS: break this down into more hierarchy - there are 11 separate points. Can they be grouped at all for clearer understanding? elucidate what the aspects are we need a clearer picture. %BJC: I moved the logic for choice of monomer to system to model to a separate paragraph (above this). I then moved the other three things I talk about into their own paragraphs to keep the ideas separate (following paragraphs). %MRS: since the questions we answer in the paper have a lot to do with the structure of the membranes, somewhere relatively early in the paper we need to lay out the facts that 1) these membranes could be really good (I think you do establish this) 2) we really would need to know the structure better to really do useful things with these, and 3) MD is a very helpful way to gain more information about the structure, because of the limited range of experiments we can do with these systems. That is the main rationale, and it should be made clear fairly early in the introduction. Very important to make your story absolutely as clear as possible. %BJC: I moved and edited a paragraph that was originally later on to be 2 above this one. I think it does a better job addressing points 2 and 3. And now it addresses those points earlier in the intro (as early as it could go where I think it still makes sense in context) Despite having structural data, there is still information which experiment cannot definitively answer. There are several key questions that we intend to answer which will be laid out and numbered in subsequent paragraphs. 
%BJC: How question (1) and (2) used to be ordered: % % (1) Do monomers partition into and persist as defined monomer layers? % H\textsubscript{II} and Col\textsubscript{h} phase LC membrane pores are % thought to be composed of monomer layers stacked on top of each other. % \begin{itemize} % \item Their existence is supported by experimental evidence of strong % $\pi$-$\pi$ stacking interactions in the direction perpendicular to the % membrane plane. % \item $\pi$-$\pi$ stacking will only occur between monomer head groups which % leaves no description of what is happening in monomer tail region % \item It is possible that $\pi$-$\pi$ stacking occurs vertically % with no order in reference to neighboring stacked columns of the same pore. % BJC: not sure if this is actually true % \end{itemize} % % (2) If layers do exist, how many monomers constitute a single layer? % There has been no definitive answer to the question in literature. % \begin{itemize} % \item A simple molecular simulation study of a similar molecule suggested % that there are 4 monomers in each layer. Their estimation is based on a % simulated system containing only 16 total monomers which likely does not sufficiently % model the chemical environment present in the real system.~\cite{zhu_methacrylated_2006}. % \item A separate calculation based on the volume of the liquid crystal monomers proposes % that there are seven monomers in each layer~\cite{resel_structural_2000}. % \item A molecular model orders of magnitude larger than any other reported atomistic % liquid crystal membrane simulations has the best chance of directly answering this question. % \end{itemize} Monomers in the Col\textsubscript{h} system are theorized to be partitioned into stacked layers which form columnar pores. There has been no definitive answer in literature regarding the number of monomers in each layer. We want to know (1) If layers do exist, how many monomers constitute a single layer? \begin{itemize} \item A simple molecular simulation study of a similar molecule suggested that there are 4 monomers in each layer. Their estimation is based on a simulated system containing only 16 total monomers which likely does not sufficiently model the chemical environment present in the real system.~\cite{zhu_methacrylated_2006}. \item A separate calculation based on the volume of the liquid crystal monomers proposes that there are seven monomers in each layer~\cite{resel_structural_2000}. \item A molecular model orders of magnitude larger than any other reported atomistic liquid crystal membrane simulations has the best chance of directly answering this question. \item We can directly change the layer composition and note its effect on membrane structure. \end{itemize} (2) Does our model support the existence of layers and if so, how well defined are the layers? \begin{itemize} \item Experimentally, their existence is supported by evidence of strong $\pi$-$\pi$ stacking interactions in the direction perpendicular to the membrane plane. \item $\pi$-$\pi$ stacking will only occur between monomer head groups which leaves no description of what is happening in the monomer tail region \item The tails may entangle isotropically while stacking order is maintained among headgroups. %\item It is possible that $\pi$-$\pi$ stacking occurs vertically %with no order in reference to neighboring stacked columns of the same pore. % BJC: not sure if this is actually true \end{itemize} (3) How do monomers in each layer position themselves with respect to surrounding layers? 
\begin{itemize}
  \item A driving force of self assembly in this system is thought to be
  $\pi$-$\pi$ stacking interactions between aromatic
  headgroups~\cite{gazit_possible_2002}.
  \item Gas phase ab initio studies of benzene dimers have shown a clear
  energetic advantage for parallel displaced and T-shaped $\pi$-$\pi$ stacking
  conformations versus a sandwiched
  conformation~\cite{sinnokrot_estimates_2002}.
  \item Substituted benzene rings exhibit an even stronger $\pi$-$\pi$
  stacking attraction which favors the parallel displaced configuration in all
  cases except where the substitutions are extremely electron
  withdrawing~\cite{waller_hybrid_2006,ringer_effect_2006}.
  \item We can use simulated X-ray diffraction patterns to compare the two
  stacking configurations.
\end{itemize}

(4) Can the system exist in other metastable states or phases that are not
accessed during experiments? There remains the possibility that there is more
than one metastable state associated with a given LLC system.
\begin{itemize}
  \item Simulating a membrane atomistically will require many atoms, which
  further limits the timescales accessible with MD.
  \item It is reasonable to expect that we will generate configurations which
  are kinetically trapped in a metastable free energy basin.
  \item We must be able to identify which state is produced experimentally and
  why others are not.
\end{itemize}

Once we have addressed all of the above questions, we must show that the
developed molecular model is consistent with physical observations so that we
can rely on conclusions drawn about structural features characteristic of the
system.
\begin{itemize}
  \item In this study, we build a significantly more realistic atomistic model
  of LLC membranes than has previously been reported, and explore what new
  structural information can be gained and which structural hypotheses are
  supported by this model.
  \item We validate the model using as much experimental information as
  possible.
  \item We are most interested in reproducing the conclusions about structure
  which have been made from X-ray diffraction (XRD) experiments and in
  matching ionic conductivity measurements~\cite{feng_thin_2016}.
  \begin{itemize}
    \item We have compared simulated X-ray diffraction patterns to experiment
    in order to match the major features present in the 2D patterns.
    \item We calculated ionic conductivity using two agreeing methods.
    \item We examined the influence of crosslinking on membrane structure.
  \end{itemize}
  \item The structure-building approach and analysis used in this paper can be
  readily extended to the H\textsubscript{II} phase and other similar LC
  systems.
\end{itemize}

\section*{Methods}

Liquid crystal monomers were parameterized using the General AMBER Force Field
(GAFF)~\cite{wang_development_2004} with the Antechamber
package~\cite{wang_automatic_2006} provided with
AmberTools16~\cite{case_ambertools16_2016}. Atomic charges were assigned using
tools from OpenEye Scientific Software. All molecular dynamics simulations
were run using
GROMACS
2016~\cite{bekker_gromacs:_1993,berendsen_gromacs:_1995,van_der_spoel_gromacs:_2005,hess_gromacs_2008}.

An ensemble of characteristic, low-energy vacuum monomer configurations was
constructed by applying a simulated annealing process to a parameterized
monomer.
\begin{itemize}
  \item Monomers were cooled from 1000 K to 50 K over 10 nanoseconds.
  \item A low energy configuration was randomly pulled from the trajectory and
  charges were reassigned using the am1bccsym method of molcharge, shipped
  with OpenEye Scientific Software's QUACPAC package.
  \item Using the new charges, the monomer system was annealed again and
  monomer configurations were pulled from the trajectory to be used for full
  system construction (Figure~\ref{fig:python}a).
\end{itemize}

The timescale for self assembly of monomers into the hexagonal phase is
unknown and likely outside of a reasonable length for an atomistic simulation,
calling for a more efficient way to build the system.
\begin{itemize}
  \item Previous work has shown a coarse-grained model self-assemble into the
  H\textsubscript{II} phase configuration in $\approx$ 1000
  ns~\cite{mondal_self-assembly_2013}.
  \item We attempted atomistic self-assembly by packing monomers into a box
  using Packmol~\cite{martinez_packmol:_2009}.
  \item Simulations of greater than 100 ns show no indicators of progress
  towards an ordered system.
  \item To bypass the slow self-assembly process, Python scripts are used to
  assemble monomers into a structure close to one of a number of hypothesized
  equilibrium configurations (Figure~\ref{fig:python}); a minimal sketch of
  this placement procedure follows the next list.
\end{itemize}

\begin{figure}[!ht]
  \centering
  \includegraphics[width=0.75\linewidth]{build.PNG}
  \caption{(a) A single monomer was parameterized and annealed to produce a
  low energy configuration. (b) Monomers are rotated and assembled into layers
  with hydrophilic centers. (c) Twenty layers are stacked on top of each other
  to create a pore. (d) Pores are duplicated and placed in a monoclinic unit
  cell.}\label{fig:python}
\end{figure}

A typical simulation volume contains four pores in a monoclinic unit cell, the
smallest unit cell that maintains hexagonal symmetry when extended
periodically.
\begin{itemize}
  \item Each pore is made of twenty stacked monomer layers with periodic
  continuity in the z direction, avoiding any edge effects and creating an
  infinite length pore ideal for studying transport.
  \item A small number of layers is preferred in order to reduce computational
  cost and to allow us to look at longer timescales.
  \item Ultimately, we chose to build a system with 20 monomer layers in each
  pore in order to obtain sufficient resolution when simulating X-ray
  diffraction patterns. This point will be explained in more detail later.
  \item We chose initial guesses for the remaining structural parameters based
  on experimental data and treated them as variables during model development.
\end{itemize}
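As a concrete illustration of the build procedure, the sketch below assembles
copies of a single monomer conformation into stacked layers and replicates the
resulting pore on the lattice points of a monoclinic cell. It is a minimal
sketch rather than our production build scripts; the function names, the use
of one rigid monomer conformation, and the default parameter values are
illustrative.
\begin{verbatim}
# Hypothetical sketch of the pore-building step: rotate and translate copies
# of one annealed monomer (an (N, 3) numpy array, not shown) into layers,
# stack the layers into a pore, and replicate the pore on a hexagonal lattice.
import numpy as np

def rotate_z(xyz, angle_deg):
    """Rotate coordinates about the z-axis by angle_deg degrees."""
    a = np.radians(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0,        0.0,       1.0]])
    return xyz @ R.T

def build_pore(monomer, n_mon=5, n_layers=20, dz=0.37, offset=True):
    """Stack n_layers layers of n_mon monomers, dz (nm) apart along z.

    If offset is True, alternate layers are rotated by an extra 180/n_mon
    degrees (parallel displaced); otherwise layers sit directly on top of
    each other (sandwiched)."""
    pieces = []
    for k in range(n_layers):
        extra = (180.0 / n_mon) * (k % 2) if offset else 0.0
        for m in range(n_mon):
            xyz = rotate_z(monomer, 360.0 * m / n_mon + extra)
            pieces.append(xyz + np.array([0.0, 0.0, k * dz]))
    return np.concatenate(pieces)

def build_unit_cell(pore, spacing=4.5):
    """Place four pores on the lattice points of a monoclinic unit cell
    (60 degree angle between the xy box vectors), spacing nm apart."""
    a = np.array([spacing, 0.0, 0.0])
    b = np.array([spacing * np.cos(np.radians(60.0)),
                  spacing * np.sin(np.radians(60.0)), 0.0])
    return np.concatenate([pore + s for s in (0 * a, a, b, a + b)])
\end{verbatim}
The production scripts additionally place the head-group reference atom at the
chosen pore radius and write out a topology, but the geometry follows the same
logic.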
We used experimental wide angle X-ray scattering (WAXS) data (produced as
described in~\cite{feng_scalable_2014}) and small angle X-ray scattering
(SAXS) data from~\cite{feng_thin_2016} to inform some of our initial guess
choices (Figure~\ref{fig:SWAXS}). We rely primarily on the 2D WAXS data since
it encodes all structural details down to the sub-nm scale; the conversion
between measured $q$ values and the real-space distances quoted below is
summarized after the figure.
\begin{itemize}
  \item There are five major features of interest present in the 2D
  experimental pattern shown in Figure~\ref{fig:WAXS}.
  \item The first is located at $q_z$ = 1.7 \angstrom$^{-1}$, corresponding to
  a real spacing of 3.7 \angstrom. The reflection is attributed to $\pi$-$\pi$
  stacking between aromatic rings in the direction perpendicular to the
  membrane plane, or z-axis~\cite{feng_scalable_2014}. For simplicity, this
  reflection will be referred to as R-$\pi$.
  \item A weak intensity line is located at exactly half the $q_z$ value of
  R-$\pi$ ($q_z$ = 0.85 \angstrom$^{-1}$), corresponding to a real space
  periodic spacing of 7.4 \angstrom. This reflection has been interpreted as
  2\textsubscript{1} helical ordering of aromatic rings along the z axis,
  meaning that if the positions of the aromatic rings can be traced by a
  helix, then for each turn in the helix there should be two aromatic rings.
  For this reason it will be referred to as R-helix.
  \item A third major reflection is marked by a low intensity ring located at
  r = 1.4 \angstrom$^{-1}$. The real space separation corresponds to 4.5
  \angstrom, which is characteristic of the average spacing between packed
  alkane chains. This reflection will be called R-alkanes.
  \item Within R-alkanes are four spots of higher intensity which will be
  called R-spots. All are located $\approx 40$ degrees from the $q_z$ axis in
  their respective quadrants. In many liquid crystal systems this can be
  explained by the tilt angle of the alkane chains with respect to the xy
  plane.
  \item The final major reflection corresponds to the spacing and symmetry of
  the d\textsubscript{100} plane, which can be related to the distance between
  pores. The feature, which will be called R-pores, is characterized by dots
  along $q_z$ = 0. The spacing between dots is indicative of the hexagonal
  symmetry of the packed pores. The same information is obtained at higher
  resolution using a SAXS setup; radially integrating the 2D data gives the 1D
  curve shown in Figure~\ref{fig:SAXS}.
\end{itemize}

\begin{figure}[!ht]
  \centering
  \begin{subfigure}[t]{0.43\linewidth}
    \centering
    \includegraphics[width=\linewidth]{SAXS.png}
    \caption{}\label{fig:SAXS}
  \end{subfigure}
  \begin{subfigure}[t]{0.47\linewidth}
    \centering
    \raisebox{.2\textwidth}{%
    \includegraphics[width=\linewidth]{WAXS_soft_confined.png}
    }
    \caption{}\label{fig:WAXS}
  \end{subfigure}
  \caption{(a) 1D small angle X-ray scattering indicates hexagonal packing of
  pores as well as the spacing between pores. (b) 2D wide angle X-ray
  scattering gives details about repeating features less than 1 nanometer
  apart.}\label{fig:SWAXS}
\end{figure}
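For reference, the real-space spacings quoted above follow directly from the
measured scattering vector magnitudes. With scattering angle $2\theta$ and
wavelength $\lambda$,
$$ q = \dfrac{4\pi\sin\theta}{\lambda}, \qquad d = \dfrac{2\pi}{q}, $$
so that, for example, R-$\pi$ at $q_z$ = 1.7 \angstrom$^{-1}$ corresponds to a
real-space spacing of $d = 2\pi/1.7 \approx 3.7$ \angstrom.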
We chose the initial layer spacing based on R-$\pi$.
\begin{itemize}
  \item Each monomer was rotated so the plane of the aromatic head groups
  would be parallel to the xy plane.
  \item The layers are placed so aromatic rings are stacked 3.7 \angstrom~apart
  in the z-direction.
  \item We extracted the equilibrium distance between layers based on the
  first peak of a spatial correlation function, $g(z)$, measured along the
  z-axis (perpendicular to the membrane plane).
  \item To calculate $g(z)$, we binned the z component of the distances
  between the center of mass of each benzene ring and all others of the same
  pore over 50 ns of equilibrated trajectory and then normalized by the
  average number density; a sketch of this calculation is given after the next
  list.
  \item To extract the average distance between layers, we applied a discrete
  Fourier transform to $g(z)$ and extracted the highest intensity frequency.
  \item We compare the degree of layering between systems based on the
  amplitude of the first peak in $g(z)$: half the difference between the first
  maximum and the following minimum, expressed as a percentage deviation from
  the average number density.
  \item Our simulations tend to equilibrate to a wider interlayer spacing of
  $\approx$ 4.1 \angstrom, which inspired separate systems starting with layer
  spacings greater than 4 \angstrom.
\end{itemize}

We placed pores at a chosen initial spacing based on R-pores, then allowed the
system to settle into its preferred spacing.
\begin{itemize}
  \item The model's pore centers are spaced 4.5 nm apart initially, $\approx$
  10 \% larger than the experimental value of 4.12 nm, in order to reduce
  unintended repulsions resulting from a tightly packed initial configuration.
  \item To calculate the equilibrated pore spacing, we measured the distance
  between pore centers.
  \item Pore centers were located by averaging the coordinates of sodium ions
  in their respective pores.
  \item Statistics were generated using the bootstrapping technique (See
  Supplemental Information).
  \begin{itemize}
    \item For each bootstrap trial, we recreate an equilibrium trajectory by
    randomly sampling from the original trajectory.
    \item Each pore spacing has its own trajectory with its own average value.
    \item The average value of each pore spacing is averaged to get the
    overall average pore spacing.
    \item The standard deviation of the average values is reported as the
    uncertainty in pore spacing.
    \item We are interested in 5 pore-to-pore distances which should all be
    equal in a perfect hexagonal array; only 4 are independent.
  \end{itemize}
\end{itemize}
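A minimal version of the layer-spacing analysis might look like the following
sketch, which assumes the z coordinates of the benzene-ring centers of mass
for one pore have already been extracted from the trajectory; array names, bin
settings, and the peak-finding shortcut are illustrative rather than the exact
analysis code.
\begin{verbatim}
# Hypothetical sketch: build g(z) from pairwise z-separations of ring centers
# of mass, then pull the dominant period out of its discrete Fourier transform.
import numpy as np

def g_z(ring_com_z, z_max=4.0, bins=200):
    """Histogram of pairwise z-separations (nm), normalized by the mean bin
    count so that g(z) fluctuates about 1 (a proxy for the number density)."""
    dz = np.abs(ring_com_z[:, None] - ring_com_z[None, :])
    dz = dz[np.triu_indices_from(dz, k=1)]          # unique pairs only
    counts, edges = np.histogram(dz, bins=bins, range=(0.0, z_max))
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, counts / counts.mean()

def layer_spacing(centers, g):
    """Period (nm) of the strongest non-zero frequency in g(z)."""
    signal = g - g.mean()                           # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=centers[1] - centers[0])
    k = np.argmax(spectrum[1:]) + 1                 # skip the zero frequency
    return 1.0 / freqs[k]

def layering_amplitude(g):
    """Percent deviation of the first peak from the mean density of 1.

    The global maximum stands in for the first peak here; the real analysis
    locates the first maximum and the minimum that follows it."""
    i_max = int(np.argmax(g))
    i_min = i_max + int(np.argmin(g[i_max:]))
    return 100.0 * 0.5 * (g[i_max] - g[i_min])
\end{verbatim}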
We used experimental transmission electron microscopy (TEM) and size exclusion
rejection data~\cite{feng_scalable_2014,feng_thin_2016,zhou_supported_2005} to
inform our definition of pore radius in the initial configuration.
\begin{itemize}
  \item Experimental evidence suggests uniform pores with radii of 0.6 nm.
  \item Comparing a geometric measurement of pore size derived from an
  atomistic model to a less precise, experimentally derived pore size estimate
  will give ambiguous results.
  \item What is meant by pore radius will not be clear until we establish a
  clear picture of the nanoscopic pore environment.
  \item When constructing pores, we chose the carboxylate carbon from the
  monomer head group as a reference atom and placed it a distance $r$ from the
  pore center, where $r$ is the pore radius (See Supplemental Information).
  \item We will not make direct comparisons of pore radius between our model
  and experiment, to avoid this ambiguity; however, we do define a pore radius
  for our own purposes.
  \item To measure the pore radius in our model, we calculate the distance
  between the center of mass of each aromatic ring and the center of mass of
  all aromatic rings in their respective pores.
\end{itemize}

The relative interlayer orientation was chosen based on clues from diffraction
data as well as the various known stacking modes of benzene and substituted
benzene rings: sandwiched, parallel displaced and
T-shaped~\cite{sinnokrot_estimates_2002}
(\Cref{fig:sandwiched,fig:pd,fig:tshaped}).
\begin{itemize}
  \item The T-shaped configuration was ruled out because its $\approx$ 5
  \angstrom~equilibrium stacking distance~\cite{sinnokrot_estimates_2002} is
  inconsistent with R-$\pi$.
  \item The system's preference towards the sandwiched vs. parallel displaced
  stacking modes will be explored.
  \item Both have reported stacking distances near 3.7 \angstrom.
  \item Headgroups in the sandwiched configuration are stacked directly on top
  of each other, while stacked headgroups in the parallel displaced
  configuration are offset by $180/n_{\mathrm{mon}}$ degrees, where
  $n_{\mathrm{mon}}$ equals the number of monomers per layer.
\end{itemize}

We tested a number of equilibration schemes of increasing complexity in an
effort to match experiment while overcoming kinetic limitations that might
trap the system in metastable states.
All schemes start from an initial configuration generated with the chosen
structure variables.
\begin{itemize}
  \item Equilibration Scheme A:
  \begin{itemize}
    \item Steepest descent energy minimization.
    \item Short NPT simulation ($\approx$ 5 ns) with the Berendsen barostat
    (tau-p = 1) at 300 K.
    \item Switch to the Parrinello-Rahman barostat (tau-p = 10) and run an NPT
    simulation for 400--500 ns.
  \end{itemize}
  \item Equilibration Scheme B:
  \begin{itemize}
    \item Restraints fix monomer head groups in the sandwiched or
    parallel-displaced configurations while allowing monomer tails to settle.
    \item Doing so also mitigates system dependence on the initial monomer
    configuration.
    \item The restrained portion of the equilibration scheme is run in the NVT
    ensemble.
    \item Every 50 ps, each force constant is reduced to the square root of
    its previous value, starting from $10^6$ kJ mol$^{-1}$ nm$^{-2}$ (the
    resulting schedule is illustrated in the sketch below).
    \item Once the force constant is below 10 kJ mol$^{-1}$ nm$^{-2}$, the
    restraints are slowly released until there is no more restraining
    potential.
    \item The resulting unrestrained structure is allowed to equilibrate
    further in the NPT ensemble for 400--500 ns in the same way as Scheme A.
  \end{itemize}
  \item Equilibration Scheme C:
  \begin{itemize}
    \item A system equilibrated according to Scheme B is cut in half so we can
    access longer simulation timescales.
    \item The system is simulated at 335 K, close to its isotropic transition
    temperature, for 200 ns.
    \item The structure is then linearly cooled back down to 300 K over 500
    ns.
    \item The system size is doubled back to its original size and
    equilibrated for 200 ns at 300 K.
    \item This scheme was tried with and without an applied electric field.
  \end{itemize}
  \item Equilibration Scheme D:
  \begin{itemize}
    \item Even in the nearly dry Col\textsubscript{h} system, there exists an
    equilibrium concentration of water.
    \item The hydrogen bonding network formed by the water may play a role in
    structuring the pore.
    \item We obtained an estimate of the equilibrium pore water content by
    solvating an initial configuration with water baths above and below the
    membrane.
    \item We ran the solvated system according to Scheme A, but for 1000 ns.
    \item Using that estimate, we added water to the pores of an initial
    configuration and equilibrated according to Scheme B.
  \end{itemize}
  \item In all cases, the v-rescale thermostat was used with tau-t = 0.1.
\end{itemize}
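Under this interpretation of the Scheme B restraint schedule, only a few
restrained stages are needed before the force constant falls below the release
threshold; the short sketch below simply enumerates the force constants that
are applied.
\begin{verbatim}
# Minimal sketch of the assumed Scheme B restraint schedule: every 50 ps the
# force constant is set to the square root of its previous value until it
# drops below 10 kJ mol^-1 nm^-2, after which the restraints are released.
k = 1.0e6                  # kJ mol^-1 nm^-2
schedule = []
while k >= 10.0:
    schedule.append(k)
    k = k ** 0.5           # 1e6 -> 1e3 -> ~31.6 -> ~5.6 (released)
print(schedule)            # [1000000.0, 1000.0, 31.62...]
\end{verbatim}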
Simulated X-ray diffraction patterns were generated based on atomic
coordinates for a direct comparison with experiment; the experimental
scattering data shown in Figure~\ref{fig:SWAXS} is reproduced from data
collected elsewhere~\cite{feng_scalable_2014}. A minimal sketch of the
simulated diffraction procedure is given after the list below.
\begin{itemize}
  \item All atomic coordinates were represented as Gaussian spheres of
  electron density weighted by each atom's atomic number.
  \item A three dimensional Fourier transform (FT) of the array of electron
  density results in a three dimensional structure factor which represents the
  unit cell in reciprocal space.
  \item We perform an angular average of the structure factor about the z axis
  to generate a 2D cross section close to what one would see experimentally.
  \item We matched experimental 2D WAXS patterns by iterative improvement of
  our choice of initial structure and equilibration procedure.
\end{itemize}
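The sketch below illustrates the core of this calculation. It is a simplified
version, not the production code: the grid size, the Gaussian width, the
assumption of an orthogonal box (the real cell is monoclinic), and the lack of
frame averaging are all simplifications made for illustration.
\begin{verbatim}
# Hypothetical sketch of the simulated diffraction calculation: atoms are
# spread onto a 3D grid as Gaussian blobs weighted by atomic number, the grid
# is Fourier transformed, and |F(q)|^2 gives the 3D structure factor.
import numpy as np

def structure_factor(coords, atomic_numbers, box, n=128, sigma=0.01):
    """Return the q-axes (rad/nm) and |F(q)|^2 on an n^3 grid.

    Assumes an orthogonal box (nm) for brevity; the production calculation
    must handle the monoclinic cell and average over many frames."""
    density, _ = np.histogramdd(coords, bins=(n, n, n),
                                range=[(0, box[0]), (0, box[1]), (0, box[2])],
                                weights=atomic_numbers)
    q = [2 * np.pi * np.fft.fftfreq(n, d=L / n) for L in box]
    QX, QY, QZ = np.meshgrid(*q, indexing='ij')
    # smearing each atom into a Gaussian sphere of width sigma (nm) is
    # equivalent to damping the point-atom transform by a Gaussian in q
    F = np.fft.fftn(density) * np.exp(-0.5 * sigma ** 2
                                      * (QX ** 2 + QY ** 2 + QZ ** 2))
    return q, np.abs(F) ** 2
# a 2D pattern is then obtained by binning the intensity in (q_r, q_z), where
# q_r = sqrt(q_x^2 + q_y^2), i.e. averaging over the azimuthal angle
\end{verbatim}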
\begin{figure}[!ht]
  \centering
  \begin{subfigure}[b]{0.32\textwidth}
    \centering
    \includegraphics[width=\textwidth]{sandwiched.png}
    \caption{}\label{fig:sandwiched}
  \end{subfigure}
  \begin{subfigure}[b]{0.32\textwidth}
    \centering
    \includegraphics[width=\textwidth]{PD.png}
    \caption{}\label{fig:pd}
  \end{subfigure}
  \begin{subfigure}[b]{0.32\textwidth}
    \centering
    \includegraphics[width=\textwidth]{Tshaped.png}
    \caption{}\label{fig:tshaped}
  \end{subfigure}
  \vskip\baselineskip
  \begin{subfigure}[b]{0.475\textwidth}
    \centering
    \includegraphics[width=\textwidth]{sandwichedlayers.png}
    \caption{}\label{fig:sandwichedlayers}
  \end{subfigure}
  \begin{subfigure}[b]{0.475\textwidth}
    \centering
    \includegraphics[width=\textwidth]{offsetlayers.png}
    \caption{}\label{fig:offsetlayers}
  \end{subfigure}
  \caption{(a) Sandwiched benzene dimers stack 3.8 \angstrom~apart.
  (b) Parallel displaced benzene dimers stack 3.4 \angstrom~vertically and 1.6
  \angstrom~horizontally apart. (c) T-shaped benzene dimers stack 5.0
  \angstrom~apart. (d) Two monomer layers stacked in the sandwiched
  configuration. (e) Two monomer layers stacked in the parallel displaced
  configuration.}\label{fig:stacking}
\end{figure}

We calculated ionic conductivity using two different methods for robustness; a
sketch of the Nernst--Einstein estimate follows this list.
\begin{itemize}
  \item The Nernst--Einstein relation relates the DC ionic conductivity to the
  ion diffusivity, $D$, concentration, $C$, ion charge, $q$, the Boltzmann
  constant, $k_B$, and temperature, $T$:
  $$\sigma = \dfrac{q^2CD}{k_B T}$$
  \item Sodium ion diffusion coefficients were found by calculating the slope
  of the linear region of the mean square displacement (MSD) curve, as
  indicated by the Einstein relation~\cite{einstein_investigations_1956}.
  \item We inspected the MSD plot to determine where to begin and end the
  linear fit.
  \item Ion concentration was measured with respect to the entire unit cell.
  \item The second method, termed the ``Collective Diffusion'' model, measures
  the movement of the collective variable, $Q$, which is defined as the amount
  of charge transferred through the system and can be thought of as
  representing the center of charge of the system.
  \item The conductance, $\gamma$, of the system can be calculated as:
  $$ \gamma = \dfrac{D_Q}{k_B T} $$
  Conversion to ionic conductivity is achieved by multiplying by the channel
  length and dividing by the membrane cross sectional area.
  \item $D_Q$ is the diffusion coefficient of the collective variable $Q$. It
  can be calculated using the Einstein relation.
  \item A full derivation of the model can be found
  elsewhere~\cite{liu_collective_2013}.
\end{itemize}
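A minimal sketch of the Nernst--Einstein estimate is given below. It is not
the exact analysis script used here; trajectory unwrapping, the choice of the
linear MSD region, and all variable names are assumptions made for
illustration.
\begin{verbatim}
# Hypothetical sketch: fit the linear region of the sodium MSD to get D, then
# combine D with the ion number concentration via the Nernst-Einstein relation.
import numpy as np

KB = 1.380649e-23      # J/K
E = 1.602176634e-19    # C

def diffusion_coefficient(times, positions, fit_start, fit_end):
    """D (m^2/s) from the slope of the 3D MSD over the chosen linear region.

    times: (n_frames,) in seconds; positions: (n_frames, n_ions, 3) in meters,
    unwrapped across periodic boundaries."""
    disp = positions - positions[0]
    msd = (disp ** 2).sum(axis=2).mean(axis=1)      # average over ions
    sel = (times >= fit_start) & (times <= fit_end)
    slope = np.polyfit(times[sel], msd[sel], 1)[0]
    return slope / 6.0                              # MSD = 6 D t in 3D

def nernst_einstein(D, n_ions, volume, T=300.0, charge=E):
    """Ionic conductivity (S/m) for monovalent ions in volume (m^3)."""
    C = n_ions / volume                             # number concentration
    return charge ** 2 * C * D / (KB * T)
\end{verbatim}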
Using an equilibrated structure, a crosslinking procedure was performed in
order to better parallel synthetic procedures; a schematic sketch of the
iteration follows this list.
\begin{itemize}
  \item The purpose of crosslinking is to maintain macroscopic alignment of
  the crystalline domains, ensuring aligned, hexagonally packed pores.
  \item For that reason, we are not concerned with replicating the kinetics of
  the reaction, but instead emphasize the consistency of the final structure
  with experimental structural data.
  \item The algorithm was developed based on the known reaction mechanism.
  \item Crosslinking of this system is a free radical polymerization (FRP)
  taking place between terminal vinyl groups present on each of the three
  monomer tails.
  \item FRPs require an initiator which bonds to the system, meaning new atoms
  were introduced into the system.
  \item For simplicity, the initiator was simulated as hydrogen and included
  in the simulation as dummy atoms placed at all possible bonding positions.
  \item The crosslinking procedure is carried out iteratively.
  \item During each iteration, bonding carbon atoms are chosen based on a
  distance cut-off.
  \item The topology is updated with new bonds, and dummy hydrogen atoms are
  changed to appropriate hydrogen types.
  \item Head-to-tail addition was the only propagation mode considered due to
  its dominance in the real system.
  \item Direction of attack was not considered because the resultant mixture
  is racemic.
  \item The resulting crosslinked structure has an even distribution of
  crosslinks between monomer tails of the same monomer, monomers stacked on
  top of each other, and monomers in other pores, including across periodic
  boundaries.
  \item The pore spacing shrinks by $\approx$ 1 \angstrom~and stays constant
  under a range of simulation conditions.
\end{itemize}
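The production crosslinking code involves full topology bookkeeping; the
schematic Python sketch below only illustrates the iterative distance-cutoff
selection step described above. The function and variable names, the cutoff
value, the nearest-neighbor choice, and the omission of periodic boundary
handling are all illustrative assumptions rather than the actual
implementation.
\begin{verbatim}
# Schematic sketch (not the production implementation) of one crosslinking
# iteration: bond each active radical to the nearest eligible terminal vinyl
# carbon within a distance cutoff (head-to-tail addition only).
import numpy as np

def crosslink_step(positions, radicals, candidates, cutoff=0.6):
    """positions: (N, 3) coordinates (nm); radicals/candidates: atom indices.
    Returns the list of new bonds formed in this iteration."""
    new_bonds = []
    reacted = set()
    for r in radicals:
        d = np.linalg.norm(positions[candidates] - positions[r], axis=1)
        for j in np.argsort(d):                 # nearest candidates first
            c = candidates[j]
            if d[j] > cutoff:
                break                           # nothing in range; skip radical
            if c not in reacted and c != r:
                new_bonds.append((r, c))        # new C-C bond; in the real code
                reacted.add(c)                  # the topology is updated and a
                break                           # dummy H becomes a real hydrogen
    return new_bonds
\end{verbatim}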
\section*{Results and Discussion}
\subsection*{Determination of Nanoscopic Structural Details}

We will now address the questions raised in the introduction in the order that
they were asked.

To discern the composition of the monomer layers, addressing (1), we ran
simulations of systems created with 4--8 monomers per layer.
\begin{itemize}
  \item Both the sandwiched and parallel displaced configurations were tested.
  \item All systems are stable after 400 ns of simulation.
  \item Table~\ref{table:p2p} shows the pore spacing for all systems tested.
  \item Systems built with 5 monomers in each layer equilibrate to a pore
  spacing that is most consistent with the experimental value of 4.12 nm
  derived from SAXS measurements (Figure~\ref{fig:SAXS}).
  \item The remainder of this discussion will focus on the analysis of systems
  built with 5 monomers per layer.
\end{itemize}

To answer (2), we verified that the system stays partitioned into layers by
plotting the pair correlation function, $g(z)$, calculated between aromatic
rings along the length of the pores (Figure~\ref{fig:zdf}).
\begin{itemize}
  \item Sandwiched configuration layers stack 4.29 \angstrom~apart while
  parallel displaced configuration layers stack 4.37 \angstrom~apart. The
  power spectra used to calculate the reported layer spacings are given in the
  Supplemental Information.
  \item Even though the layer spacings are similar, the sandwiched
  configuration exhibits more defined layers, with its first peak deviating
  from the mean number density by 61 \%.
  \item The parallel displaced configuration deviates from the mean by only 12
  \% in comparison.
\end{itemize}

\begin{figure}[!ht]
  \centering
  \begin{subfigure}{0.45\textwidth}
    \centering
    \includegraphics[width=\textwidth]{zdf5layered.png}
    \caption{}\label{fig:zdf_layered}
  \end{subfigure}
  \begin{subfigure}{0.45\textwidth}
    \centering
    \includegraphics[width=\textwidth]{zdf5offset.png}
    \caption{}\label{fig:zdf_offset}
  \end{subfigure}
  \caption{Pair distribution functions of aromatic carbons for the (a) 5
  monomer per layer, sandwiched and (b) 5 monomer per layer, parallel
  displaced configurations. Clear periodic maxima in the $z$ probability
  density indicate distinct layers. The magnitude of the spikes with respect
  to the average suggests that the 5 monomer per layer, sandwiched
  configuration possesses a higher degree of layer
  partitioning.}\label{fig:zdf}
\end{figure}

\begin{table}[h]
  \centering
  \begin{tabular}{ccc}
  \toprule
   & \multicolumn{2}{c}{Starting Configuration} \\
  \hline
  Monomers per layer & Sandwiched & Parallel Displaced \\
  \midrule
  4 & $3.71 \pm 0.04$ & $3.84 \pm 0.02$ \\
  5 & $4.20 \pm 0.04$ & $4.23 \pm 0.04$ \\
  6 & $4.83 \pm 0.03$ & $4.85 \pm 0.02$ \\
  7 & $4.73 \pm 0.03$ & $4.84 \pm 0.03$ \\
  8 & $5.08 \pm 0.04$ & $5.46 \pm 0.03$ \\
  \bottomrule
  \end{tabular}
  \caption{The pore spacing (given in nm) of the model increases as the number
  of monomers in each layer increases.
  The pore spacing of a system starting in the sandwiched configuration is
  systematically lower than that of one started in a parallel displaced
  configuration. Systems built with 5 monomers per layer equilibrate to pore
  spacings closest to the experimental value of 4.12 nm.}
  \label{table:p2p}
\end{table}
The $z$-direction correlation functions show that layers in our model prefer
to stack further apart than the 3.7 \angstrom~suggested by experiment. We
attempted equilibration with layers stacked greater than 4 \angstrom~apart
and, to our surprise, we observed long-term stability of a qualitatively
different configuration, suggesting that we have found more than one
metastable free energy basin.
\begin{itemize}
  \item Equilibrated systems built according to the 3.7 \angstrom~layer
  spacing implied by R-$\pi$ are characterized by a defined, cylindrical, open
  pore structure.
  \item We will refer to this large set of configurations, with an open pore,
  as the OP (open pore) basin (Figure~\ref{fig:OPbasin}).
  \item Simulations of systems built with layers stacked greater than 4
  \angstrom~apart result in a pore structure characterized by high radial
  disorder, while still maintaining partitioning between hydrophobic and
  hydrophilic regions.
  \item This will be called the CP (closed pore) basin
  (Figure~\ref{fig:CPbasin}).
  \item This LLC membrane may therefore exist in at least two metastable
  states.
  \item The distinct difference in pore structure exhibited by each basin will
  likely lead to different transport mechanisms.
  \item Because the pore structure varies between the two basins,
  understanding which one exists experimentally is necessary in order to
  ensure we are studying the structure which actually dominates.
\end{itemize}

\begin{figure}[!ht]
  \centering
  \begin{subfigure}[b]{0.475\textwidth}
    \centering
    \includegraphics[width=\textwidth]{280K_tramp_close.png}
    \caption{}\label{fig:OPbasin}
  \end{subfigure}
  \begin{subfigure}[b]{0.475\textwidth}
    \centering
    \includegraphics[width=\textwidth]{340K_tramp_close_full.png}
    \caption{}\label{fig:CPbasin}
  \end{subfigure}
  \caption{From a qualitative standpoint, the OP basin (a) is characterized by
  a hollow cylindrical pore, while pores in the CP basin (b) are characterized
  by a higher degree of radial disorder.}\label{fig:basins}
\end{figure}
We answer question (3) by simulating X-ray diffraction patterns produced from
equilibrated MD trajectories.
We leave open the possibility that the experimental structure might be
reminiscent of the parallel displaced or sandwiched configuration in either
the OP or the CP basin.
\begin{itemize}
  \item OP basin systems were built in both the parallel displaced and
  sandwiched configurations with an initial layer spacing of 3.7 \angstrom.
  \item A third system was created by stacking layers in the sandwiched
  configuration 5 \angstrom~apart in order to guide it towards the CP basin.
  \item The three systems were equilibrated according to our procedure with
  NPT simulations of greater than 400 ns.
  \item Simulated diffraction patterns were generated using portions of the
  trajectory after equilibration.
  \item We assume the simulation has equilibrated when the distance between
  pores and the membrane thickness have stopped changing.
  \item Simulated diffraction patterns for all three structures are shown in
  Figure~\ref{fig:xrd}.
\end{itemize}

Simulated diffraction of the disordered pore structure in both the sandwiched
and parallel displaced configurations does not match the experimental pattern.
\begin{itemize}
  \item The disordered pore structure exhibits R-alkanes and R-pores, but
  R-helix, R-$\pi$ and R-spots are not present.
  \item Due to limited resolution, the individual spots of R-pores are
  difficult to make out; full spherical integration of the 3D structure factor
  gives slightly better resolution, and the same information can be extracted
  by measuring the pore spacing as described earlier.
  \item Although the structure's diffraction pattern is very different from
  experiment, its long-term stability suggests that the structure is
  realistic. We will explore this further when addressing (4).
\end{itemize}

Simulated XRD of the sandwiched configuration contains all experimental
features except for R-helix.
\begin{itemize}
  \item R-alkanes and R-pores appear in the expected locations.
  \item R-$\pi$ is also present, intersecting R-alkanes at a lower $q$ value
  than in experiment. The rings prefer to stack $\approx$ 4.1 \angstrom~apart
  as opposed to 3.7 \angstrom.
  \item Most notably, R-spots appears in the expected location, which suggests
  that there is something intrinsic to the packing that gives rise to such
  features.
\end{itemize}

The parallel displaced configuration results in a simulated XRD pattern with
the closest match to experiment.
\begin{itemize}
  \item It produces the only pattern that exhibits all major reflections.
  \item R-alkanes, R-pores and R-$\pi$ appear as they do in the sandwiched
  configuration.
  \item R-spots appears, although with a lower intensity relative to R-alkanes
  when compared to the sandwiched configuration.
  \item R-helix appears, apparently as a result of the parallel displaced
  stacking of the aromatic rings.
\end{itemize}

\begin{figure}[ht]
  \begin{subfigure}{.83\linewidth}
  \centering
    \begin{subfigure}{0.4\linewidth}
      \centering
      \includegraphics[width=\linewidth, trim={2.5cm 0 4cm 2cm}, clip]{WAXS_raw.png}%
      \caption{}~\label{fig:raw_waxs}
    \end{subfigure}%
    \begin{subfigure}{0.4\linewidth}
      \centering
      \includegraphics[width=\linewidth, trim={2cm 0 2.5cm 1.25cm}, clip]{rzplot_offset.png}
      \caption{}~\label{fig:rz_offset}
    \end{subfigure}
    \begin{subfigure}{0.4\linewidth}
      \centering
      \includegraphics[width=\linewidth, trim={2cm 0 2.5cm 1.25cm}, clip]{rzplot_layered.png}
      \caption{}~\label{fig:rz_layered}
    \end{subfigure}%
    \begin{subfigure}{0.4\linewidth}
      \centering
      \includegraphics[width=\linewidth, trim={2cm 0 2.5cm 1.25cm}, clip]{rzplot_disordered.png}
      \caption{}~\label{fig:rz_disordered}
    \end{subfigure}
  \end{subfigure}%
  \begin{subfigure}{0.14\linewidth}
    \centering
    \vspace{-1cm}
    \includegraphics[width=\linewidth]{colorbar_jet.png}
  \end{subfigure}
  \caption{(a) Experimental 2D WAXS data contains five major reflections which
  we aim to match. The remaining three images are diffraction patterns
  simulated from MD trajectories. (b) The parallel displaced configuration
  gives rise to all reflections of interest. (c) The sandwiched configuration
  gives rise to a pattern with all major reflections except R-helix; R-spots
  is strong relative to R-alkanes in comparison to the parallel displaced
  configuration. (d) The disordered pore configuration creates a pattern with
  only R-alkanes and R-pores in common with experiment.}~\label{fig:xrd}
\end{figure}

The R-spots appearing in the simulated XRD patterns of the OP basin
conformations are a result of the way the alkane tails pack together.
\begin{itemize}
  \item Previously, the spots in the diffraction pattern had been explained as
  the product of tilted alkane chains.
  \item We measured the tilt angle of the alkane chains and showed that our
  system equilibrates to an average tilt angle close to zero degrees (See
  Supplemental Information).
  \item To understand the origin of the spots, we determined which atoms give
  rise to the feature.
  \item Since R-spots is present as higher intensity spots within R-alkanes,
  it is likely that the spots arise as a consequence of the tails.
  \item By removing atoms from the trajectory and simulating a diffraction
  pattern, we were able to isolate the cause of the spots to the tails
  (Figure~\ref{fig:tails}).
  \item Since the tails stay nearly flat, we plotted the centroids of the
  tails and measured the angle between each centroid and its nearest neighbors
  with respect to the plane of the membrane (Figure~\ref{fig:centroids}); a
  sketch of this calculation follows this list.
  \item The distribution of these angles is consistent with the location of
  the spots (Figure~\ref{fig:tail_packing}).
  \item The peaks of interest in
  Figures~\ref{fig:offset_tails}~and~\ref{fig:layered_tails} are located at
  $\pm$ 33 $\degree$, which is the same angle at which the highest intensity
  of R-spots is located in the simulated patterns (See Supplemental
  Information for quantitative support).
  \item We integrated the raw experimental 2D WAXS data in the region bounding
  R-alkanes and found the angle at which R-spots reaches its highest intensity
  to be $\pm$ 37 $\degree$, in reasonable agreement with our simulated
  results.
\end{itemize}
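The tail-packing angle analysis can be summarized with the following minimal
Python sketch; the function and variable names are illustrative, and periodic
boundary handling and per-pore bookkeeping are omitted for brevity.
\begin{verbatim}
# Hypothetical sketch: for each alkane-tail centroid, find its nearest
# neighboring centroids and record the angle each neighbor vector makes with
# the membrane (xy) plane.
import numpy as np

def neighbor_angles(centroids, n_neighbors=4):
    """Signed angles (degrees) between nearest-neighbor centroid vectors and
    the xy plane, for an (N, 3) array of tail centroids."""
    angles = []
    for i, c in enumerate(centroids):
        v = centroids - c
        d = np.linalg.norm(v, axis=1)
        d[i] = np.inf                              # exclude self
        for j in np.argsort(d)[:n_neighbors]:
            dz = v[j, 2]
            dr = np.linalg.norm(v[j, :2])
            angles.append(np.degrees(np.arctan2(dz, dr)))
    return np.array(angles)
# histogramming these angles reproduces the distributions in the figure below;
# peaks near +/- 33 degrees correspond to the R-spots maxima
\end{verbatim}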
\begin{figure}
  \centering
  \begin{subfigure}{0.45\linewidth}
    \centering
    \includegraphics[width=\textwidth]{tails_topview.png}
    \caption{}\label{fig:topdown_tails_only}
  \end{subfigure}
  \begin{subfigure}{0.45\linewidth}
    \centering
    \includegraphics[width=\textwidth]{tails_rzplot.png}
    \caption{}\label{fig:tails_rzplot}
  \end{subfigure}
  \caption{(a) All atoms except the carbon atoms making up the tails are
  removed from the trajectory. (b) The simulated diffraction pattern of the
  tail-only trajectory still shows R-spots.}\label{fig:tails}
\end{figure}

\begin{figure}[ht]
  \centering
  \begin{subfigure}{\linewidth}
  \centering
    \begin{subfigure}{0.45\textwidth}
      \centering
      \includegraphics[width=\linewidth]{offset_tail_packing.png}
      \caption{}~\label{fig:offset_tails}
    \end{subfigure}
    \begin{subfigure}{0.45\textwidth}
      \centering
      \includegraphics[width=\linewidth]{offset_angle_v_I.png}
      \caption{}~\label{fig:offset_angle_v_I}
    \end{subfigure}
  \end{subfigure}
  \begin{subfigure}{\linewidth}
  \centering
    \begin{subfigure}{0.45\textwidth}
      \centering
      \includegraphics[width=\linewidth]{angles_traj_layered.png}
      \caption{}~\label{fig:layered_tails}
    \end{subfigure}
    \begin{subfigure}{0.45\textwidth}
      \centering
      \includegraphics[width=\linewidth]{layered_angle_v_I.png}
      \caption{}~\label{fig:layered_angle_v_I}
    \end{subfigure}
  \end{subfigure}
  \caption{The distribution of angles with respect to the xy plane between
  alkane chain tail centroids and nearest neighbor centroids for equilibrated
  parallel displaced (a) and sandwiched (c) configurations. The same peaks are
  visible when the 2D simulated diffraction data is radially integrated in the
  R-alkanes region, shown in (b) and (d)
  respectively.}~\label{fig:tail_packing}
\end{figure}

The disordered basin shares little in common with the ordered basin, but its
long term stability suggests that it can exist under some conditions. We
observed that the CP basin is the dominant configuration at higher
temperatures.
\begin{itemize}
  \item We linearly ramped the temperature of a system in the OP basin from
  280 K to 340 K (just below the experimental isotropic transition
  temperature) over 100 ns.
  \item Visually, there is a distinct change in pore structure from one
  characteristic of the OP basin (Figure~\ref{fig:280K_pore}) to one
  characteristic of the CP basin (Figure~\ref{fig:340K_pore}).
  \item The slope of all order parameters changes between 315 K and 325 K
  (\Cref{fig:p2p_tramp,fig:thickness_tramp,fig:order_tramp}), indicating the
  possibility of an abrupt change in system ordering.
  \item Our 100 ns temperature ramp was likely too fast and caused the system
  to suffer from hysteresis.
\end{itemize}

We cannot immediately classify the OP basin and the CP basin as separate
phases.
\begin{itemize}
  \item To prove the existence of two phases we need evidence of a first order
  phase transition.
  \item A first order phase transition can be identified by a discontinuity of
  some order parameter in response to an external condition such as
  temperature.
  \item We chose three easily measurable order parameters: the distance
  between pores, the membrane thickness, and the ratio of the pore radius to
  the uncertainty in the pore radius.
  \item The pore radius is divided by its uncertainty as a way of quantifying
  the degree to which monomers obstruct the pore region.
\end{itemize}

In an attempt to mitigate hysteresis, we performed slow, stepwise temperature
ramps on a parallel displaced and a sandwiched configuration previously
equilibrated at 300 K.
\begin{itemize}
  \item Every 200 ns, the temperature was raised 5 K until we reached 345 K.
  \item We performed the same procedure with a system equilibrated in the CP
  basin and used it as a benchmark for comparison.
\end{itemize}
\begin{figure}[!ht]
\centering
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{280K_tramp_close.png}
\caption{}\label{fig:280K_pore}
\end{subfigure}
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{340K_tramp_close_full.png}
\caption{}\label{fig:340K_pore}
\end{subfigure}
\vskip\baselineskip
\begin{subfigure}[b]{0.325\textwidth}
\centering
\includegraphics[width=\textwidth]{p2p_tramp.png}
\caption{}\label{fig:p2p_tramp}
\end{subfigure}
\begin{subfigure}[b]{0.325\textwidth}
\centering
\includegraphics[width=\textwidth]{thickness_tramp.png}
\caption{}\label{fig:thickness_tramp}
\end{subfigure}
\begin{subfigure}[b]{0.325\textwidth}
\centering
\includegraphics[width=\textwidth]{order_tramp.png}
\caption{}\label{fig:order_tramp}
\end{subfigure}
\caption{(a) The open pore structure exhibited by a structure equilibrated at 280K is characteristic of the OP Basin. (b) The closed pore structure with a high degree of radial disorder exhibited when the structure in (a) is heated to 340K is characteristic of the CP Basin. (c) A plot of distance between pores vs. temperature changes slope near 325K. (d) A plot of membrane thickness vs. temperature changes slope near 325K. (e) The plot of the ratio of pore radius to its uncertainty changes slope near 315K.}\label{fig:tramp}
\end{figure}
The CP and OP basins are two configurationally metastable basins.
\begin{itemize}
\item There is little change in CP basin properties during the temperature ramp.
\item We observed smooth changes in order parameters as the temperature of the OP Basin system was increased, implying that we cannot claim the existence of two phases (Figure~\ref{fig:phase_transition}).
\item Qualitatively, the pore structure of the OP basin system becomes comparable to one characteristic of the CP basin as temperature is raised (\Cref{fig:BasinA_280K_pore,fig:BasinB_340K_pore}).
\item The OP basin system does not converge to the same order parameter values as the CP basin; however, it is trending towards Basin B values (\Cref{fig:p2p_step,fig:thickness_step,fig:order_step}).
\item To resolve the quantitative discrepancy, we would need a slower temperature ramp.
\item Since there are no abrupt changes in any order parameter along the trajectory, we can conclude that the two basins are not separate phases.
\item The OP basin is the closest match to what is seen experimentally.
\item The CP basin is likely an intermediate between the Col\textsubscript{h} phase and the isotropic phase.
\item The CP basin is present in our simulations at lower temperatures
%MRS: lower temperatures than experiment. Though nobody has really done the experiments.
%MRS: can we conclude anything from some of the in-situ temperature measurements done by Xunda? We talked about them, but I don't know they are referenced here.
%BJC: Not with what we have because it's in-situ SAXS data which only shows the pore spacing
than experiment because our model lacks sufficient $\pi$-$\pi$ interactions necessary to stabilize the system into the OP basin.
\end{itemize}
\begin{figure}[!ht]
\centering
% BJC: I'll get actual pictures of the system here. I'm not sure how
% necessary this figure is though. It could go in the supplemental
% information and I can reference the pictures in the previous figure
% since they show basically the same idea: Ordered vs.
% disordered pore
% Alternatively, I could show two diffraction patterns here from the
% beginning and end temperature
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{280K_tramp_close.png}
\caption{}\label{fig:BasinA_280K_pore}
\end{subfigure}
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{340K_tramp_close_full.png}
\caption{}\label{fig:BasinB_340K_pore}
\end{subfigure}
\vskip\baselineskip
\begin{subfigure}[b]{0.325\textwidth}
\centering
\includegraphics[width=\textwidth]{p2p_layered.png}
\caption{}\label{fig:p2p_step}
\end{subfigure}
\begin{subfigure}[b]{0.325\textwidth}
\centering
\includegraphics[width=\textwidth]{thickness_layered.png}
\caption{}\label{fig:thickness_step}
\end{subfigure}
\begin{subfigure}[b]{0.325\textwidth}
\centering
\includegraphics[width=\textwidth]{order_layered.png}
\caption{}\label{fig:order_step}
\end{subfigure}
\caption{In all cases, blue lines represent the measured value of the order parameter, black lines are average values calculated from equilibrated Basin B systems at each temperature, green shaded regions represent the standard deviation of each of the black line values, and red dashed lines show where the temperature is bumped to the next level. (a) At 280K, the system is in a configuration reminiscent of Basin A. (b) At 335K, the system resembles a Basin B configuration. (c) The pore spacing of Basin A decreases with temperature, approaching the value exhibited by Basin B. (d) The thickness of Basin A increases smoothly with temperature but is far from the Basin B value. Longer equilibrations at each temperature are needed to allow the system to fully expand. (e) The ratio of pore radius to uncertainty for Basin A changes smoothly with temperature, converging to a value below that exhibited by Basin B.}\label{fig:phase_transition}
\end{figure}

\subsection*{Ionic conductivity calculation}
We use the equilibrated offset system in the OP basin to calculate ionic conductivity since its structure is the closest match to experiment.

The model gives reasonable estimates of ionic conductivity when compared to experiment.
\begin{itemize}
\item Calculated values of ionic conductivity obtained using the Nernst-Einstein relation and the Collective Diffusion model are compared in Table~\ref{table:conductivity}.
\item The two methods agree with each other within error, although the uncertainty obtained using the Collective Diffusion model is much higher.
\item Much longer simulations are needed to lower the uncertainty.
% TODO: Run Basin B out longer at same conditions as A
\item Collective diffusion calculations were generated from 500 ns simulations.
\item Our calculated values would benefit from longer simulations.
\item For this reason we will likely only use the Nernst-Einstein relation in future calculations of this type.
\end{itemize}
The CP basin has a higher ionic conductivity than the OP basin.
\begin{itemize}
\item We hypothesize that conductivity is enhanced in Basin B due to a higher sodium ion diffusivity.
\item Transport of sodium is likely facilitated by the homogeneity of Basin B. Sodium ions have fewer nearby sites to move to in Basin A.
\item There is currently no experimental evidence of this trend. Maybe Xunda will find something.
\item In both cases, our calculated values for Basin A are higher than the experimental values, as expected.
\item Some of the discrepancy is likely a result of using an imperfect forcefield.
\item However, the real system, although mostly aligned and straight, has a distribution of azimuthal angles, meaning that the pores have a degree of tortuosity which lowers the effective ionic conductivity of the bulk membrane.
\item The ordering from isotropic to mostly aligned mesophases showed an 85-fold increase in ionic conductivity. We would expect additional gains in a perfectly aligned system.
\end{itemize}
\begin{table}[h]
\centering
\begin{tabular}{ccc}
\toprule
\multicolumn{3}{c}{Calculated Ionic Conductivity \si{\siemens\per\meter}} \\
\hline
Method & Basin A & Basin B \\
\midrule
Nernst-Einstein & \num{1.23e-4} (0.01) & \num{1.76e-4} (0.02) \\
Collective Diffusion & \num{1.40e-4} (0.32) & \num{4.6e-4} (2.4) \\
Experiment & \num{1.33e-5} (0.10) & -- \\
\bottomrule
\end{tabular}
\caption{Calculated ionic conductivities obtained using the Nernst-Einstein relation and the Collective Diffusion model agree within error. Both methods give calculated values of ionic conductivity which are an order of magnitude higher than experimental values.\label{table:conductivity}}
\end{table}

%MRS: the next sections maybe should be supporting?
%MRS: you say ``the distance decreases by 1 A'' but is that true
%over all of the different structures?
%BJC: limiting the discussion to the OP basin in the offset configuration
%MRS: Maybe show the difference before and after crosslinking in X-ray, to show that you can't tell?
%MRS: you don't otherwise discuss clearly what effect crosslinking would have.
\subsection*{Implementation of the crosslinking algorithm}
We applied the crosslinking algorithm to the equilibrated sandwiched structure in the OP basin.
\begin{itemize}
%BJC: working on the following. The following is what I expect
\item There is an even distribution of crosslinks between same monomer tails, between monomers in the same pore, and between monomers in different pores, including across periodic boundaries.
\item We reach 95\% conversion of terminal vinyl groups.
\item The distance between pores shrinks by 1 \angstrom~after the system is crosslinked.
\item Major features are still present in the X-ray diffraction.
\item The ionic conductivity is higher/lower in the crosslinked system.
\end{itemize}
%BJC: The following figure will be replaced with one representative of the new
% data I am collecting above.
\begin{figure}[!ht]
\centering
\begin{subfigure}{0.31\textwidth}
\centering
\includegraphics[width=\textwidth]{p2p_diagram.PNG}
\caption{}\label{fig:p2p_diagram}
\end{subfigure}
\begin{subfigure}{0.31\textwidth}
\centering
\includegraphics[width=\textwidth]{no_xlink_p2p.png}
\caption{}\label{fig:no_xlink_p2p}
\end{subfigure}
\begin{subfigure}{0.31\textwidth}
\centering
\includegraphics[width=\textwidth]{xlink_p2p.png}
\caption{}\label{fig:xlink_p2p}
\end{subfigure}
\caption{(a) The legends of the plots in (b) and (c) refer to the numbers shown. Each numbered circle represents a pore. Distances are measured along each of the lines shown in addition to the distance from pore 1 to pore 4. (b) The positions of individual pores fluctuate in an uncrosslinked system. (c) The positions of individual pores in the crosslinked system are stable relative to the uncrosslinked system.}\label{fig:xlink}
\end{figure}

\section*{Conclusion}
We have used a detailed molecular model of the Col\textsubscript{h} phase formed by NA-GA3C11 in order to study the nanoscopic structure.
\begin{itemize}
\item While there have been efforts to model the formation of various liquid crystalline phases with molecular dynamics, to our knowledge there have been no studies which attempt to examine their structure with the same level of detail presented here.
\item We have confirmed that monomers stay partitioned in layers.
\item We were able to deduce that layers are composed of 5 monomers.
\item We have identified two metastable basins which each consist of a set of similar monomer configurations characterized by the degree of order in the pore region.
\item We verified that the basins are not separate phases.
\item We have explored the effect of two different $\pi$-$\pi$ stacking modes on the equilibrated membrane structure.
\item Simulated diffraction patterns generated from MD trajectories suggest that the offset configuration produces a structure with the closest match to experiment.
\item Even though our model only answers these questions for a system created by NA-GA3C11, it can be adapted to study systems formed by other LCs with little extra effort.
\end{itemize}
\clearpage
\bibliography{llc}
\end{document}
{ "alphanum_fraction": 0.7402419436, "avg_line_length": 56.7786989796, "ext": "tex", "hexsha": "be68f9c21276094a142093183d13148a3d9dc51c", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2021-01-27T17:59:13.000Z", "max_forks_repo_forks_event_min_datetime": "2019-07-06T15:41:53.000Z", "max_forks_repo_head_hexsha": "e94694f298909352d7e9d912625314a1e46aa5b6", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "shirtsgroup/LLC_Membranes", "max_forks_repo_path": "Ben_Manuscripts/metastable_paper/Outline.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "e94694f298909352d7e9d912625314a1e46aa5b6", "max_issues_repo_issues_event_max_datetime": "2019-08-22T22:35:17.000Z", "max_issues_repo_issues_event_min_datetime": "2019-08-22T20:11:46.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "shirtsgroup/LLC_Membranes", "max_issues_repo_path": "Ben_Manuscripts/metastable_paper/Outline.tex", "max_line_length": 650, "max_stars_count": 4, "max_stars_repo_head_hexsha": "e94694f298909352d7e9d912625314a1e46aa5b6", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "shirtsgroup/LLC_Membranes", "max_stars_repo_path": "Ben_Manuscripts/metastable_paper/Outline.tex", "max_stars_repo_stars_event_max_datetime": "2021-08-11T18:57:39.000Z", "max_stars_repo_stars_event_min_datetime": "2019-06-18T15:26:49.000Z", "num_tokens": 22278, "size": 89029 }
% Default to the notebook output style % Inherit from the specified cell style. \definecolor{orange}{cmyk}{0,0.4,0.8,0.2} \definecolor{darkorange}{rgb}{.71,0.21,0.01} \definecolor{darkgreen}{rgb}{.12,.54,.11} \definecolor{myteal}{rgb}{.26, .44, .56} \definecolor{gray}{gray}{0.45} \definecolor{lightgray}{gray}{.95} \definecolor{mediumgray}{gray}{.8} \definecolor{inputbackground}{rgb}{.95, .95, .85} \definecolor{outputbackground}{rgb}{.95, .95, .95} \definecolor{traceback}{rgb}{1, .95, .95} % ansi colors \definecolor{red}{rgb}{.6,0,0} \definecolor{green}{rgb}{0,.65,0} \definecolor{brown}{rgb}{0.6,0.6,0} \definecolor{blue}{rgb}{0,.145,.698} \definecolor{purple}{rgb}{.698,.145,.698} \definecolor{cyan}{rgb}{0,.698,.698} \definecolor{lightgray}{gray}{0.5} % bright ansi colors \definecolor{darkgray}{gray}{0.25} \definecolor{lightred}{rgb}{1.0,0.39,0.28} \definecolor{lightgreen}{rgb}{0.48,0.99,0.0} \definecolor{lightblue}{rgb}{0.53,0.81,0.92} \definecolor{lightpurple}{rgb}{0.87,0.63,0.87} \definecolor{lightcyan}{rgb}{0.5,1.0,0.83} % commands and environments needed by pandoc snippets % extracted from the output of `pandoc -s` \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \newenvironment{Shaded}{}{} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{{#1}}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{{#1}}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{{#1}}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{{#1}}} \newcommand{\RegionMarkerTok}[1]{{#1}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}} \newcommand{\NormalTok}[1]{{#1}} % Define a nice break command that doesn't care if a line doesn't already % exist. 
\def\br{\hspace*{\fill} \\* } % Math Jax compatability definitions \def\gt{>} \def\lt{<} % Document parameters \title{} % Pygments definitions \makeatletter \def\PY@reset{\let\PY@it=\relax \let\PY@bf=\relax% \let\PY@ul=\relax \let\PY@tc=\relax% \let\PY@bc=\relax \let\PY@ff=\relax} \def\PY@tok#1{\csname PY@tok@#1\endcsname} \def\PY@toks#1+{\ifx\relax#1\empty\else% \PY@tok{#1}\expandafter\PY@toks\fi} \def\PY@do#1{\PY@bc{\PY@tc{\PY@ul{% \PY@it{\PY@bf{\PY@ff{#1}}}}}}} \def\PY#1#2{\PY@reset\PY@toks#1+\relax+\PY@do{#2}} \expandafter\def\csname PY@tok@gd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@gu\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.50,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@gt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.27,0.87}{##1}}} \expandafter\def\csname PY@tok@gs\endcsname{\let\PY@bf=\textbf} \expandafter\def\csname PY@tok@gr\endcsname{\def\PY@tc##1{\textcolor[rgb]{1.00,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@cm\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@vg\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@m\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@go\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.53,0.53}{##1}}} \expandafter\def\csname PY@tok@ge\endcsname{\let\PY@it=\textit} \expandafter\def\csname PY@tok@vc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@il\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@cs\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.74,0.48,0.00}{##1}}} \expandafter\def\csname PY@tok@gi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.63,0.00}{##1}}} \expandafter\def\csname PY@tok@gh\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@ni\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.60,0.60,0.60}{##1}}} \expandafter\def\csname PY@tok@nl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.63,0.00}{##1}}} \expandafter\def\csname PY@tok@nn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@no\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@na\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.49,0.56,0.16}{##1}}} \expandafter\def\csname PY@tok@nb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@nc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@nd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}} \expandafter\def\csname PY@tok@ne\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.82,0.25,0.23}{##1}}} \expandafter\def\csname PY@tok@nf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@si\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}} \expandafter\def\csname PY@tok@s2\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@vi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname 
PY@tok@nt\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@nv\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@s1\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@kd\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@sh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sx\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@bp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@c1\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@kc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@c\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@mf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@err\endcsname{\def\PY@bc##1{\setlength{\fboxsep}{0pt}\fcolorbox[rgb]{1.00,0.00,0.00}{1,1,1}{\strut ##1}}} \expandafter\def\csname PY@tok@mb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@ss\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@sr\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}} \expandafter\def\csname PY@tok@mo\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@kn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@mi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@gp\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@o\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@kr\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@s\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@kp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@w\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.73,0.73}{##1}}} \expandafter\def\csname PY@tok@kt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.69,0.00,0.25}{##1}}} \expandafter\def\csname PY@tok@ow\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}} \expandafter\def\csname PY@tok@sb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@k\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@se\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.13}{##1}}} \expandafter\def\csname PY@tok@sd\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \def\PYZbs{\char`\\} \def\PYZus{\char`\_} \def\PYZob{\char`\{} \def\PYZcb{\char`\}} \def\PYZca{\char`\^} \def\PYZam{\char`\&} \def\PYZlt{\char`\<} \def\PYZgt{\char`\>} \def\PYZsh{\char`\#} \def\PYZpc{\char`\%} \def\PYZdl{\char`\$} \def\PYZhy{\char`\-} \def\PYZsq{\char`\'} \def\PYZdq{\char`\"} \def\PYZti{\char`\~} % for 
compatibility with earlier versions \def\PYZat{@} \def\PYZlb{[} \def\PYZrb{]} \makeatother % Exact colors from NB \definecolor{incolor}{rgb}{0.0, 0.0, 0.5} \definecolor{outcolor}{rgb}{0.545, 0.0, 0.0} % Prevent overflowing lines due to hard-to-break entities \sloppy % Setup hyperref package \hypersetup{ breaklinks=true, % so long urls are correctly broken across lines colorlinks=true, urlcolor=blue, linkcolor=darkorange, citecolor=darkgreen, } % Slightly bigger margins than the latex defaults \begin{document} \maketitle \section{Control Flow Statements}\label{control-flow-statements} \subsection{If}\label{if} if some\_condition: \begin{verbatim} algorithm \end{verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}1}]:} \PY{n}{x} \PY{o}{=} \PY{l+m+mi}{12} \PY{k}{if} \PY{n}{x} \PY{o}{\PYZgt{}}\PY{l+m+mi}{10}\PY{p}{:} \PY{k}{print} \PY{l+s}{\PYZdq{}}\PY{l+s}{Hello}\PY{l+s}{\PYZdq{}} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] Hello \end{Verbatim} \subsection{If-else}\label{if-else} if some\_condition: \begin{verbatim} algorithm \end{verbatim} else: \begin{verbatim} algorithm \end{verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}2}]:} \PY{n}{x} \PY{o}{=} \PY{l+m+mi}{12} \PY{k}{if} \PY{n}{x} \PY{o}{\PYZgt{}} \PY{l+m+mi}{10}\PY{p}{:} \PY{k}{print} \PY{l+s}{\PYZdq{}}\PY{l+s}{hello}\PY{l+s}{\PYZdq{}} \PY{k}{else}\PY{p}{:} \PY{k}{print} \PY{l+s}{\PYZdq{}}\PY{l+s}{world}\PY{l+s}{\PYZdq{}} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] hello \end{Verbatim} \subsection{if-elif}\label{if-elif} if some\_condition: \begin{verbatim} algorithm \end{verbatim} elif some\_condition: \begin{verbatim} algorithm \end{verbatim} else: \begin{verbatim} algorithm \end{verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}3}]:} \PY{n}{x} \PY{o}{=} \PY{l+m+mi}{10} \PY{n}{y} \PY{o}{=} \PY{l+m+mi}{12} \PY{k}{if} \PY{n}{x} \PY{o}{\PYZgt{}} \PY{n}{y}\PY{p}{:} \PY{k}{print} \PY{l+s}{\PYZdq{}}\PY{l+s}{x\PYZgt{}y}\PY{l+s}{\PYZdq{}} \PY{k}{elif} \PY{n}{x} \PY{o}{\PYZlt{}} \PY{n}{y}\PY{p}{:} \PY{k}{print} \PY{l+s}{\PYZdq{}}\PY{l+s}{x\PYZlt{}y}\PY{l+s}{\PYZdq{}} \PY{k}{else}\PY{p}{:} \PY{k}{print} \PY{l+s}{\PYZdq{}}\PY{l+s}{x=y}\PY{l+s}{\PYZdq{}} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] x<y \end{Verbatim} if statement inside a if statement or if-elif or if-else are called as nested if statements. 
\begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}4}]:} \PY{n}{x} \PY{o}{=} \PY{l+m+mi}{10} \PY{n}{y} \PY{o}{=} \PY{l+m+mi}{12} \PY{k}{if} \PY{n}{x} \PY{o}{\PYZgt{}} \PY{n}{y}\PY{p}{:} \PY{k}{print} \PY{l+s}{\PYZdq{}}\PY{l+s}{x\PYZgt{}y}\PY{l+s}{\PYZdq{}} \PY{k}{elif} \PY{n}{x} \PY{o}{\PYZlt{}} \PY{n}{y}\PY{p}{:} \PY{k}{print} \PY{l+s}{\PYZdq{}}\PY{l+s}{x\PYZlt{}y}\PY{l+s}{\PYZdq{}} \PY{k}{if} \PY{n}{x}\PY{o}{==}\PY{l+m+mi}{10}\PY{p}{:} \PY{k}{print} \PY{l+s}{\PYZdq{}}\PY{l+s}{x=10}\PY{l+s}{\PYZdq{}} \PY{k}{else}\PY{p}{:} \PY{k}{print} \PY{l+s}{\PYZdq{}}\PY{l+s}{invalid}\PY{l+s}{\PYZdq{}} \PY{k}{else}\PY{p}{:} \PY{k}{print} \PY{l+s}{\PYZdq{}}\PY{l+s}{x=y}\PY{l+s}{\PYZdq{}} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] x<y x=10 \end{Verbatim} \subsection{Loops}\label{loops} \subsubsection{For}\label{for} for variable in something: \begin{verbatim} algorithm \end{verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}5}]:} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{5}\PY{p}{)}\PY{p}{:} \PY{k}{print} \PY{n}{i} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] 0 1 2 3 4 \end{Verbatim} In the above example, i iterates over the 0,1,2,3,4. Every time it takes each value and executes the algorithm inside the loop. It is also possible to iterate over a nested list illustrated below. \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}6}]:} \PY{n}{list\PYZus{}of\PYZus{}lists} \PY{o}{=} \PY{p}{[}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{]}\PY{p}{,} \PY{p}{[}\PY{l+m+mi}{4}\PY{p}{,} \PY{l+m+mi}{5}\PY{p}{,} \PY{l+m+mi}{6}\PY{p}{]}\PY{p}{,} \PY{p}{[}\PY{l+m+mi}{7}\PY{p}{,} \PY{l+m+mi}{8}\PY{p}{,} \PY{l+m+mi}{9}\PY{p}{]}\PY{p}{]} \PY{k}{for} \PY{n}{list1} \PY{o+ow}{in} \PY{n}{list\PYZus{}of\PYZus{}lists}\PY{p}{:} \PY{k}{print} \PY{n}{list1} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] [1, 2, 3] [4, 5, 6] [7, 8, 9] \end{Verbatim} A use case of a nested for loop in this case would be, \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}7}]:} \PY{n}{list\PYZus{}of\PYZus{}lists} \PY{o}{=} \PY{p}{[}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{]}\PY{p}{,} \PY{p}{[}\PY{l+m+mi}{4}\PY{p}{,} \PY{l+m+mi}{5}\PY{p}{,} \PY{l+m+mi}{6}\PY{p}{]}\PY{p}{,} \PY{p}{[}\PY{l+m+mi}{7}\PY{p}{,} \PY{l+m+mi}{8}\PY{p}{,} \PY{l+m+mi}{9}\PY{p}{]}\PY{p}{]} \PY{k}{for} \PY{n}{list1} \PY{o+ow}{in} \PY{n}{list\PYZus{}of\PYZus{}lists}\PY{p}{:} \PY{k}{for} \PY{n}{x} \PY{o+ow}{in} \PY{n}{list1}\PY{p}{:} \PY{k}{print} \PY{n}{x} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] 1 2 3 4 5 6 7 8 9 \end{Verbatim} \subsubsection{While}\label{while} while some\_condition: \begin{verbatim} algorithm \end{verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}8}]:} \PY{n}{i} \PY{o}{=} \PY{l+m+mi}{1} \PY{k}{while} \PY{n}{i} \PY{o}{\PYZlt{}} \PY{l+m+mi}{3}\PY{p}{:} \PY{k}{print}\PY{p}{(}\PY{n}{i} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)} \PY{n}{i} \PY{o}{=} \PY{n}{i}\PY{o}{+}\PY{l+m+mi}{1} \PY{k}{print}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{Bye}\PY{l+s}{\PYZsq{}}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] 1 4 Bye \end{Verbatim} \subsection{Break}\label{break} As the name says. It is used to break out of a loop when a condition becomes true when executing the loop. 
\begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}9}]:} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{100}\PY{p}{)}\PY{p}{:} \PY{k}{print} \PY{n}{i} \PY{k}{if} \PY{n}{i}\PY{o}{\PYZgt{}}\PY{o}{=}\PY{l+m+mi}{7}\PY{p}{:} \PY{k}{break} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] 0 1 2 3 4 5 6 7 \end{Verbatim} \subsection{Continue}\label{continue} This continues the rest of the loop. Sometimes when a condition is satisfied there are chances of the loop getting terminated. This can be avoided using continue statement. \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}10}]:} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{10}\PY{p}{)}\PY{p}{:} \PY{k}{if} \PY{n}{i}\PY{o}{\PYZgt{}}\PY{l+m+mi}{4}\PY{p}{:} \PY{k}{print} \PY{l+s}{\PYZdq{}}\PY{l+s}{The end.}\PY{l+s}{\PYZdq{}} \PY{k}{continue} \PY{k}{elif} \PY{n}{i}\PY{o}{\PYZlt{}}\PY{l+m+mi}{7}\PY{p}{:} \PY{k}{print} \PY{n}{i} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] 0 1 2 3 4 The end. The end. The end. The end. The end. \end{Verbatim} \subsection{List Comprehensions}\label{list-comprehensions} Python makes it simple to generate a required list with a single line of code using list comprehensions. For example If i need to generate multiples of say 27 I write the code using for loop as, \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}11}]:} \PY{n}{res} \PY{o}{=} \PY{p}{[}\PY{p}{]} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{11}\PY{p}{)}\PY{p}{:} \PY{n}{x} \PY{o}{=} \PY{l+m+mi}{27}\PY{o}{*}\PY{n}{i} \PY{n}{res}\PY{o}{.}\PY{n}{append}\PY{p}{(}\PY{n}{x}\PY{p}{)} \PY{k}{print} \PY{n}{res} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] [27, 54, 81, 108, 135, 162, 189, 216, 243, 270] \end{Verbatim} Since you are generating another list altogether and that is what is required, List comprehensions is a more efficient way to solve this problem. \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}12}]:} \PY{p}{[}\PY{l+m+mi}{27}\PY{o}{*}\PY{n}{x} \PY{k}{for} \PY{n}{x} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{11}\PY{p}{)}\PY{p}{]} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{outcolor}Out[{\color{outcolor}12}]:} [27, 54, 81, 108, 135, 162, 189, 216, 243, 270] \end{Verbatim} That's it!. Only remember to enclose it in square brackets Understanding the code, The first bit of the code is always the algorithm and then leave a space and then write the necessary loop. But you might be wondering can nested loops be extended to list comprehensions? Yes you can. 
\begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}13}]:} \PY{p}{[}\PY{l+m+mi}{27}\PY{o}{*}\PY{n}{x} \PY{k}{for} \PY{n}{x} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{20}\PY{p}{)} \PY{k}{if} \PY{n}{x}\PY{o}{\PYZlt{}}\PY{o}{=}\PY{l+m+mi}{10}\PY{p}{]} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{outcolor}Out[{\color{outcolor}13}]:} [27, 54, 81, 108, 135, 162, 189, 216, 243, 270] \end{Verbatim} Let me add one more loop to make you understand better, \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}14}]:} \PY{p}{[}\PY{l+m+mi}{27}\PY{o}{*}\PY{n}{z} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{50}\PY{p}{)} \PY{k}{if} \PY{n}{i}\PY{o}{==}\PY{l+m+mi}{27} \PY{k}{for} \PY{n}{z} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{11}\PY{p}{)}\PY{p}{]} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{outcolor}Out[{\color{outcolor}14}]:} [27, 54, 81, 108, 135, 162, 189, 216, 243, 270] \end{Verbatim} % Add a bibliography block to the postdoc \newpage \input{06} \end{document}
{ "alphanum_fraction": 0.6008533176, "avg_line_length": 40.7005988024, "ext": "tex", "hexsha": "b8535ded103abb8fcb09bc06b5f48214b15bdb41", "lang": "TeX", "max_forks_count": 240, "max_forks_repo_forks_event_max_datetime": "2022-03-24T16:18:18.000Z", "max_forks_repo_forks_event_min_datetime": "2017-09-07T01:01:50.000Z", "max_forks_repo_head_hexsha": "b04b84fb5b82fe7c8b12680149e25ae0d27a0960", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "webdevhub42/Lambda", "max_forks_repo_path": "WEEKS/CD_Sata-Structures/_JUPYTER/Python-Lectures/tex/05.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "b04b84fb5b82fe7c8b12680149e25ae0d27a0960", "max_issues_repo_issues_event_max_datetime": "2015-10-08T16:29:27.000Z", "max_issues_repo_issues_event_min_datetime": "2015-10-08T15:39:14.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "webdevhub42/Lambda", "max_issues_repo_path": "WEEKS/CD_Sata-Structures/_JUPYTER/Python-Lectures/tex/05.tex", "max_line_length": 366, "max_stars_count": 247, "max_stars_repo_head_hexsha": "b04b84fb5b82fe7c8b12680149e25ae0d27a0960", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "webdevhub42/Lambda", "max_stars_repo_path": "WEEKS/CD_Sata-Structures/_JUPYTER/Python-Lectures/tex/05.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-28T17:02:15.000Z", "max_stars_repo_stars_event_min_datetime": "2017-09-14T19:36:07.000Z", "num_tokens": 8826, "size": 20391 }
% Options for packages loaded elsewhere \PassOptionsToPackage{unicode}{hyperref} \PassOptionsToPackage{hyphens}{url} % \documentclass[ ]{article} \usepackage{lmodern} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{textcomp} % provide euro and other symbols \else % if luatex or xetex \usepackage{unicode-math} \defaultfontfeatures{Scale=MatchLowercase} \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1} \fi % Use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} \IfFileExists{microtype.sty}{% use microtype if available \usepackage[]{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \makeatletter \@ifundefined{KOMAClassName}{% if non-KOMA class \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt}} }{% if KOMA class \KOMAoptions{parskip=half}} \makeatother \usepackage{xcolor} \IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available \IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}} \hypersetup{ pdftitle={ASSIGNMENT 4}, pdfauthor={Ninad Patkhedkar}, hidelinks, pdfcreator={LaTeX via pandoc}} \urlstyle{same} % disable monospaced font for URLs \usepackage[margin=1in]{geometry} \usepackage{longtable,booktabs} % Correct order of tables after \paragraph or \subparagraph \usepackage{etoolbox} \makeatletter \patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{} \makeatother % Allow footnotes in longtable head/foot \IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}} \makesavenoteenv{longtable} \usepackage{graphicx,grffile} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} % Set default figure placement to htbp \makeatletter \def\fps@figure{htbp} \makeatother \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{-\maxdimen} % remove section numbering \title{ASSIGNMENT 4} \author{Ninad Patkhedkar} \date{2020-09-26} \begin{document} \maketitle \hypertarget{markdown-basics}{% \section{Markdown Basics}\label{markdown-basics}} \hypertarget{favorite-foods}{% \subsection{Favorite Foods}\label{favorite-foods}} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item Dal fry \item Fish Currey \item Pav Bhaji \end{enumerate} \hypertarget{images}{% \subsection{Images}\label{images}} \begin{figure} \centering \includegraphics{/cloud/project/completed/assignment04/plots/10-all-cases-log.png} \caption{All Cases (Log Plot)} \end{figure} \hypertarget{add-a-quote}{% \subsection{Add a Quote}\label{add-a-quote}} \begin{quote} ``Don't be encumbered by history, just go out and do something wonderful.'' --- Robert Noyce \end{quote} \hypertarget{add-an-equation}{% \subsection{Add an Equation}\label{add-an-equation}} Summation and Parenthesis \[\sum_{i=1}^{n}\left( \frac{X_i}{Y_i} \right)\] 
\hypertarget{add-a-footnote}{% \subsection{Add a Footnote}\label{add-a-footnote}} This is a footnote\footnote{That is a footnote.} \hypertarget{add-citations}{% \subsection{Add Citations}\label{add-citations}} \begin{itemize} \tightlist \item R for Everyone \item Discovering Statistics Using R \end{itemize} \hypertarget{inline-code}{% \section{Inline Code}\label{inline-code}} \hypertarget{ny-times-covid-19-data}{% \subsection{NY Times COVID-19 Data}\label{ny-times-covid-19-data}} \includegraphics{assignment_04_PatkhedkarNinad_files/figure-latex/unnamed-chunk-2-1.pdf} \hypertarget{r4ds-height-vs-earnings}{% \subsection{R4DS Height vs Earnings}\label{r4ds-height-vs-earnings}} \includegraphics{assignment_04_PatkhedkarNinad_files/figure-latex/unnamed-chunk-3-1.pdf} \hypertarget{tables}{% \section{Tables}\label{tables}} \hypertarget{knitr-table-with-kable}{% \subsection{Knitr Table with Kable}\label{knitr-table-with-kable}} \begin{longtable}[]{@{}llllr@{}} \caption{One Ring to Rule Them All}\tabularnewline \toprule name & race & in\_fellowship & ring\_bearer & age\tabularnewline \midrule \endfirsthead \toprule name & race & in\_fellowship & ring\_bearer & age\tabularnewline \midrule \endhead Aragon & Men & TRUE & FALSE & 88\tabularnewline Bilbo & Hobbit & FALSE & TRUE & 129\tabularnewline Frodo & Hobbit & TRUE & TRUE & 51\tabularnewline Galadriel & Elf & FALSE & FALSE & 7000\tabularnewline Sam & Hobbit & TRUE & TRUE & 36\tabularnewline Gandalf & Maia & TRUE & TRUE & 2019\tabularnewline Legolas & Elf & TRUE & FALSE & 2931\tabularnewline Sauron & Maia & FALSE & TRUE & 7052\tabularnewline Gollum & Hobbit & FALSE & TRUE & 589\tabularnewline \bottomrule \end{longtable} \hypertarget{pandoc-table}{% \subsection{Pandoc Table}\label{pandoc-table}} \begin{longtable}[]{@{}lllll@{}} \toprule \begin{minipage}[b]{0.11\columnwidth}\raggedright Name\strut \end{minipage} & \begin{minipage}[b]{0.11\columnwidth}\raggedright Race\strut \end{minipage} & \begin{minipage}[b]{0.25\columnwidth}\raggedright In Fellowship?\strut \end{minipage} & \begin{minipage}[b]{0.24\columnwidth}\raggedright Is Ring Bearer?\strut \end{minipage} & \begin{minipage}[b]{0.07\columnwidth}\raggedright Age\strut \end{minipage}\tabularnewline \midrule \endhead \begin{minipage}[t]{0.11\columnwidth}\raggedright Aragon\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright Men\strut \end{minipage} & \begin{minipage}[t]{0.25\columnwidth}\raggedright Yes\strut \end{minipage} & \begin{minipage}[t]{0.24\columnwidth}\raggedright No\strut \end{minipage} & \begin{minipage}[t]{0.07\columnwidth}\raggedright 88\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.11\columnwidth}\raggedright Bilbo\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright Hobbit\strut \end{minipage} & \begin{minipage}[t]{0.25\columnwidth}\raggedright No\strut \end{minipage} & \begin{minipage}[t]{0.24\columnwidth}\raggedright Yes\strut \end{minipage} & \begin{minipage}[t]{0.07\columnwidth}\raggedright 129\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.11\columnwidth}\raggedright Frodo\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright Hobbit\strut \end{minipage} & \begin{minipage}[t]{0.25\columnwidth}\raggedright Yes\strut \end{minipage} & \begin{minipage}[t]{0.24\columnwidth}\raggedright Yes\strut \end{minipage} & \begin{minipage}[t]{0.07\columnwidth}\raggedright 51\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.11\columnwidth}\raggedright Sam\strut \end{minipage} & 
\begin{minipage}[t]{0.11\columnwidth}\raggedright Hobbit\strut \end{minipage} & \begin{minipage}[t]{0.25\columnwidth}\raggedright Yes\strut \end{minipage} & \begin{minipage}[t]{0.24\columnwidth}\raggedright Yes\strut \end{minipage} & \begin{minipage}[t]{0.07\columnwidth}\raggedright 36\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.11\columnwidth}\raggedright Sauron\strut \end{minipage} & \begin{minipage}[t]{0.11\columnwidth}\raggedright Maia\strut \end{minipage} & \begin{minipage}[t]{0.25\columnwidth}\raggedright No\strut \end{minipage} & \begin{minipage}[t]{0.24\columnwidth}\raggedright Yes\strut \end{minipage} & \begin{minipage}[t]{0.07\columnwidth}\raggedright 7052\strut \end{minipage}\tabularnewline \bottomrule \end{longtable} \hypertarget{references}{% \section{References}\label{references}} \end{document}
{ "alphanum_fraction": 0.7632238656, "avg_line_length": 31.1640625, "ext": "tex", "hexsha": "85b54c3729504373481ef8918df26f0494179259", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "29d0ca5728a6c918033e2d6ebde94556aa30df25", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ninadcpa/dsc520", "max_forks_repo_path": "completed/assignment04/assignment_04_PatkhedkarNinad.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "29d0ca5728a6c918033e2d6ebde94556aa30df25", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ninadcpa/dsc520", "max_issues_repo_path": "completed/assignment04/assignment_04_PatkhedkarNinad.tex", "max_line_length": 88, "max_stars_count": null, "max_stars_repo_head_hexsha": "29d0ca5728a6c918033e2d6ebde94556aa30df25", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ninadcpa/dsc520", "max_stars_repo_path": "completed/assignment04/assignment_04_PatkhedkarNinad.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2846, "size": 7978 }
\begin{figure}
\subcaptionbox{Data center}
{\includegraphics[width=.49\columnwidth]{figures/compilation-times-dc.png}}
\subcaptionbox{Backbone network}
{\includegraphics[width=.49\columnwidth]{figures/compilation-times-backbone.png}} \\
\caption{Compilation time. \label{fig:compilation-times}}
\vspace{-1em}
\end{figure}

\section{Evaluation}
\label{sec:evaluation}

We apply \sysname to real policies for backbone and data center networks. Our main goals are to evaluate whether its front-end is expressive enough for real-world policies and how long the compiler takes to generate router configurations.

\subsection{Networks studied}

We obtained the routing policies for the backbone network and for the data centers of a large cloud provider. Multiple data centers share this policy. The backbone network connects to the data centers and also has many external BGP neighbors. The high-level policies of these networks are captured in an English document which guides operators when writing configuration templates for data center routers or actual configurations for the backbone network (where templates are not used because the network has less regular structure).

The networks have the type of policies that we outlined earlier (\S\ref{sec:motivation}). The backbone network classifies external neighbors into several different categories and prefers paths through them in order. It does not want to provide transit among certain types of neighbors. For some neighbors, it prefers some links over the others. It supports communities based on which it will not announce certain routes externally or announce them only within a geographic region (e.g., West Coast of the USA). Finally, it has many filters, e.g., to prevent bogons (private address space) from external neighbors, prevent customers from providing transit to other large networks, prevent traversing providers through peers, etc.

All routers in the datacenter network run BGP using a private AS number and peer with each other and with the backbone network over eBGP. The routers aggregate prefixes when announcing them to the backbone network, they keep some prefixes internal, and attach communities for some other prefixes that should not traverse beyond the geographic region. They also have policies by which some prefixes should not be announced beyond a certain tier in the datacenter hierarchy.

\begin{figure}
\subcaptionbox{Data center}
{\includegraphics[width=.49\columnwidth]{figures/config-compression-dc.png}}
\subcaptionbox{Backbone network}
{\includegraphics[width=.49\columnwidth]{figures/config-compression-backbone.png}} \\
\caption{Configuration minimization. \label{fig:config-min}}
\vspace{-1em}
\end{figure}

\subsection{Expressiveness}

We found that we could translate all network policies to \sysname. We verified with the operators that our translation preserved intended semantics.\footnote{Not intended as a scientific test, but we also asked the two operators if they would find it easy to express their policies in \sysname. The data center operator said that he found the language intuitive. The backbone operator said that formalizing the policy in \sysname seemed about as easy or difficult as formalizing it in RPSL~\cite{RFC2622}, but he appreciated that he would have to do it only once for the whole network (not per-router) and did not have to manually compute various local preferences, import-export filters, and MEDs.} We found that the data center policies were correctly translated.
For the backbone network, the operator mentioned an additional policy that was not present in the English document, which we added later. Not counting the lines for various definitions like prefix and customer groups or for prefix ownership constraints, which we cannot reveal because of confidentiality concerns, the \sysname policies were 43 lines for the backbone network and 31 lines for the data center networks.

\subsection{Compilation time}

%\begin{figure}[t!]
%  \centering
%  \begin{minipage}[b]{0.45\linewidth}
%    \includegraphics[width=1.1\columnwidth]{figures/compilation-times-dc.png}
%  \end{minipage}
%  \quad
%  \begin{minipage}[b]{0.45\linewidth}
%    \includegraphics[width=1.1\columnwidth]{figures/compilation-times-backbone.png}
%  \end{minipage}
%  \caption{Compilation times.}
%  \label{fig:compilation-times}
%\end{figure}

%\begin{figure}[t!]
%  \centering
%  \begin{minipage}[b]{0.45\linewidth}
%    \includegraphics[width=1.1\columnwidth]{figures/config-compression-dc.png}
%  \end{minipage}
%  \quad
%  \begin{minipage}[b]{0.45\linewidth}
%    \includegraphics[width=1.1\columnwidth]{figures/config-compression-backbone.png}
%  \end{minipage}
%  \caption{Configuration minimization.}
%  \label{fig:config-minimization}
%\end{figure}

We study the compilation time for both policies as a function of network size. Even though the networks we study have a fixed topology and size, we can explore the impact of size because our converted policies are network-wide and the compiler takes topology itself as an input. For the data center network, we build and provide as input fat tree~\cite{fattree} topologies of different sizes, assign a /24 prefix to each ToR switch, and randomly map prefixes to each type of prefix group with a distinct routing policy. We take this approach to smoothly explore different sizes.
%There is a parameterized way to build fat trees~\cite{fattree}, which does not exist for our concrete data center topologies. For a given size, our reported results match those for the concrete topologies.
For the backbone network, the internal topology does not matter since all routers connect in a full iBGP mesh. We explore different mesh sizes and randomly map neighboring networks to routers. Even though each border router connects to many external peers, we count only the mesh size. All experiments are run on an 8-core, 3.6 GHz Intel Xeon processor running Windows 7.
%
Figure~\ref{fig:compilation-times} shows the compilation times for data centers (a) and backbone networks (b) of different sizes. For both policies, we measure the mean compilation time per prefix since the compiler operates on each prefix in parallel. At their largest sizes, the per-prefix compilation time is roughly 10 seconds for the data center network and 45 seconds for the backbone network.
%From the break down of the time by compilation phase, we see that no single compilation phase dominates the running time of the compiler. However, construction and minimization of the product graph take the most time.
Total compilation for the largest data center is less than 9 minutes. Unlike the data center policy, the number of prefixes for the backbone policy remains relatively fixed as the topology size increases. Compilation for the largest backbone network takes less than 3 minutes total. The inclusion of more preferences in the backbone policy increases the size of the PGIR, which leads to PGIR construction and minimization taking proportionally more time.
%
\subsection{Configuration size}

Figure~\ref{fig:config-min} shows the size of the compiled ABGP policies as a function of the topology size. The naive translation of PGIR to ABGP outlined in \S\ref{sec:compilation} generates extremely large ABGP policies by default. To offset this, the compiler performs ABGP configuration minimization both during and after the PGIR-to-ABGP translation phase.
%Such minimization is useful for limiting the computational expense of matching routes on BGP routers, reducing the number of forwarding entries in routers in certain cases, and making configurations more readable for humans.
Minimization is highly effective for both the data center and backbone policies. In all cases, minimized policies are a small fraction of the size of their non-minimized counterparts.
%for i in *.set; do grep bgp $i | wc; done
However, even minimized configurations are hundreds or thousands of lines per router. For the backbone network, the size of \sysname configurations is roughly similar to the BGP components of actual router configurations, though qualitative differences exist (see below). We did not have actual configurations for the data center network; they are dynamically generated from templates.

\subsection{Propane vs. operator configurations}

Finally, we comment briefly on how \sysname-generated configurations differ from configurations or templates generated by operators.
%
In some cases, \sysname configurations are similar. For example, preferences among neighboring ASes are implemented with a community value to tag incoming routes according to preference, which is then used at other border routers to influence decisions. In other cases, the \sysname configurations are different, relying on a different BGP mechanism to achieve the same result. Some key differences that we observed were: $i)$ operators used the no-export community to prevent routes from leaking beyond a certain tier of the datacenter, while \sysname selectively imported the route only below the tier;
%\sysname could use a similar implementation mechanism in the future as an optimization.
$ii)$ operators prevented unneeded propagation of more-specific route announcements from a neighboring AS based on their out-of-band knowledge about the topology, whereas \sysname propagated these advertisements; and $iii)$ operators used a layer of indirection for community values, using community groups and re-writing values, to implement certain policies in a more maintainable manner, whereas \sysname uses flat communities. We are currently investigating whether such differences matter to operators (e.g., if they want to read \sysname configurations) and, if necessary, how to reduce them.
{ "alphanum_fraction": 0.801578354, "avg_line_length": 88.7, "ext": "tex", "hexsha": "9d1b476ef5cb07fec2577bdc2007ac40727183ef", "lang": "TeX", "max_forks_count": 17, "max_forks_repo_forks_event_max_datetime": "2021-04-26T08:07:58.000Z", "max_forks_repo_forks_event_min_datetime": "2016-06-15T18:31:35.000Z", "max_forks_repo_head_hexsha": "3a7bc9b2da0da0c3a3d3ea10db6c8d9bdef25d86", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "rabeckett/bgpc", "max_forks_repo_path": "paper/propane/v1/evaluation.tex", "max_issues_count": 5, "max_issues_repo_head_hexsha": "3a7bc9b2da0da0c3a3d3ea10db6c8d9bdef25d86", "max_issues_repo_issues_event_max_datetime": "2021-05-31T19:51:01.000Z", "max_issues_repo_issues_event_min_datetime": "2016-06-19T19:52:07.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "rabeckett/bgpc", "max_issues_repo_path": "paper/propane/v1/evaluation.tex", "max_line_length": 899, "max_stars_count": 76, "max_stars_repo_head_hexsha": "3a7bc9b2da0da0c3a3d3ea10db6c8d9bdef25d86", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "rabeckett/bgpc", "max_stars_repo_path": "paper/propane/v1/evaluation.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-22T12:50:36.000Z", "max_stars_repo_stars_event_min_datetime": "2016-08-27T05:51:39.000Z", "num_tokens": 2070, "size": 9757 }
\section{Database Design} MongoDB is used for \serviceName, using MongoDB Atlas as the database provider. The database currently looks like this: \begin{figure}[ht] \centering \includegraphics[scale=0.5]{twine-erd.png} \caption{\serviceName ERD} \label{} \end{figure} \section{User Journey} The User Journey will look like this: \begin{figure}[ht] \centering \includegraphics[scale=0.2]{twine-user-journey.png} \caption{\serviceName user model} \label{find use} \end{figure} \section{Data Interaction} \serviceName will read and write to a MongoDB database via an API written in ExpressJS. The API will communicate with a ReactJS frontend.
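As a rough sketch of this interaction (the collection name, route paths, and connection handling below are illustrative placeholders rather than details taken from the actual \serviceName codebase), an ExpressJS endpoint backed by MongoDB Atlas might look like the following:

\begin{verbatim}
// Illustrative sketch only: an Express API that reads from and writes to
// MongoDB Atlas. "users", "/api/users", and MONGO_URI are placeholder names.
import express from "express";
import { MongoClient } from "mongodb";

const app = express();
app.use(express.json());
const client = new MongoClient(process.env.MONGO_URI ?? "");

// Read: return all documents from a hypothetical "users" collection.
app.get("/api/users", async (_req, res) => {
  const users = await client.db().collection("users").find().toArray();
  res.json(users);
});

// Write: insert a document sent by the ReactJS frontend.
app.post("/api/users", async (req, res) => {
  const result = await client.db().collection("users").insertOne(req.body);
  res.status(201).json({ insertedId: result.insertedId });
});

client.connect().then(() => app.listen(3000));
\end{verbatim}

The ReactJS frontend would then call these routes over HTTP rather than talking to the database directly.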
{ "alphanum_fraction": 0.7437037037, "avg_line_length": 28.125, "ext": "tex", "hexsha": "e49b9a588322ba7871cd5ba6fb391ca1417a8565", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "669c673b2dfb24b7f29cc01bc0adbc4ef1955dc6", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "steevejoseph/twine", "max_forks_repo_path": "SDD/chapters/4-DataDesign.tex", "max_issues_count": 11, "max_issues_repo_head_hexsha": "669c673b2dfb24b7f29cc01bc0adbc4ef1955dc6", "max_issues_repo_issues_event_max_datetime": "2021-12-29T14:40:23.000Z", "max_issues_repo_issues_event_min_datetime": "2021-12-21T18:11:37.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "steevejoseph/twine", "max_issues_repo_path": "SDD/chapters/4-DataDesign.tex", "max_line_length": 112, "max_stars_count": null, "max_stars_repo_head_hexsha": "669c673b2dfb24b7f29cc01bc0adbc4ef1955dc6", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "steevejoseph/twine", "max_stars_repo_path": "SDD/chapters/4-DataDesign.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 174, "size": 675 }
\section{Group cohomology}

Today we will connect the two topics of 2-cocycles and group extensions and finally define the notion of group cohomology.

% \begin{mdframed}
\begin{mdframed}
\adjustbox{scale=1,center}{%
\begin{tikzcd}
&\mbox{Adding 2-digit numbers} \ar[ddddl, leftrightarrow] \ar[ddddr, leftrightarrow, end anchor={[xshift=0ex]}] & \\\\\\\\
\mbox{2-cocycle condition}\ar[rr, dashed, leftrightarrow]& & \text{Group extensions}
\end{tikzcd}
}
\end{mdframed}

In the case of two digit numbers, this correspondence looks as follows.

\begin{mdframed}
\adjustbox{scale=0.73,center}{%
\begin{tikzcd}
&\bbz/100 \mbox{ under standard addition} \ar[ddddl, leftrightarrow] \ar[ddddr, leftrightarrow, end anchor={[xshift=0ex]}] & \\\\\\\\
\mbox{carry: }\bbz/10 \times \bbz/10 \rightarrow \bbz/10\ar[rr, dashed, leftrightarrow]& & {0 \rightarrow \bbz/10 \rightarrow \bbz/100 \rightarrow \bbz/10 \rightarrow 0}
\end{tikzcd}
}
\end{mdframed}

But there is nothing special about $\bbz/10$ or $\bbz/100$ and all our proofs and correspondences can be generalized to arbitrary group extensions.

\begin{mdframed}
\adjustbox{scale=0.88,center}{%
\begin{tikzcd}
& G = \set{\tens{a}\units{b} : a \in H, b \in K} \ar[ddddl, leftrightarrow] \ar[ddddr, leftrightarrow, end anchor={[xshift=0ex]}] & \\\\\\\\
c:K \times K \rightarrow H \ar[rr, dashed, leftrightarrow]& & {0 \rightarrow H \rightarrow (G,+_c) \rightarrow K \rightarrow 0}
\end{tikzcd}
}
\end{mdframed}
% \end{mdframed}

% \subsection{Warmup: Commutative diagrams}
% Algebraists love commutative diagrams.
% Commutative diagrams simplify a lot of complex arguments and allow us to ``visualize'' how elements move around but
%
% The following \emph{commutative diagram} represents the equation $i_2 = \varphi \circ i_1$.
% \begin{equation*}
%     \begin{tikzcd}
%         & G_1 \ar[dd, "\varphi"] \\
%         H \ar [ru, "i_1"] \ar[dr, "i_2", swap]& \\
%         & G_2
%     \end{tikzcd}
% \end{equation*}
%
% \begin{qbox}[Practice problems]
% For the following commutative diagram, find the homomorphism (if possible)
% \begin{equation*}
%     \begin{tikzcd}
%         & \bbz/100 \ar[dd, "\varphi"] \\
%         \bbz/10 \ar [ru, "i_1"] \ar[dr, "i_2", swap]& \\
%         & \bbz/100
%     \end{tikzcd}
% \end{equation*}
% \begin{enumerate}
%     \item $i_1 : $
% \end{enumerate}
% \end{qbox}

\newpage
\subsection{Maps between extensions}

We will fix two abelian groups $H$ and $K$.
Let $G_c$ and $G_d$ be two group extensions of $H$ and $K$, given by the 2-cocycles $c: K \times K \rightarrow H$ and $d:K \times K \rightarrow H$.
This means that in $G_c$ and $G_d$ the additions are given by
\begin{align}
\label{eq:groupAdditionGroups}
\begin{split}
\tens{a_1}\units{b_1} +_c \tens{a_2}\units{b_2}
&=
\tens{a_1 + a_2 + c(b_1, b_2)}\units{b_1 + b_2}
\\
\tens{a_1}\units{b_1} +_d \tens{a_2}\units{b_2}
&=
\tens{a_1 + a_2 + d(b_1, b_2)}\units{b_1 + b_2}
\end{split}
\end{align}
where $a_1$, $a_2 \in H$ and $b_1$, $b_2 \in K$.
And there are short exact sequences
\begin{equation*}
\begin{tikzcd}
0 \ar[r] & H \ar[r,"i_c"] & (G_c,+_c) \ar[r, "p_c"] & K \ar[r] & 0,
\\
0 \ar[r] & H \ar[r,"i_d"] & (G_d,+_d) \ar[r, "p_d"] & K \ar[r] & 0.
\end{tikzcd}
\end{equation*}
% Set $S_{100}$ be the set of two digit numbers and let $c: \bbz/10 \times \bbz/10 \rightarrow \bbz/10$ be normalized symmetric 2-cocycle.
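For concreteness (this is just a restatement of the two-digit example from the diagrams above), take $H = K = \bbz/10$, let $c$ be the carry function, and let $d$ be the zero function; then $G_c = \bbz/100$ with its standard addition, while $G_d = \bbz/10 \times \bbz/10$ with componentwise addition. For instance, in $G_c$,
\begin{equation*}
\tens{7}\units{5} +_c \tens{2}\units{6}
=
\tens{7 + 2 + c(5, 6)}\units{5 + 6}
=
\tens{0}\units{1},
\end{equation*}
since $c(5,6) = 1$; this is just the usual statement that $75 + 26 = 101 \equiv 1 \pmod{100}$.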
% Set $S_{100}$ be the set of two-digit numbers and let $c: \bbz/10 \times \bbz/10 \rightarrow \bbz/10$ be a normalized symmetric 2-cocycle.
% Denote by $(S_{100}, +_c)$ the abelian group with addition given by
% \begin{equation*}
% \tens{a_1}\units{b_1} + \tens{a_2}\units{b_2}
% =
% \tens{a_1 + a_2 + c(b_1, b_2)}\units{b_1 + b_2}
% \end{equation*}
% Hence $(S_{100}, +_c)$ sits in a short exact sequence
% \begin{equation*}
% \begin{tikzcd}
% 0 \ar[r] & H \ar[r,"i"] & (S_{100}, +_c) \ar[r, "p"] & K \ar[r] & 0
% \end{tikzcd}
% \end{equation*}
%
% A group homomorphism between $(S_{100}, +_c)$ and $(S_{100}, +_d)$ is a map $\varphi: (S_{100}, +_c) \rightarrow (S_{100}, +_d)$ that satisfies
% \begin{align*}
% \varphi([a_1][b_1] +_c [a_2][b_2])
% &=
% \varphi([a_1][b_1]) +_d \varphi([a_2][b_2])
% \end{align*}
% There is nothing more we can do here as we do not know anything about the right hand side. So we need to put more restrictions on what group homomorphisms are allowed.

\begin{definition}
A \emph{morphism between extensions} is a group homomorphism $\varphi: G_c \rightarrow G_d$ which satisfies the following properties:
\begin{enumerate}
\item $\varphi$ restricted to $H$ is the identity map,
\item the map induced by $\varphi$ on $K$ is the identity map.
\end{enumerate}
% If further $\varphi$ is bijective (=one-to-one and onto), we say that $\varphi$ is an \emph{isomorphism}.
In the language of short exact sequences, this is written as
\begin{equation*}
\begin{tikzcd}
0 \ar[r] & H \ar[r,"i_c"] \ar[d,"\id_H"] & G_c \ar[d,"\varphi"] \ar[r, "p_c"] & K \ar[r] \ar[d,"\id_K"]& 0 \\
0 \ar[r] & H \ar[r,"i_d"] & G_d \ar[r, "p_d"] & K \ar[r] & 0
\end{tikzcd}
\end{equation*}
\end{definition}

\begin{qbox}
What are all the group homomorphisms $\bbz/100 \rightarrow \bbz/100$?
Of these, which group homomorphisms are also morphisms from the standard extension $0 \rightarrow \bbz/10 \rightarrow \bbz/100 \rightarrow \bbz/10 \rightarrow 0$ to itself?
\end{qbox}

\begin{qbox}
What are all the group homomorphisms $\bbz/10 \times \bbz/10 \rightarrow \bbz/10 \times \bbz/10$?
Of these, which group homomorphisms are also morphisms from the extension $0 \rightarrow \bbz/10 \rightarrow \bbz/10 \times \bbz/10 \rightarrow \bbz/10 \rightarrow 0$ to itself?
\end{qbox}

\begin{qbox}
For $a \in H$ and $b \in K$, show that $\varphi(\tens{a}\units{0}) = \tens{a}\units{0}$ and $\varphi(\tens{0}\units{b}) = \tens{a'}\units{b}$ for some $a' \in H$.
\end{qbox}

For each $b \in K$, let $\alpha(b)$ be the element in $H$ such that $\varphi(\tens{0}\units{b}) = \tens{\alpha(b)}\units{b}$, so that $\alpha$ is a function (not necessarily a group homomorphism) $K \rightarrow H$.

\begin{qbox}
For $a \in H$ and $b \in K$, show that $\varphi(\tens{a}\units{b}) = \tens{a + \alpha(b)}\units{b}$.
\end{qbox}

\begin{qbox}
Show that every morphism between extensions $G_c$ and $G_d$ is bijective.
\end{qbox}

As we did with the group axioms, we want to rewrite what a group homomorphism means in terms of the 2-cocycles $c$ and $d$.
The group homomorphism $\varphi: (G_c,+_c) \rightarrow (G_d,+_d)$ satisfies the identity
\begin{align}
\label{eq:groupHom}
\varphi(\tens{a_1}\units{b_1} +_c \tens{a_2}\units{b_2})
=
\varphi(\tens{a_1}\units{b_1}) +_d \varphi(\tens{a_2}\units{b_2})
\end{align}
% And the additions $+_c$ and $+_d$ are given by the identities in Equation \eqref{eq:groupAdditionGroups}.

\begin{qbox}
\label{q:2coboundaryIdentity}
Expand the identity \eqref{eq:groupHom} using Equation \eqref{eq:groupAdditionGroups} and find a new identity involving the functions $c$, $d$, and $\alpha$.
\end{qbox}

\begin{definition}
A \emph{normalized 2-coboundary} is a map $e: K \times K \rightarrow H$ such that
\begin{equation*}
e(b_1, b_2) = \alpha(b_1 + b_2) - \alpha(b_1) - \alpha(b_2)
\end{equation*}
for some function $\alpha: K \rightarrow H$ with $\alpha(0) = 0$.
\end{definition}

\begin{qbox}
Check that the identity in Q.\ref{q:2coboundaryIdentity} is saying that $c - d$ is a normalized 2-coboundary.
\end{qbox}

\begin{qbox}
Show that a normalized 2-coboundary is also a normalized, symmetric, 2-cocycle.
\end{qbox}

\newpage
\subsection{Group cohomology}

\begin{qbox}
Show that the set of normalized, symmetric, 2-cocycles $c: K \times K \rightarrow H$ forms a group under addition.
This group is denoted $\calz^2(K;H)$.
\end{qbox}

\begin{qbox}
Show that the set of normalized 2-coboundaries $c: K \times K \rightarrow H$ forms a group under addition.
This group is denoted $\calb^2(K; H)$.
\end{qbox}

\begin{qbox}
Show that $\calb^2(K; H)$ is a subgroup of $\calz^2(K;H)$.
\end{qbox}

\begin{definition}
The second cohomology group of $K$ with coefficients in $H$ is defined as
\begin{align*}
H^2(K;H) := \calz^2(K;H) / \calb^2(K; H).
\end{align*}
\end{definition}

We say that two extensions are \emph{equivalent} if there is a morphism between them.
\begin{qbox}
Show that this defines an equivalence relation on the set of group extensions.
\end{qbox}

Denote by $\ext^1(K;H)$ the set of equivalence classes of extensions under this equivalence relation.

\begin{qbox}
Prove that there is a 1-1 correspondence between $H^2(K;H)$ and $\ext^1(K;H)$.
\end{qbox}
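To tie this back to the opening example, take $H = K = \bbz/10$.
The function $\alpha(b) = b^2$ (which satisfies $\alpha(0) = 0$) produces the normalized 2-coboundary
\begin{equation*}
e(b_1, b_2) = (b_1 + b_2)^2 - b_1^2 - b_2^2 = 2 b_1 b_2,
\end{equation*}
so the cocycle $2 b_1 b_2$ represents the zero class in $H^2(\bbz/10; \bbz/10)$.
The carrying cocycle, on the other hand, cannot be a normalized 2-coboundary: otherwise the extensions $\bbz/100$ and $\bbz/10 \times \bbz/10$ would be equivalent and hence isomorphic as groups, which they are not, since only the former is cyclic.
So the carrying cocycle represents a non-zero class and $H^2(\bbz/10; \bbz/10) \neq 0$.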
{ "alphanum_fraction": 0.6503384192, "avg_line_length": 34.7290836653, "ext": "tex", "hexsha": "3f3ca939aa09e1d5d7335ff8910c58c918122a93", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "14a7f5f0e2ae64f3ceaa602b50fa80269e3800a5", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "apurvnakade/mc2019-group-cohomology", "max_forks_repo_path": "03.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "14a7f5f0e2ae64f3ceaa602b50fa80269e3800a5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "apurvnakade/mc2019-group-cohomology", "max_issues_repo_path": "03.tex", "max_line_length": 196, "max_stars_count": null, "max_stars_repo_head_hexsha": "14a7f5f0e2ae64f3ceaa602b50fa80269e3800a5", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "apurvnakade/mc2019-group-cohomology", "max_stars_repo_path": "03.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3242, "size": 8717 }
This section covers basic performance tests, i.e. how specific algorithms scale with grid resolution and with polynomial degree, on a \emph{single compute node}.

% --------------------------------------------------------------------------------
\section{Solver Performance - Poisson/Stokes problems}
\label{sec:SolverPerformancePoisson}
% --------------------------------------------------------------------------------

Two groups of solvers are compared:
\begin{itemize}
\item
Direct solvers: direct sparse methods, such as PARDISO\footnote{
\url{http://www.pardiso-project.org/}} and MUMPS\footnote{
\url{http://mumps.enseeiht.fr/}}.
Their performance also serves as a comparative baseline.
%\item
%Iterative Algorithms without preconditioning, resp. low-impact, generic preconditioning:
%This includes solver libraries such as \code{monkey} (BoSSS-specific, supports GPU)
%as well as
%HYPRE\footnote{
%\url{https://computation.llnl.gov/projects/hypre-scalable-linear-solvers-multigrid-methods}}
%(native library, used via wrappers).
\item
Iterative algorithms with \ac{dg}-specific preconditioners, such as aggregation multigrid and multi-level additive Schwarz.
\end{itemize}

The scaling and performance are profiled in the subsequent sections.
For performance profiling we stick to our workhorse: the kcycle-Schwarz algorithm (with an optional p-two-grid as block solver).
The performance profile of the Krylov V-cycle with Schwarz pre- and post-smoother is investigated.
A direct solver (PARDISO) is used to solve the coarsest system; one may choose another direct solver here, e.g. MUMPS.
In practice, PARDISO is more robust for ill-conditioned systems, therefore we stick to PARDISO wherever a direct solver is needed in this performance analysis.
Note: whether the p-two-grid is used inside the Schwarz blocks or as a standalone preconditioner, its coarse system is also solved by a direct solver.

We distinguish four phases of every solver scenario:
\begin{itemize}
\item Matrix assembly: assembly of the operator (block) matrix
\item Aggregation basis init: creation of the multigrid sequence, which contains the information about the transformations between the multigrid levels
\item Solver init: hand over and assemble the relevant data for the chosen solver, e.g. the operator matrix
\item Solver run: solution of the equation system defined by the operator matrix, the vector of DG coordinates, and the given RHS
\end{itemize}
Matrix assembly and aggregation basis init are discretization-specific, whereas solver init and solver run are specific to the chosen solver.

\subsection{Introduction of solvers}

\subsubsection{Linear solver: p-two-grid}
\label{sec:ptg_gmres}
The p-two-grid algorithm can be used as a left preconditioner for the well-known GMRES algorithm, or as the solver for the Schwarz blocks in the orthonormalization multigrid algorithm described in \ref{alg:OrthoMG}.

\subsubsection{Linear solver: V-Krylov-cycle with Schwarz smoother}
\label{sec:kcycle}
The orthonormalization multigrid is a combination of a V-cycle of a geometric multigrid (or algebraic, as the agglomeration of cells is graph-based) with an additive Schwarz smoother and a projection onto the history of residual contributions of the individual components (smoother and coarse-grid correction).
A schematic of the solver is shown in Figure \ref{fig:SolverScheme}.
For more details on the solvers see \cite{OpenSoftwarePDE}.
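To make the structure of the additive Schwarz component more tangible, the following minimal Python/SciPy sketch applies a non-overlapping block-Schwarz preconditioner inside GMRES on a simple stand-in matrix.
It is an illustration of the idea only and not the BoSSS implementation: overlap, the p-two-grid block solver and the multigrid hierarchy are omitted, and all names and sizes in the sketch are placeholders.

\begin{verbatim}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def model_matrix(n):
    # 1D Poisson finite-difference matrix as a simple stand-in operator
    return sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1],
                    shape=(n, n), format="csr")

def additive_schwarz(A, block_size):
    # Non-overlapping additive Schwarz: factorize each diagonal block
    # once, then apply all block solves independently and sum them.
    n = A.shape[0]
    blocks = []
    for start in range(0, n, block_size):
        idx = np.arange(start, min(start + block_size, n))
        lu = spla.splu(sp.csc_matrix(A[idx, :][:, idx]))
        blocks.append((idx, lu))

    def apply(r):
        z = np.zeros_like(r, dtype=float)
        for idx, lu in blocks:
            z[idx] += lu.solve(r[idx])
        return z

    return spla.LinearOperator(A.shape, matvec=apply)

if __name__ == "__main__":
    n = 400
    A = model_matrix(n)
    b = np.ones(n)
    M = additive_schwarz(A, block_size=50)
    x, info = spla.gmres(A, b, M=M)
    print("gmres info:", info,
          "residual:", np.linalg.norm(b - A @ x))
\end{verbatim}

In the solvers described above, the same block-solve-and-sum pattern appears as the smoother inside the V-cycle, with a direct factorization (or the p-two-grid) taking the role of the per-block solve.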
\begin{figure}[!h]
\begin{center}
\includegraphics[scale=0.65]{./apdx-NodeSolverPerformance/solver-scheme.png}
\end{center}
\caption{
Scheme of the orthonormalization multigrid. The block solver is usually a direct solver such as PARDISO. Note that for the smoother it is sufficient to solve the system only approximately; this allows approximate solutions of the Schwarz blocks, e.g. via an ILU factorization.
}
\label{fig:SolverScheme}
\end{figure}

\subsection{DG-Poisson test problem}
\label{sec:ConstantDiffusionCoefficient}

The stationary 3D problem
\begin{equation}
\left\{ \begin{array} {rclll}
- \Delta T & = & g_{\domain} & \text{in}\ \Omega = (0,10) \times (-1,1) \times (-1,1) & \\
% ----
T & = & g_D = 0 & \text{on}\ \Gamma_D = \{ (x,y,z) \in \real^3; \ x = 0 \} & \text{Dirichlet-boundary} \\
% ----
\nabla T \cdot \vec{n}_{\partial \domain} & = & g_N & \text{on}\ \Gamma_N = \partial \Omega \setminus \Gamma_D & \text{Neumann-boundary}
\end{array} \right.
\label{eq:ContantCoeffPoissonBenchmark}
\end{equation}
where $g_{\domain}=-\sin(x)$, is investigated on a non-uniform Cartesian grid (equidistant in $z$, sine-graded spacing in the $x$ and $y$ directions).
The large Neumann boundary $\Gamma_N$ makes the problem harder for non-preconditioned iterative methods.
See Figure \ref{fig:ConstantCoeffRuntimes} for results.

\subsection{DG-Poisson: scaling of solvers}
\graphicspath{{./apdx-NodeSolverPerformance/PoissonConstCoeff/plots/}}

\begin{figure}[!h]
\begin{center}
\input{./apdx-NodeSolverPerformance/PoissonConstCoeff/plots/ConstCoeffPoissonScaling.tex}
\end{center}
\caption{
Solver wallclock-time vs. degrees-of-freedom, for different polynomial degrees $k$, for problem/Equation (\ref{eq:ContantCoeffPoissonBenchmark}).
}
\label{fig:ConstantCoeffRuntimes}
\end{figure}

\newpage
\subsubsection{DG-Poisson: Krylov-cycle Profiling}

\begin{figure}[!h]
\begin{center}
\input{./apdx-NodeSolverPerformance/PoissonConstCoeff/plots/ConstCoeffPoissonexp_Kcycle_schwarz.tex}
\end{center}
\caption{
Investigation of the runtime of different code parts of the V-kcycle with additive Schwarz smoother (p-two-grid as block solver). Wallclock-time vs. degrees-of-freedom, for different polynomial degrees $k$, for problem/Equation (\ref{eq:ContantCoeffPoissonBenchmark}).
}
\label{fig:SIP_blockJacobianPCG}
\end{figure}

\newpage
\subsubsection{DG-Poisson: preconditioned GMRES Profiling}
\begin{figure}[!h]
\begin{center}
\input{./apdx-NodeSolverPerformance/PoissonConstCoeff/plots/ConstCoeffPoissonexp_gmres_levelpmg.tex}
\end{center}
\caption{
Investigation of the runtime of different code parts of the preconditioned GMRES algorithm. Wallclock-time vs. degrees-of-freedom, for different polynomial degrees $k$, for problem/Equation (\ref{eq:ContantCoeffPoissonBenchmark}).
}
\label{fig:SIP_SchwarzPGC}
\end{figure}

\newpage
\subsection{Xdg-Poisson test problem}
\label{sec:XdgPoisson}

\newcommand{\frakA}{\mathfrak{A}}
\newcommand{\frakB}{\mathfrak{B}}
\newcommand{\nOmega}{\vec{n}_{\partial \Omega } }
%\newcommand*{\jump}[1]{\left\llbracket {#1} \right\rrbracket}
\newcommand{\frakI}{\mathfrak{I}}
\newcommand{\nI}{\vec{n}_\frakI}

The test problem can be considered as a stationary, three-dimensional heat equation with a source term and two phases:
\begin{equation}
\left\{ \begin{array}{rll}
- \mu \Delta u & = f & \text{ in } \Omega \setminus \frakI , \\
\jump{u} & = 0 & \text{ on } \frakI , \\
\jump{\mu \nabla u \cdot \nI} & = 0 & \text{ on } \frakI , \\
u & = g_\text{Diri} & \text{ on } \Gamma_\mathrm{Diri} , \\
\nabla u \cdot \nOmega & = g_\text{Neu} & \text{ on } \Gamma_\mathrm{Neu} , \\
\end{array} \right.
\label{eq:XdgPoissonBenchmark}
\end{equation}
with a constant diffusion coefficient in each subdomain,
\begin{equation}
\mu (\vec{x}) = \left\{ \begin{array}{ll}
\mu_\frakA & \text{for } \vec{x} \in \frakA, \\
\mu_\frakB & \text{for } \vec{x} \in \frakB, \\
\end{array} \right.
\label{eq:DiscDiffKoeff}
\end{equation}
where $\mu_\frakA=1$ (inner phase) and $\mu_\frakB=1000$ (outer phase) characterize the two phases.
The problem is investigated on a uniform, equidistant Cartesian grid.
See Figure \ref{fig:XdgRuntimes} for results.

\graphicspath{{./apdx-NodeSolverPerformance/XDGPoisson/plots/}}

\subsubsection{Xdg-Poisson: scaling of solvers}
\begin{figure}[!h]
\begin{center}
\input{./apdx-NodeSolverPerformance/XDGPoisson/plots/XdgPoissonScaling.tex}
\end{center}
\caption{
Solver runtime vs. degrees-of-freedom, for different polynomial degrees $k$, for problem/Equation (\ref{eq:XdgPoissonBenchmark}).
}
\label{fig:XdgRuntimes}
\end{figure}

\newpage
\subsubsection{Xdg-Poisson: Krylov-cycle Profiling}
\begin{figure}[!h]
\begin{center}
\input{./apdx-NodeSolverPerformance/XDGPoisson/plots/XdgPoissonexp_Kcycle_schwarz.tex}
\end{center}
\caption{
Investigation of the runtime of different code parts of the V-kcycle with additive Schwarz smoother. Solver runtime vs. degrees-of-freedom, for different polynomial degrees $k$, for problem/Equation (\ref{eq:XdgPoissonBenchmark}).
}
\label{fig:Xdg_blockJacobianPCG}
\end{figure}

\newpage
\subsubsection{Xdg-Poisson: preconditioned GMRES Profiling}
\begin{figure}[!h]
\begin{center}
\input{./apdx-NodeSolverPerformance/XDGPoisson/plots/XdgPoissonexp_gmres_levelpmg.tex}
\end{center}
\caption{
Investigation of the runtime of different code parts of the preconditioned GMRES algorithm. Solver runtime vs. degrees-of-freedom, for different polynomial degrees $k$, for problem/Equation (\ref{eq:XdgPoissonBenchmark}).
}
\label{fig:Xdg_SchwarzPGC}
\end{figure}

\newpage
\subsection{Xdg-Stokes test problem}

As a test case for a two-phase Stokes problem with the XDG approach, we choose a rigid ellipsoid inside a closed cube; the body does not touch the cube walls.
There is no gravitational force and the boundaries are treated as walls ($\vec{u}_\mathrm{Diri}=\vec{0}$).

\newcommand{\divergence}[1]{{\mathrm{div}\left({#1}\right)}}
\newcommand{\normI}{{\vec{n}_{\frakI}}}

\begin{equation}
\left\{ \begin{array}{rll}
\nabla p - \divergence{\mu ( \nabla \vec{u} + ( \nabla \vec{u})^T ) } & = 0 & \text{ in } \Omega \setminus \frakI = (-1,1)^3 , \\
\textrm{div}(\vec{u}) &= 0 & \text{ in } \Omega \setminus \frakI , \\
\jump{\vec{u}} & = 0 & \text{ on } \frakI , \\
\jump{ p \nI - \mu ( \nabla \vec{u} + ( \nabla \vec{u})^T ) \cdot \nI } & = \sigma \kappa \normI & \text{ on } \frakI , \\
\vec{u} & = \vec{u}_\mathrm{Diri} & \text{ on } \Gamma_\mathrm{Diri} .
\\
\end{array} \right.
\label{eq:XdgStokes-Benchmark}
\end{equation}
with piecewise constant density and viscosity in the two phases, i.e.
\begin{equation}
\rho(\vec{x}) = \left\{ \begin{array}{ll}
\rho_\frakA & \textrm{for } \vec{x} \in \frakA \\
\rho_\frakB & \textrm{for } \vec{x} \in \frakB \\
\end{array} \right.
\quad \textrm{and} \quad
\mu(\vec{x}) = \left\{ \begin{array}{ll}
\mu_\frakA & \textrm{for } \vec{x} \in \frakA \\
\mu_\frakB & \textrm{for } \vec{x} \in \frakB \\
\end{array} \right. .
\label{eq:defRhoAndMu}
\end{equation}
Furthermore, $\sigma$ denotes the surface tension and $\kappa$ the mean curvature of $\frakI$.
The body (ellipsoid) is defined by a level-set function:
\begin{equation}
\left( \frac{x}{\beta r} \right)^2 + \left( \frac{y}{r} \right)^2 + \left( \frac{z}{r} \right)^2 - 1 = 0 ,
\end{equation}
where $\beta=0.5$ determines the aspect ratio of the ellipsoid and $r=0.5$ is the radius. The physical parameters are:
\begin{table}[h]
\centering
\begin{tabular}{l|c}
$\rho_\frakA$ & $1 \cdot 10^{-3} \ kg / cm^3$\\
$\rho_\frakB$ & $1.2 \cdot 10^{-6} \ kg / cm^3$\\
$\mu_\frakA$ & $1 \cdot 10^{-5} \ kg / (cm \, sec)$\\
$\mu_\frakB$ & $17.1 \cdot 10^{-8} \ kg / (cm \, sec)$\\
$\sigma$ & $72.75 \cdot 10^{-3} \ kg / sec^2$\\
\end{tabular}
\end{table}
The surface tension induces a velocity field around the ellipsoid.
This test case is non-physical due to the rigid, static body: a realistic body would deform to balance the surface tension, which leads to an oscillation of the body.

\graphicspath{{./apdx-NodeSolverPerformance/XDGStokes/plots/}}
\subsubsection{Xdg-Stokes: scaling of solvers}
\begin{figure}[!h]
\begin{center}
\input{./apdx-NodeSolverPerformance/XDGStokes/plots/XdgStokesScaling.tex}
\end{center}
\caption{
Solver runtime vs. degrees-of-freedom, for different polynomial degrees $k$, for problem/Equation (\ref{eq:XdgStokes-Benchmark}).
}
\label{fig:XdgStokes-scaling}
\end{figure}
The size of the Schwarz blocks was set to 10{,}000 DOF.
It is known that this choice raises the number of iterations; therefore the number of Schwarz blocks will be kept constant in the next study.

\newpage
\subsubsection{Xdg-Stokes: Krylov-cycle Profiling}
\begin{figure}[!h]
\begin{center}
\input{./apdx-NodeSolverPerformance/XDGStokes/plots/XdgStokesexp_Kcycle_schwarz.tex}
\end{center}
\caption{
Investigation of the runtime of different code parts of the V-kcycle with additive Schwarz smoother. Solver runtime vs. degrees-of-freedom, for different polynomial degrees $k$, for problem/Equation (\ref{eq:XdgStokes-Benchmark}).
}
\label{fig:XdgStokes-kcylce}
\end{figure}

\cleardoublepage
\section{Solver Performance - Navier-Stokes problems}
\label{sec:SolverPerformanceNSE}

Different solver strategies are employed to solve the fully coupled, incompressible Navier-Stokes equations.
At the moment the following strategies can be examined:
\begin{itemize}
\item Linearization of the NSE with Newton (GMRES) or Picard iterations (a generic sketch contrasting the two strategies is given at the end of this section)
\item Solving the linear problem with a GMRES approach or the direct solver MUMPS
\item Preconditioning with additive Schwarz domain decomposition (with coarse solve on the coarsest multigrid level) and the direct solver MUMPS for the blocks (automatic)
\item Preconditioning with additive Schwarz kcycle blocks on the coarsest multigrid level (with coarse solve on the coarsest multigrid level) and the direct solver MUMPS for the blocks
\end{itemize}

\subsection{Driven Cavity 3D}

The problem
\begin{equation}
\left\{ \begin{array} {rclll}
\rho_f\Big(\frac{\partial \vec{u}}{\partial t}+ \vec{u} \cdot \nabla \vec{u}\Big) +\nabla p - \mu_f \Delta \vec{u} & = & \vec{f} & \text{in}\ \Omega = (-0.5,0.5)^3 & \\
% ----
\nabla \cdot \vec{u} & = & 0 & \text{in}\ \Omega & \\
\vec{u}_D & = & (1,0,0)^T & \text{on}\ \Gamma_D = \{ (x,y,z) \in \real^3; \ z = 0.5 \} & \text{Dirichlet-boundary}\\
\vec{u}_W & = & \vec{0} & \text{on}\ \Gamma_W = \partial \Omega \setminus \Gamma_D & \text{Dirichlet-boundary}\\
\vec{u}_0(x,y,z) & = & (1,0,0)^T & \text{in}\ \Omega & \text{Initial condition}
\end{array} \right.
\label{eq:NavierStokesCavityBenchmark}
\end{equation}
is investigated on different Cartesian grids.
The physical parameters of the fluid are chosen as $\rho_f=1$ and $\mu_f=0.0025$, which, with unit lid velocity and unit edge length, corresponds to a Reynolds number of $Re = \rho_f U L / \mu_f = 400$.

\graphicspath{{./apdx-NodeSolverPerformance/NavierStokesDrivenCavity/plots/}}
\begin{figure}[h!]
\begin{center}
\input{./apdx-NodeSolverPerformance/NavierStokesDrivenCavity/plots/NodePerformance.tex}
\end{center}
\caption{
Solver runtime vs. DoFs, for polynomial degree $k=2$ (velocity) and $k=1$ (pressure), for problem/Equation (\ref{eq:NavierStokesCavityBenchmark}).
}
\label{fig:DrivenCavity}
\end{figure}
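As a purely generic illustration of the difference between the Picard and Newton linearizations listed above (and not of the BoSSS implementation), the following Python sketch applies both strategies to a 1D steady viscous Burgers model problem; all parameter values and names are placeholders.

\begin{verbatim}
import numpy as np

# Newton vs. Picard linearization for the 1D steady viscous Burgers
# model problem  -nu * u'' + u * u' = f  on (0,1), u(0) = u(1) = 0,
# discretized by central finite differences.
nu = 0.1
n = 99                        # number of interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# manufactured solution u_e = sin(pi x) and matching right-hand side
u_exact = np.sin(np.pi * x)
f = (nu * np.pi**2 * np.sin(np.pi * x)
     + np.pi * np.sin(np.pi * x) * np.cos(np.pi * x))

# difference operators (homogeneous Dirichlet boundary values)
D1 = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * h)
D2 = (np.diag(np.ones(n - 1), 1) - 2 * np.eye(n)
      + np.diag(np.ones(n - 1), -1)) / h**2

def residual(u):
    return -nu * D2 @ u + u * (D1 @ u) - f

def solve(method, tol=1e-10, max_it=50):
    u = np.zeros(n)
    for k in range(max_it):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        if method == "picard":
            # lag the advection velocity and solve a linear problem
            u = np.linalg.solve(-nu * D2 + np.diag(u) @ D1, f)
        else:
            # Newton: full Jacobian of the residual
            J = -nu * D2 + np.diag(D1 @ u) + np.diag(u) @ D1
            u = u + np.linalg.solve(J, -r)
    return u, k

for method in ("picard", "newton"):
    u, k = solve(method)
    print(method, "iterations:", k,
          "max error:", np.abs(u - u_exact).max())
\end{verbatim}

In BoSSS itself the linearized systems are, of course, the full spatially discretized Navier-Stokes operators and are handled by the solver strategies listed above; the sketch merely contrasts the convergence behaviour of the two linearization approaches.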
{ "alphanum_fraction": 0.7155988289, "avg_line_length": 40.2383561644, "ext": "tex", "hexsha": "ea3fe6f357106990f7e57ca8d501ebc031f36986", "lang": "TeX", "max_forks_count": 12, "max_forks_repo_forks_event_max_datetime": "2021-05-07T07:49:27.000Z", "max_forks_repo_forks_event_min_datetime": "2018-01-05T19:52:35.000Z", "max_forks_repo_head_hexsha": "974f3eee826424a213e68d8d456d380aeb7cd7e9", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "FDYdarmstadt/BoSSS", "max_forks_repo_path": "doc/handbook/apdx-NodeSolverPerformance/NodeSolverPerformance.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "974f3eee826424a213e68d8d456d380aeb7cd7e9", "max_issues_repo_issues_event_max_datetime": "2020-07-20T15:34:22.000Z", "max_issues_repo_issues_event_min_datetime": "2020-07-20T15:32:56.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "FDYdarmstadt/BoSSS", "max_issues_repo_path": "doc/handbook/apdx-NodeSolverPerformance/NodeSolverPerformance.tex", "max_line_length": 426, "max_stars_count": 22, "max_stars_repo_head_hexsha": "974f3eee826424a213e68d8d456d380aeb7cd7e9", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "FDYdarmstadt/BoSSS", "max_stars_repo_path": "doc/handbook/apdx-NodeSolverPerformance/NodeSolverPerformance.tex", "max_stars_repo_stars_event_max_datetime": "2021-05-25T13:12:17.000Z", "max_stars_repo_stars_event_min_datetime": "2017-06-08T05:53:17.000Z", "num_tokens": 4600, "size": 14687 }