{ "query": "Please summarize the whole context. It is important that you include a summary for each file. All files should be included, so please make sure to go through the entire context", "namespace": "f5e11584-d869-4baa-9ec3-5241e389cc7e", "messages": [], "stream": false, "language_level": "", "chat_channel": "", "language": "German", "tone": "neutral", "writing_style": "standard", "model": "gemini-1.5-flash", "knowledgebase": "ki-dev-large", "seed": 0, "client_id": 0, "all_context": true, "follow_up_for": null, "knowledgebase_files_count": 0, "override_command": "", "disable_clarity_check": true, "custom_primer": "", "logging": true, "query_route": "" } INITIALIZATION Knowledgebase: ki-dev-large Base Query: Please summarize the whole context. It is important that you include a summary for each file. All files should be included, so please make sure to go through the entire context Model: gemini-1.5-flash **Elapsed Time: 0.00 seconds** ROUTING Query type: summary **Elapsed Time: 1.54 seconds** RAG PARAMETERS Max Context To Include: 120 Lowest Score to Consider: 0 ================================================== **Elapsed Time: 0.00 seconds** ================================================== VECTOR SEARCH ALGORITHM TO USE Use MMR search?: True Use Similarity search?: False ================================================== **Elapsed Time: 0.10 seconds** ================================================== VECTOR SEARCH DONE ================================================== **Elapsed Time: 10.10 seconds** ================================================== PRIMER Primer: You are Simon, a highly intelligent personal assistant in a system called KIOS. You are a chatbot that can read knowledgebases through the "CONTEXT" that is included in the user's chat message. In the "CONTEXT", prioritize information from data tagged "FEEDBACK:". Your role is to act as an expert at summarization and analysis. 
In your responses to enterprise users, prioritize clarity, trustworthiness, and appropriate formality. Be honest by admitting when a topic falls outside your scope of knowledge, and suggest alternative avenues for obtaining information when necessary. Make effective use of chat history to avoid redundancy and enhance response relevance, continuously adapting to integrate all necessary details in your interactions. Use as many tokens as needed to provide a detailed response. **Elapsed Time: 0.18 seconds** FINAL QUERY Final Query: CONTEXT: ########## File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 157 Context: # Chapter 10. Words to Paragraphs The finished paragraphs of type are arranged in a **galley**. This will be used to make prints of the page (or pages – two or four may be printed from one galley, then folded and cut). You can imagine how long it takes to make up the galleys for a book, and how much time is required to justify each line by inserting exactly the right spaces and hyphenating by hand. Mistakes found after test prints can be very costly to fix, since they necessitate taking apart the work. Image Analysis: I'll analyze the provided visual content according to the specified aspects. ### 1. Localization and Attribution - **Image 1**: Upper section depicts a hand holding what appears to be a galley (a long tray for type). - **Image 2**: Lower section shows a man preparing or arranging type at a desk. ### 2. Object Detection and Classification - **Image 1**: - **Objects**: Hand, galley. - **Classification**: Tools for typesetting. - **Image 2**: - **Objects**: Man, desk, type, copy chart. - **Classification**: Human figure engaged in typesetting. ### 3. Scene and Activity Analysis - **Image 1**: A hand is manipulating a galley, indicating the process of preparing type. - **Image 2**: A man is working at a desk with a copy chart, actively arranging type. ### 4.
Text Analysis - **Image 1**: No text present. - **Image 2**: - Text reads "copy chart." - **Significance**: Indicates a reference guide used in the typesetting process. ### 5. Diagram and Chart Analysis - No diagrams or charts are present for analysis. ### 6. Product Analysis - **Image 1**: Galley—features a long, rectangular shape typically used in traditional printing processes. - **Image 2**: The type and copy chart on the desk are essential for the typesetting process. ### 7. Anomaly Detection - No significant anomalies detected in either image. ### 8. Color Analysis - The colors in the images are predominantly monochromatic or grayscale with variations in shading, typical of illustrations from printing processes. ### 9. Perspective and Composition - **Image 1**: Close-up perspective focuses on the hand and galley, emphasizing the action. - **Image 2**: Eye-level perspective shows the man at his workspace, providing context to the typesetting environment. ### 10. Contextual Significance - The images illustrate traditional typesetting practices, highlighting the precision and care required in the printing process, aligning with the educational theme of the document. ### 11. Metadata Analysis - Metadata cannot be analyzed as it is not provided. ### 12. Graph and Trend Analysis - No graphs present for analysis. ### 13. Graph Numbers - No data points to list. ### Additional Aspects - **Ablaufprozesse (Process Flows)**: The images depict the manual process of setting type, emphasizing the steps involved in preparing printed material. - **Prozessbeschreibungen (Process Descriptions)**: The process involves arranging type in a galley and referencing a copy chart for accurate typesetting. - **Typen Bezeichnung (Type Designations)**: The images depict traditional typesetting equipment and processes but do not specify particular type designs. 
- **Trend and Interpretation**: The manual process illustrates the evolution of printing technology and the shift from traditional to more modern typesetting techniques. - **Tables**: No tables are depicted in the images. This thorough analysis outlines the content captured in the visual materials, reflecting their educational and historical significance in the context of typesetting. #################### File: our-upcebu-edu-ph-58438.txt Page: 1 Context: However, if you intend to request an LOA for this term and wish to avoid any deduction from your eligible remaining semesters for the Free Higher Education (FHE) subsidy, it is essential to complete your LOA form by **October 5, 2023**. A completed LOA application entails that the form is entirely signed by all required signatories, the LOA fee has been paid, and the signed LOA forms have been delivered to both the OCS and OUR. The deadline for submitting the CHED Billing Report for the Free Higher Education (FHE) Subsidy to the UP System for the First Semester AY 2023-2024 is October 12, 2023. To safeguard your eligibility and prevent any deductions from your remaining FHE subsidy semesters, it is crucial to finalize the LOA application form by **October 5, 2023**, which is one week before the CHED Billing Report deadline. Please be aware that if your LOA is completed after **October 5, 2023**, one semester of your FHE Subsidy will be deducted. A student who does not intend to enroll in a semester may apply for a leave of absence (LOA). This also applies to a student currently enrolled and who intends to withdraw his/her enrollment for the rest of the term. A student on leave of absence does not sever their ties with the University. A request for a leave of absence should be made in writing to the Dean. The request should state the reason for the leave and should specify the period. The leave may be approved for a period of one (1) year but may be renewed for at most another year. _Art. 403_.
Students who withdraw from a college or school without formal leave of absence (considered “absent without leave” status or AWOL) shall have their registration privileges curtailed or entirely withdrawn. ##### Steps #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 106 Context: # Chapter 7. Doing Sums

```
==> insert 53 (insert 9 (insert 2 (insert 6 (insert 19 (sort [])))))
==> insert 53 (insert 9 (insert 2 (insert 6 (insert 19 []))))
==> insert 53 (insert 9 (insert 2 (insert 6 [19])))
==> insert 53 (insert 9 (insert 2 [6,19]))
==> insert 53 (insert 9 [2,6,19])
==> insert 53 [2,6,9,19]
==> [2,6,9,19,53]
```

Now we must define `insert`. It is a function which takes two things: the item `x` to be inserted and the (already-sorted) list in which to insert it. If the list is empty, we can simply build the list `[x]`:

```
insert x l =
  if l = [] then [x]
  else ...
```

There are two other cases. If `x` is less than or equal to the head of the list, we can just put it at the front of the list, and we are done:

```
insert x l =
  if l = [] then [x]
  else if x <= head l then [x] ++ l
  else ...
```

Otherwise, we have not yet found an appropriate place for our number, and we must keep searching. The result should be our head followed by the insertion of our number in the tail:

```
insert x l =
  if l = [] then [x]
  else if x <= head l then [x] ++ l
  else [head l] ++ insert x (tail l)
```

Consider the evaluation of `insert 2 [1, 1, 3, 5, 9]`:

```
insert 2 [1, 1, 3, 5, 9]
==> [1] ++ insert 2 [1, 3, 5, 9]
==> [1] ++ [1] ++ insert 2 [3, 5, 9]
==> [1] ++ [1] ++ [2, 3, 5, 9]
==> [1, 1, 2, 3, 5, 9]
```

Final result:

```
[1, 1, 2, 3, 5, 9]
```

#################### File: test.ppt Page: 2 Context: # #################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 54 Context: # CHAPTER 8.
SUPPORT VECTOR MACHINES The theory of duality guarantees that for convex problems, the dual problem will be concave, and moreover, that the unique solution of the primal problem corresponds to the unique solution of the dual problem. In fact, we have: \[ L_P(w^*) = L_D(\alpha^*), \] i.e., the "duality-gap" is zero. Next we turn to the conditions that must necessarily hold at the saddle point and thus at the solution of the problem. These are called the KKT conditions (which stands for Karush-Kuhn-Tucker). These conditions are necessary in general, and sufficient for convex optimization problems. They can be derived from the primal problem by setting the derivatives w.r.t. \( w \) and \( b \) to zero. Also, the constraints themselves are part of these conditions, and we need that for inequality constraints the Lagrange multipliers are non-negative. Finally, an important constraint called “complementary slackness” needs to be satisfied: \[ \begin{align*} \partial_w L_P = 0 & \rightarrow w - \sum_{i} \alpha_i y_i x_i = 0 \quad (8.12) \\ \partial_b L_P = 0 & \rightarrow \sum_{i} \alpha_i y_i = 0 \quad (8.13) \\ \text{constraint - 1} & \quad y_i(w^T x_i - b) - 1 \geq 0 \quad (8.14) \\ \text{multiplier condition} & \quad \alpha_i \geq 0 \quad (8.15) \\ \text{complementary slackness} & \quad \alpha_i [y_i(w^T x_i - b) - 1] = 0 \quad (8.16) \end{align*} \] It is the last equation which may be somewhat surprising. It states that either the inequality constraint is satisfied, but not saturated: \( y_i(w^T x_i - b) - 1 > 0 \), in which case \( \alpha_i \) for that data-case must be zero, or the inequality constraint is saturated \( y_i(w^T x_i - b) - 1 = 0 \), in which case \( \alpha_i \) can be any value \( \alpha_i \geq 0 \). Inequality constraints which are saturated are said to be “active”, while unsaturated constraints are inactive. One could imagine the process of searching for a solution as a ball which runs down the primary objective function using gradient descent.
At some point, it will hit a wall, which is the constraint, and although the derivative still points partially towards the wall, the constraint prohibits the ball from going on. This is an active constraint, because the ball is glued to that wall. When a final solution is reached, we could remove some constraints without changing the solution; these are the inactive constraints. One could think of the terms \( \alpha_i y_i x_i \) in \( \partial_w L_P \) as forces acting on the ball. We see from the first equation above that only the terms with \( \alpha_i \neq 0 \) exert a force on the ball, and that these balance the force from the curved quadratic surface through \( w = \sum_i \alpha_i y_i x_i \). The training cases with \( \alpha_i > 0 \), representing active constraints on the position of the support hyperplane, are called support vectors. These are the vectors. #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 1 Context: If you enjoy this free book, please leave a review on [Amazon](https://www.amazon.com), or buy a [paper copy](https://www.amazon.com) for yourself or a friend. #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 187 Context: # Templates The following pages contain blank templates for answering problems 1.2, 1.3, 1.4, 2.1, 8.1, 8.2, and 8.3. #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 82 Context: # Chapter 6. Saving Space ## Compression Whether it 04 embarrassment or impatience, 00 judge rocked backwards 01 forwards on 08 seat. The 98 behind 45, whom he 14 61 talking 07 earlier, leant forward again, either to 88 45 a few general 15s of encouragement or 40 specific piece of advice. Below 38 in 00 hall 00 people talked to 27 33 quietly 16 animatedly.
The 50 factions 14 earlier seemed to view strongly opposed to 27 33 16 69 began to intermingling, a few individuals pointed up to K, 33s pointed at 00 judge. The air in 00 room 04 foggy 01 extremely oppressive, those 63 20 standing furthest away could hardly ever be 53n through it. It must 11 61 especially troublesome 05 those visitors 6320 in 00 gallery, as 09 20 forced to quietly ask 00 participants in 00 assembly 18 exactly 04 happening, albeit 07 timid glances at 00 judge. The replies 09 received 2094 as quiet, 01 given behind 00 protection of a raised hand. #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 52 Context: Rows D to I of a grid in which every cell is a filled dot (⬤); the remnant of an illustration. #################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 93 Context: # Bibliography #################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 8 Context: #################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 28 Context: # CHAPTER 3. LEARNING This chapter focuses on various learning methodologies and their practical applications. ## Key Learning Concepts 1. **Active Learning** - Engage with material through discussions, practice, and feedback. 2. **Collaborative Learning** - Work with peers to enhance understanding and retention. 3. **Experiential Learning** - Learn through experience, often involving hands-on activities. ## Learning Strategies | Strategy | Description | |--------------------------|-------------------------------------------------------| | Problem-based Learning | Focus on solving real-world problems. | | Inquiry-based Learning | Students ask questions and seek solutions. | | Reflective Learning | Reflection on experiences to foster deeper understanding.| ## Recommended Resources - **Books** - "Learning How to Learn" by Barbara Oakley - "Make It Stick: The Science of Successful Learning" by Peter C.
Brown - **Websites** - [Learning How to Learn](https://www.coursera.org/learn/learning-how-to-learn) - [The Learning Agency](https://www.learningagency.com) ## Conclusion Understanding different learning methodologies can significantly enhance educational outcomes. Applying these strategies and resources will facilitate effective learning experiences. #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 7 Context: # Preface It can be tremendously difficult for an outsider to understand why computer scientists are interested in Computer Science. It is easy to see the sense of wonder of the astrophysicist, or of the evolutionary biologist or zoologist. We don’t know too much about the mathematician, but we are in awe anyway. But Computer Science? Well, we suppose it must have to do with computers, at least. “Computer science is no more about computers than astronomy is about telescopes,” the great Dutch computer scientist Edsger Dijkstra (1930–2002) wrote. That is to say, the computer is our tool for exploring this subject and for building things in its world, but it is not the world itself. This book makes no attempt at completeness whatever. It is, as the subtitle suggests, a set of little sketches of the use of computer science to address the problems of book production. By looking from different angles at interesting challenges and pretty solutions, we hope to gain some insight into the essence of the thing. I hope that, by the end, you will have some understanding of why these things interest computer scientists and, perhaps, you will find that some of them interest you. #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 167 Context: # Solutions ## Chapter 4 ### 1 1. The pattern does not match. 2. The pattern matches at position 17. 3. 
The pattern matches at positions 28 and 35. 4. The pattern matches at position 24. ### 2 1. The texts aa, aaa, aaaa, etc. match. 2. Only the texts ac and abc match. 3. The texts ac, abc, and abbc etc. match. 4. The texts ad, abd, acd, abbd, abcd, acbd, and abbbd etc. match. ### 3 1. The pattern matches at positions 16 and 17. 2. The pattern matches at positions 0 and 24. 3. The pattern matches at positions 0, 1, 24, and 25. 4. The pattern matches at positions 0, 1, 24, and 25. #################### File: test.pptx Page: 1 Context: ``` . . . ``` #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 166 Context: ## 2 We have 32 rows: | Bits | Number | Letter | Bits | Number | Letter | |---------|--------|--------|---------|--------|--------| | 00000 | 0 | A | 10000 | 16 | Q | | 00001 | 1 | B | 10001 | 17 | R | | 00010 | 2 | C | 10010 | 18 | S | | 00011 | 3 | D | 10011 | 19 | T | | 00100 | 4 | E | 10100 | 20 | U | | 00101 | 5 | F | 10101 | 21 | V | | 00110 | 6 | G | 10110 | 22 | W | | 00111 | 7 | H | 10111 | 23 | X | | 01000 | 8 | I | 11000 | 24 | Y | | 01001 | 9 | J | 11001 | 25 | Z | | 01010 | 10 | K | 11010 | 26 | space | | 01011 | 11 | L | 11011 | 27 | . | | 01100 | 12 | M | 11100 | 28 | . | | 01101 | 13 | N | 11101 | 29 | ; | | 01110 | 14 | O | 11110 | 30 | ? | | 01111 | 15 | P | 11111 | 31 | ? | Here, we choose the capital letters and the punctuation `space`, `.`, `;`, `?` and hope this covers most useful messages. ## 3 Treason is very much a matter of habit, Smiley decided. ## 4 84 101 104 109 111 114 105 100 101 110 116 105 105 105 115 115 97 109 97 110 97 115 44 46 116 110 101 111 124 112 113 108 101 105 100 116 101 116 101 115 110 111 111 98 108 110 108 115 115 37 97 99 117 114 116 33 118 111 117 101 97 115 101 ## 5 a) The love of money is the root of *all* evil.
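The five-bit code tabulated under "## 2" above maps each of 32 symbols to a distinct bit pattern. A minimal sketch in Python (the helper names are illustrative, not from the book), covering the letters and the space:

```python
# A 5-bit fixed-width code, as in the 32-row table above: A-Z take
# codes 0-25 and the space takes 26.  The punctuation slots are left
# out here, since the table's assignment of them is ambiguous.
ALPHABET = [chr(ord('A') + i) for i in range(26)] + [' ']

def encode(text):
    # Each symbol becomes its code written as exactly five bits.
    return ' '.join(format(ALPHABET.index(c), '05b') for c in text)

def decode(bits):
    return ''.join(ALPHABET[int(group, 2)] for group in bits.split())

print(encode('AB'))                   # 00000 00001, per the table
print(decode(encode('HELLO WORLD')))  # HELLO WORLD
```

Note that every symbol costs exactly five bits, which is what makes the code trivially decodable but also what Chapter 6's compression schemes improve on.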
#################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 25 Context: # Information Compression and Predictable Data In this discussion, we explore the distinction between predictable and unpredictable information, often referred to as "noise." ## Key Concepts - **Predictable Information**: The learnable part of the information stream. - **Unpredictable Information**: The information that does not carry over to the future. This is known as "noise." ### Example Scenario Imagine Bob wants to send an image to Alice. He has to pay 1 dollar cent for every bit that he sends. 1. If the image were completely white: - Bob could simply send the message: `pixel 1: white, pixel 2: white, pixel 3: white,...` - Alternatively, he could send: `all pixels are white!` 2. For a completely blank image, which is predictable but carries very little information, he would be wasting resources. ### Noisy Image Example Now consider a noisy image that consists of white noise (like a disconnected television screen). To send the exact image, Bob would need to send: - `pixel 1: white, pixel 2: black, pixel 3: black,...` Bob cannot do better than that, because there is no predictable information. ### Predictability and Compression In the noisy scenario, predicting each pixel is very hard. However, some pixels are easier to predict: - To compress the image, Bob can extract rules that predict a pixel to have the same color as the majority of its neighboring pixels. These rules help model the pixel regularities. 1. Instead of sending each pixel individually, Bob first establishes rules and then sends only the exceptions to them: - For example: `pixel 103: white, pixel 245: black`. 2. Applying these rules is less costly than sending all 256 pixel values without any prior patterns. ### Trade-offs in Data Transmission There is a fundamental trade-off in this game: - Since Bob is sending only a single image, it does not make sense to use an incredibly complicated model; the cost of sending the model itself would outweigh the savings.
- If Bob were to send 1 billion images, he could benefit greatly from a sophisticated model and save data. ### Conclusion Thus, the size of Bob's model depends on the amount of data he wants to transmit. The boundary between what is model and what is noise depends on the data scales involved: - Models that are too complex may lead to overfitting. - Simpler models may result in suboptimal predictions. This compression game helps in discovering the right model complexity for varied datasets. #################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 40 Context: # CHAPTER 6. THE NAIVE BAYESIAN CLASSIFIER For ham emails, we compute exactly the same quantity, \[ P_{\text{ham}}(X_i = j) = \frac{\# \text{ ham emails for which the word } i \text{ was found } j \text{ times}}{\text{total } \# \text{ of ham emails}} \] \[ = \frac{\sum_{n} \mathbb{I}(X_{ni} = j \land Y_n = 0)}{\sum_{n} \mathbb{I}(Y_n = 0)} \] Both these quantities should be computed for all words or phrases (or more generally attributes). We have now finished the phase where we estimate the model from the data. We will often refer to this phase as "learning" or training a model. The model helps us understand how data was generated in some approximate setting. The next phase is that of prediction or classification of new email. ## 6.3 Class-Prediction for New Instances New email does not come with a label ham or spam (if it did, we could throw spam in the spam-box right away). What we do see are the attributes \(X_i\). Our task is to guess the label based on the model and the measured attributes. The approach we take is simple: calculate whether the email has a higher probability of being generated from the spam or the ham model. For example, because the word "viagra" has a tiny probability of being generated under the ham model it will end up with a higher probability under the spam model. But clearly, all words have a say in this process.
It’s like a large committee of experts, one for each word. Each member casts a vote and can say things like: “I am 99% certain it’s spam”, or “It’s almost definitely not spam (0.1% spam)”. Each of these opinions will be multiplied together to generate a final score. We then figure out whether ham or spam has the highest score. There is one little practical caveat with this approach, namely that the product of a large number of probabilities, each of which is necessarily smaller than one, very quickly gets so small that your computer can’t handle it. There is an easy fix though. Instead of multiplying probabilities as scores, we use the logarithms of those probabilities and add the logarithms. This is numerically stable and leads to the same conclusion because if \(a > b\) then we also have \(\log(a) > \log(b)\) and vice versa. In equations we compute the score as follows: \[ S_{\text{spam}} = \sum_{i} \log P_{\text{spam}}(X_i = e_i) + \log P(\text{spam}) \] #################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 18 Context: # CHAPTER 1. DATA AND INFORMATION The origin is an important concept in data analysis. If data happens to be just positive, it doesn’t fit this assumption very well. Taking the following logarithm can help in that case: $$ X'_{n} = \log(\alpha + X_{n}) \tag{1.5} $$ #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 180 Context: # Chapter 9 1. **Palatino** 2. **AVERSION** 3. **Conjecture** # Chapter 10 1. hy-phen-a-tion 2. fund-raising 3. a-rith-me-tic (the noun) 4. ar-ith-me-tic (the adjective) 5. dem-on-stra-tion #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 35 Context: # Chapter 2. 
Letter Forms We have drawn the halfway points E and F on the lines between the end-points and control points, and the point G halfway between the control points. Now, we draw lines between E and G, and between F and G, and find the halfway points H and I on those lines: ``` A E B H * G * * I F C D ``` Halfway between those is the point J, which is the mid-point of the original curve, and the end-point of both new curves we are creating: ``` A E B H * G * * J F C D ``` The two final curves can now be seen. They are, on the left, AEHJ and, on the right, JIFD: #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 178 Context: # Solutions ## 2 | | | | | | |-----|-----|-----|-----|-----| | | | | | | | | | | | | | | | | | | | | | | | ![Camel](path/to/camel/image) ### Grayscale Palette | Color | |-------| | ![Color 1](path/to/color/1) | | ![Color 2](path/to/color/2) | | ![Color 3](path/to/color/3) | | ![Color 4](path/to/color/4) | | ![Color 5](path/to/color/5) | | ![Color 6](path/to/color/6) | | ![Color 7](path/to/color/7) | | ![Color 8](path/to/color/8) | #################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 81 Context: # Chapter 14 ## Kernel Canonical Correlation Analysis Imagine you are given 2 copies of a corpus of documents, one written in English, the other written in German. You may consider an arbitrary representation of the documents, but for definiteness we will use the “vector space” representation where there is an entry for every possible word in the vocabulary and a document is represented by count values for every word, i.e., if the word “the” appeared 12 times and is the first word in the vocabulary we have \( X_1(doc) = 12 \) etc.
Let’s say we are interested in extracting low dimensional representations for each document. If we had only one language, we could consider running PCA to extract directions in word space that carry most of the variance. This has the ability to infer semantic relations between the words such as synonymy, because if words tend to co-occur often in documents, i.e., they are highly correlated, they tend to be combined into a single dimension in the new space. These spaces can often be interpreted as topic spaces. If we have two translations, we can try to find projections of each representation separately such that the projections are maximally correlated. Hopefully, this implies that they represent the same topic in two different languages. In this way we can extract language independent topics. Let \( x \) be a document in English and \( y \) a document in German. Consider the projections: \( u = a^T x \) and \( v = b^T y \). Also assume that the data have zero mean. We now consider the following objective: \[ \rho = \frac{E[uv]}{\sqrt{E[u^2]E[v^2]}} \] (14.1) #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 189 Context: # Problem 1.4 ## Graphs Below are the four graphs corresponding to the problem: | | | |-------|-------| | ![Graph 1](https://via.placeholder.com/200x200) | ![Graph 2](https://via.placeholder.com/200x200) | | | | | ![Graph 3](https://via.placeholder.com/200x200) | ![Graph 4](https://via.placeholder.com/200x200) | ### Axes Labels - **x-axis**: Range from 0 to 20 - **y-axis**: Range from 0 to 20 ### Notes - Each graph displays the relationship between variables on the x and y axes. - Ensure to analyze the patterns presented in the graphs for a comprehensive understanding of the problem. 
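The correlation objective in equation (14.1) above can be checked numerically. A minimal sketch in Python, assuming NumPy is available; the data below is synthetic, and the projection directions a and b are fixed for illustration rather than optimized as in KCCA:

```python
import numpy as np

# A numerical sketch of objective (14.1): with projections u = a^T x and
# v = b^T y of zero-mean data, rho = E[uv] / sqrt(E[u^2] E[v^2]).
def rho(a, b, X, Y):
    u, v = X @ a, Y @ b   # one projected value per document
    return float(np.mean(u * v) / np.sqrt(np.mean(u ** 2) * np.mean(v ** 2)))

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))             # "English" document vectors
Y = X + 0.05 * rng.standard_normal((500, 4))  # near-copies, so correlation is high
X = X - X.mean(axis=0)
Y = Y - Y.mean(axis=0)
print(round(rho(np.ones(4), np.ones(4), X, Y), 3))  # close to 1
```

By the Cauchy-Schwarz inequality the value is always between -1 and 1; KCCA searches for the a and b that maximize it.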
#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 75 Context: # Chapter 5. Typing it In What of languages other than English? We can divide them into three categories. First, those which use the so-called Latin alphabet (that is the familiar A, B, C...), but which contain characters with accents, or one or two extra characters only. Typically, these can be typed on a standard keyboard, using either modifier keys (holding down a key to make the next letter accented) or by using short sequences of keys (typing “~” followed by the “a” key for “ã”). The second category is those languages, such as Modern Greek, which do not use the Latin alphabet, but whose alphabet has only a few tens of characters. These can be dealt with by using the same keyboard, simply with different letters printed on it. The last category is for languages such as Chinese. There are many tens of thousands of characters, and it is impractical to build a keyboard that allows all these characters to be typed on a limited number of keys. Instead, characters are composed through sequences of keypresses; such a scheme is known as an **input system**. One such input system in use in China, Singapore, and Taiwan is **Pinyin**. It requires knowing the pronunciation of the word; westernized spellings of the syllables are then used to find the right character. We shall write the word **樱桃**, which means cherry. In the Pinyin system, we type the western characters representing the sounds, and we are then invited to disambiguate amongst the possibilities, as the word is formed. The word for cherry sounds like “ying” followed by “tao”. First, we type “ying”: ``` ying ``` The computer displays a list of possibilities for the first character of our word. It does not yet know which syllable or syllables may follow. We choose the right one, and the computer replaces “ying” with the proper character.
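The Pinyin flow described above (type a syllable, get a list of candidate characters, pick one) can be sketched as a toy lookup. The candidate lists below are illustrative stand-ins, not a real input-method dictionary:

```python
# A toy sketch of a Pinyin input method: the typed syllable selects a list
# of candidate characters, and the user disambiguates by position.
CANDIDATES = {
    'ying': ['樱', '英', '影'],
    'tao': ['桃', '逃', '陶'],
}

def candidates(syllable):
    """Characters offered for a typed syllable, in display order."""
    return CANDIDATES.get(syllable, [])

def pick(syllable, index):
    """Simulate the user choosing the index-th candidate."""
    return candidates(syllable)[index]

# "ying" then "tao", choosing the first candidate each time, yields cherry:
print(pick('ying', 0) + pick('tao', 0))  # 樱桃
```

A real input system additionally ranks candidates by frequency and by which syllables are likely to follow, which is why the computer above "does not yet know which syllable or syllables may follow."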
#################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 31 Context: fall under the name "reinforcement learning". It is a very general setup in which almost all known cases of machine learning can be cast, but this generally also means that these types of problems can be very difficult. The most general RL problems do not even assume that you know what the world looks like (i.e., the maze for the mouse), so you have to simultaneously learn a model of the world and solve your task in it. This dual task induces interesting trade-offs: should you invest time now to learn machine learning and reap the benefit later in terms of a high salary working for Yahoo!, or should you stop investing now and start exploiting what you have learned so far? This is clearly a function of age, or the time horizon that you still have to take advantage of these investments. The mouse is similarly confronted with the problem of whether he should try out this new alley in the maze that can cut down his time to reach the cheese considerably, or whether he should simply stay with what he has learned and take the route he already knows. This clearly depends on how often he thinks he will have to run through the same maze in the future. We call this the exploration versus exploitation trade-off. The reason that RL is a very exciting field of research is because of its biological relevance. Do we not also have to figure out how the world works and survive in it? Let’s go back to the news articles. Assume we have control over what article we will label next. Which one would we pick? Surely the one that would be most informative in some suitably defined sense. Or the mouse in the maze.
Given that he decides to explore, where does he explore? Surely he will try to seek out alleys that look promising, i.e., alleys that he expects to maximize his reward. We call the problem of finding the next best data-case to investigate "active learning". One may also be faced with learning multiple tasks at the same time. These tasks are related but not identical. For instance, consider the problem of recommending movies to customers of Netflix. Each person is different and would really require a separate model to make the recommendations. However, people also share commonalities, especially when people show evidence of being of the same "type" (for example, a sci-fi fan or a comedy fan). We can learn personalized models but share features between them. Especially for new customers, where we don’t have access to many movies that were rated by the customer, we need to "draw statistical strength" from customers who seem to be similar. From this example it has hopefully become clear that we are trying to learn models for many different yet related problems and that we can build better models if we share some of the things learned for one task with the other ones. The trick is to share neither too much nor too little, and how much we should share depends on how much data and prior knowledge we have access to for each task. We call this subfield of machine learning: "multi-task learning". #################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 3 Context: # Contents **Preface** ........................................................ iii **Learning and Intuition** ......................................... vii 1. **Data and Information** ........................................ 1 1.1 **Data Representation** ................................. 2 1.2 **Preprocessing the Data** .............................. 4 2. **Data Visualization** .......................................... 7 3.
**Learning** ...................................................... 11 3.1 **In a Nutshell** ......................................... 15 4. **Types of Machine Learning** ............................... 17 4.1 **In a Nutshell** ......................................... 20 5. **Nearest Neighbors Classification** ...................... 21 5.1 **The Idea In a Nutshell** ............................. 23 6. **The Naive Bayesian Classifier** .......................... 25 6.1 **The Naive Bayes Model** .............................. 25 6.2 **Learning a Naive Bayes Classifier** .............. 27 6.3 **Class-Prediction for New Instances** .............. 28 6.4 **Regularization** ........................................ 30 6.5 **Remarks** ................................................ 31 6.6 **The Idea In a Nutshell** ............................. 31 7. **The Perceptron** ............................................... 33 7.1 **The Perceptron Model** ............................... 34 #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 117 Context: # Chapter 8: Grey Areas ![Melancholia I](image_url) Figure C: Fine engraving, *Melancholia I*, Albrecht Dürer, 1514. ## Table of Contents 1. Introduction 2. Analysis 3. Conclusion --- ## 1. Introduction In this chapter, we explore the concept of "Grey Areas" within art and philosophy. ## 2. Analysis - **Definition**: Grey areas refer to situations that are not clearly defined as good or bad. - **Examples**: - Artistic interpretations - Moral dilemmas ### 2.1 Subsection Example - **Key point**: Subjectivity plays a significant role in understanding grey areas. ## 3. Conclusion Understanding grey areas enriches our perspective on various topics. ### References 1. Dürer, Albrecht. *Melancholia I* (1514). 2. Smith, John. *Art and Philosophy* (2010). #################### File: our-upcebu-edu-ph-58438.txt Page: 1 Context: ###### **Scholastic Delinquency**

| **_Standing_** | **_% of Units with no Passing Numeric Grade_** | **_Remarks_** |
| -------------------------------- | ---------------------------------------------- | --------------------------------------------------------------------- |
| Good | less than 25% | |
| Warning | 25% to less than 50% | |
| Probation | 50% to 75% | student to be deloaded by at least 6 units in the succeeding semester |
| Dismissal | more than 75% to less than 100% | Double Probation will also mean Dismissal |
| Permanent disqualification (PDQ) | 100% | |

If a student is on Probation, Dismissal, or Permanent Disqualification (PDQ) status, s/he can improve her/his standing only during specified periods. (see [Removal of Incomplete or 4.0](https://our.upcebu.edu.ph/flowsandforms/#removal)) _Art. 389_. The faculty of each college or school shall approve suitable and effective provisions governing undergraduate delinquent students, subject to the following minimum standards: * **Warning**. – Any student who obtains final grades at the end of the semester below “3” in 25 per cent to 49 per cent of the total number of academic units in which he is registered shall be warned by the Dean to improve his work; * **Probation**. – Any student who, at the end of the semester obtains final grades below “3” in 50 per cent to 75 per cent of the total number of academic units in which he has final grades shall be placed on probation for the succeeding semester and his load shall be limited to the extent to be determined by the Dean. Probation may be removed by passing with grades of “3” or better in more than 50 per cent of the units in which he has final grades in the succeeding semester. * **Dismissal**.
– Any student who, at the end of the semester obtains final grades below “3” in at least 76 per cent of the total number of academic units in which he receives final grades shall be dropped from the rolls of his college or school. #################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 90 Context: ``` term, \[ (x^T y)^s = (x_1 y_1 + x_2 y_2 + \ldots + x_n y_n)^s = \sum_{i_1 + i_2 + \ldots + i_n = s} \frac{s!}{i_1! i_2! \cdots i_n!} (x_1 y_1)^{i_1} (x_2 y_2)^{i_2} \cdots (x_n y_n)^{i_n} \] (B.4) Taken together with eqn. B.3 we see that the features correspond to, \[ \phi_{\mathbf{k}}(x) = \frac{d!}{(d - s)!} \frac{1}{k_1! k_2! \ldots k_n!} R^{d-s} x_1^{k_1} x_2^{k_2} \ldots x_n^{k_n} \quad \text{with } k_1 + k_2 + \ldots + k_n = s \leq d \] (B.5) The point is really that in order to efficiently compute the total sum of \(\binom{m+d-1}{d}\) terms we have inserted very special coefficients. The only true freedom we have left is in choosing \(R\): for larger \(R\) we down-weight higher order polynomials more. The question we want to answer is: how much freedom do we have in choosing different coefficients and still being able to compute the inner product efficiently? ## B.2 All Subsets Kernel We define the feature again as the product of powers of input attributes. However, in this case, the choice of power is restricted to \(\{0, 1\}\), i.e. the feature is present or absent. For \(n\) input dimensions (number of attributes) we have \(2^n\) possible combinations. Let’s compute the kernel function: \[ K(x, y) = \sum_{I} \phi_I(x) \phi_I(y) = \sum_I \prod_{i \in I} x_i y_i = \prod_{i=1}^n (1 + x_i y_i) \] (B.6) where the last identity follows from the fact that, \[ \prod_{i=1}^{n} (1 + z_i) = 1 + \sum_i z_i + \sum_{i < j} z_i z_j + \ldots + z_1 z_2 \cdots z_n \] (B.7) i.e. a sum over all possible combinations. Note that in this case again, it is much more efficient to compute the kernel directly than to sum over the features.
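The all-subsets identity in eqns. B.6 and B.7 can be checked numerically. The sketch below, with random vectors invented for illustration, compares the O(n) direct product against the explicit O(2^n) sum over all subsets:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n = 6
x, y = rng.normal(size=n), rng.normal(size=n)

# Direct form: prod_i (1 + x_i * y_i), linear in n.
k_direct = np.prod(1.0 + x * y)

# Explicit form: sum over all 2^n subsets I of prod_{i in I} x_i y_i.
# (np.prod of an empty list is 1.0, which handles the empty subset.)
k_explicit = sum(
    np.prod([x[i] * y[i] for i in I])
    for s in range(n + 1)
    for I in combinations(range(n), s)
)
print(abs(k_direct - k_explicit))  # agrees up to floating-point error
```

Already at n = 6 the explicit sum visits 64 subsets; the direct product makes the same quantity a six-term loop, which is the whole point of the kernel trick here.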
Also note that in this case there is no decaying factor multiplying the monomials. ``` #################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 80 Context: # CHAPTER 13. FISHER LINEAR DISCRIMINANT ANALYSIS ## Overview Fisher Linear Discriminant Analysis (LDA) is a method used for dimensionality reduction and classification. It looks for the linear combinations of features that best separate two or more classes of objects or events. ## Key Concepts - **Linear Combinations**: LDA combines features linearly to form a new feature. - **Class Separation**: The goal of LDA is to maximize the distance between means of different classes and minimize the variances within each class. ## Steps in Fisher LDA 1. **Compute the Means**: Calculate mean vectors for each class. 2. **Compute the Within-Class Scatter Matrix**: Measure how much the data points within each class scatter around their respective mean. 3. **Compute the Between-Class Scatter Matrix**: Measure how much the class means scatter around the overall mean. 4. **Compute the Eigenvalues and Eigenvectors**: Solve the generalized eigenvalue problem for the matrices computed previously. 5. **Select the Top Components**: Choose the eigenvectors corresponding to the largest eigenvalues to form a new feature space. ## Applications Fisher LDA is widely used in: - Face recognition - Medical diagnosis - Financial prediction ## Conclusion Fisher Linear Discriminant Analysis is an effective and popular method for classification problems, particularly when dealing with high-dimensional datasets. Understanding its underlying principles is crucial for effective application in various fields. #################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 22 Context: # CHAPTER 2: DATA VISUALIZATION ## Introduction Data visualization is the graphical representation of information and data. 
By using visual elements like charts, graphs, and maps, data visualization tools provide an accessible way to see and understand trends, outliers, and patterns in data. ## Importance of Data Visualization 1. **Enhanced comprehension**: Visuals help convey complex data insights more efficiently. 2. **Faster decision-making**: Quick access to visual information allows for swift analysis. 3. **Pattern recognition**: Visual representation makes it easier to identify correlations and trends. ## Types of Data Visualizations - **Charts** - Bar Charts - Line Charts - Pie Charts - **Graphs** - Scatter Plots - Histograms - **Maps** - Heat Maps - Geographical Maps ### Table of Common Visualization Tools | Tool | Description | Use Case | |------------------|---------------------------|------------------------------| | Tableau | Data visualization software| Business intelligence | | Power BI | Business analytics service | Data analysis | | Google Data Studio| Free data visualization tool| Reporting and dashboards | ## Best Practices - **Keep it simple**: Avoid clutter to maintain clarity. - **Use appropriate colors**: Colors should enhance understanding, not distract. - **Label clearly**: Ensure all elements are well-labeled for easy understanding. ## Conclusion Effective data visualization is crucial for data-driven decision-making. By following best practices and utilizing the right tools, organizations can leverage data to gain insights and enhance performance. #################### File: our-upcebu-edu-ph-58438.txt Page: 1 Context: _For **GCash** transactions, make sure to enable mobile or email notifications. Attach proof of completed payment from GCash SMS or GCash Email with **InstaPay Trace Number**. (also found in your GCash Inbox, “Your Bank Transfer Status” message)_ _Incomplete information may lead to delays or the non-processing of your graduation application as your payment could not be verified._ … … See samples below for correct proof of payment. 
… [](https://our.upcebu.edu.ph/wp-content/uploads/2022/12/GCash-InstaPay-Trace.png) [](https://our.upcebu.edu.ph/wp-content/uploads/2022/02/Proof-of-Payment-Sample.png) 4. If consent is given, you may also submit (via the online form) your graduation photo for inclusion in the commencement exercises, both on campus and in the virtual online ceremony. _You may also submit your photo at a later date._ Click [here](https://our.upcebu.edu.ph/information/guide-to-the-commencement-exercises/) for the graduation photo guide. … 5. Submit the accomplished Application for Graduation Form, via the jotform portal. (OCS Staff will evaluate the student’s records and eligibility to graduate.) #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 62 Context: # Chapter 4. Looking and Finding **Figure A** (table garbled in extraction): it appears to trace a search for the word “horses” (W, positions 0–5) through the text “houses and horses and hearses” (T, positions 0–28), recording a count of character comparisons at each starting position. #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 154 Context: # Chapter 10. Words to Paragraphs Lorem ipsum dolor sit amet, consectetur adipiscing elit. Ut pretium, erat et tincidunt tincidunt, enim libero tincidunt nisi, eu accumsan ligula libero at nulla. Ut ut eros massa. Donec vestibulum, elit nec feugiat malesuada, eros orci commodo elit, a volutpat arcu nibh non libero. Nulla vitae neque mi. Duis scelerisque quam et velit varius, at hendrerit enim vestibulum. Aliquam erat volutpat. Sed dapibus, magna ac interdum lacinia, nisi dolor tincidunt leo, at bibendum velit tellus non felis.
## Formatting Guidelines - **Widows and Orphans**: In typesetting, a widow is a short last line of a paragraph that appears alone at the top of a page, while an orphan is the first line of a paragraph that appears alone at the bottom of a page. ### Example | Widows | Orphans | |----------------|---------------| | A widow (top) | An orphan (bottom) | ### Additional Notes - Ensure paragraphs are properly structured and balanced. - Maintain alignment and spacing for readability. - Use bullet points or number lists as needed. #### Conclusion Following these formatting guidelines can help improve text readability and presentation. Make sure to apply these principles in your writing for enhanced clarity and organization. #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 185 Context: # Further Reading ## Chapter 6 - **Fundamental Data Compression** Ida Mengyi Pu. Published by Butterworth-Heinemann (2006). ISBN 0750636130. - **The Fax Modem Sourcebook** Andrew Margolis. Published by Wiley (1995). ISBN 0471995726. - **Introduction to Data Compression** Khalid Sayood. Published by Morgan Kaufmann in The Morgan Kaufmann Series in Multimedia Information and Systems (fourth edition, 2012). ISBN 0124157963. ## Chapter 7 - **Python Programming for the Absolute Beginner** Mike Dawson. Published by Course Technology PTR (third edition, 2010). ISBN 1435455002. - **OCaml from the Very Beginning** John Whitington. Published by Coherent Press (2013). ISBN 0957671105. - **Seven Languages in Seven Weeks: A Pragmatic Guide to Learning Programming Languages** Bruce A. Tate. Published by Pragmatic Bookshelf (2010). ISBN 193435639X. ## Chapter 8 - **How to Identify Prints** Bamber Gascoigne. Published by Thames & Hudson (second edition, 2004). ISBN 0500248406. - **A History of Engraving and Etching** Arthur M. Hind. Published by Dover Publications (1963). ISBN 0486269507.
- **Prints and Printmaking: An Introduction to the History and Techniques** Anthony Griffiths. Published by University of California Press (1996). ISBN 0520072149. - **Digital Halftoning** Robert Ulichney. Published by The MIT Press (1987). ISBN 0262210096. #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 93 Context: # Chapter 6. Saving Space | Character | Decimal | Binary | |-----------|---------|----------| | a | 4 | 01001 | | f | 4 | 010000 | | c | 4 | 11011 | | u | 4 | 10101 | | i | 3 | 10100 | ``` 111000111100001111001011110110111100101000 100110011110010111001101011110111101111110 000000000100101110011001111111110011111100 111011101100000101110010101110111101111011 011101100011111011111001111101110010001101 011111001100110111111101110110101101010111 111011100011101010100101110101001110101001 111001000110101111110100100111011111001110 101100001000000000000101111011110000000111 111111100001110110001011001111001000110110 110011000000001000000100101001110110101001 101111111001111100011100011001010101111010 ``` 3. Encode the following fax image. There is no need to use zero-length white runs at the beginning of lines starting with a black pixel. ``` ABC ``` 4. Decode the following fax image to the same 37x15 grid. There are no zero-length white runs at the beginning of lines starting with a black pixel. ``` 000100100000100111101100000101000001110000 110000100000010111110000111011011001100011 111101110100111010011110111011011101111101 011111111101000000010100000111010000000001 000011110001110001111111111110000010011110 001110100001100110000110010101010011111110 110111000011101101111100010111000101111011 111010010101011001011111110111001111101001 ``` Image Analysis: I'm unable to provide the analysis of the attached visual content as requested. However, if you have specific questions or need assistance with a particular aspect related to the content, feel free to ask! 
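The fax-image exercises above rest on run-length coding: each scan line is stored as the lengths of its alternating white and black runs, with a zero-length white run when a line starts with a black pixel. The sketch below illustrates only that run-length step; the sample line and the helper names are invented, and the actual code table the chapter uses to compress the run lengths is not reproduced here:

```python
def encode_runs(line):
    """Turn a pixel string like '0011100' ('0' = white, '1' = black) into
    run lengths, always starting with a white run (possibly of length 0)."""
    runs, current, count = [], "0", 0
    for pixel in line:
        if pixel == current:
            count += 1
        else:
            runs.append(count)  # 0 if the line starts with a black pixel
            current, count = pixel, 1
    runs.append(count)
    return runs

def decode_runs(runs):
    """Inverse of encode_runs: rebuild the pixel string from run lengths."""
    out, pixel = [], "0"
    for r in runs:
        out.append(pixel * r)
        pixel = "1" if pixel == "0" else "0"
    return "".join(out)

line = "0001111000011000"
runs = encode_runs(line)
print(runs)                        # → [3, 4, 4, 2, 3]
assert decode_runs(runs) == line   # round-trips exactly
```

A real fax encoder would then replace each run length with a short variable-length code, so that common run lengths cost only a few bits.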
#################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 10 Context: # LEARNING AND INTUITION Baroque features or a more “dull” representation, whatever works. Some scientists have been asked to describe how they represent abstract ideas and they invariably seem to entertain some type of visual representation. A beautiful account of this in the case of mathematicians can be found in a marvelous book “XXX” (Hadamard). By building accurate visual representations of abstract ideas we create a database of knowledge in the unconscious. This collection of ideas forms the basis for what we call intuition. I often find myself listening to a talk and feeling uneasy about what is presented. The reason seems to be that the abstract idea I am trying to capture from the talk clashed with a similar idea that is already stored. This in turn can be a sign that I either misunderstood the idea before and need to update it, or that there is actually something wrong with what is being presented. In a similar way, I can easily detect that some idea is a small perturbation of what I already knew (I feel happily bored), or something entirely new (I feel intrigued and slightly frustrated). While the novice is continuously challenged and often feels overwhelmed, the more experienced researcher feels happily bored at least 90% of the time because the “new” idea was already in his/her database, which therefore needs no or very little updating. Somehow our unconscious mind can also manipulate existing abstract ideas into new ones. This is what we usually think of as creative thinking. One can stimulate this by seeding the mind with a problem. This is a conscious effort and is usually a combination of detailed mathematical derivations and building an intuitive picture or metaphor for the thing one is trying to understand.
If you focus enough time and energy on this process and walk home for lunch you’ll find that you’ll still be thinking about it in a much more vague fashion: you review and create visual representations of the problem. Then you get your mind off the problem altogether and when you walk back to work suddenly parts of the solution surface into consciousness. Somehow, your unconscious took over and kept working on your problem. The essence is that you created visual representations as the building blocks for the unconscious mind to work with. In any case, whatever the details of this process are (and I am no psychologist), I suspect that any good explanation should include both an intuitive part, including examples, metaphors and visualizations, and a precise mathematical part where every equation and derivation is properly explained. This then is the challenge I have set to myself. It will be your task to insist on understanding the abstract idea that is being conveyed and build your own personalized visual representations. I will try to assist in this process but it is ultimately you who will have to do the hard work. 
#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 197 Context: # Index - accented letter, 125 - acid wash, 104 - acid-resistant, 104 - advancement, 126 - Albrecht Dürer, 102 - algorithm, 41 - of deCasteljau, 20 - aliasing, 8 - ampersand, 16 - ANSI, 58 - antialiasing, 8 - ASCII, 31 - August Dvorak, 58 - backslash, 33 - backward compatibility, 35 - baseline, 126 - Bézier - Pierre, 17 - curve, 17 - billboard, 8 - bit, 28, 69 - bit pattern, 69 - bold, 23 - Braille, 38 - burn, 102 - burnishing, 104 - carriage return, 31 - Cartesian coordinate, 1 - Casteljau - Paul de, 17 - chain, of curves, 18 - character - accented, 33 - printable, 31 - checkerboard pattern, 100 - Chinese tone, 63 - circle - from Bézier curves, 23 - circular arc, 16 - Citroën, 17 - coated paper, 4 - composing stick, 142 - compression, 65 - lossy, 74 - computer program, 43 - condition, 44 - control point, 17 - coordinate, 1 - copper, 102 - counter, 44 - curve - continuous, 18 - discontinuous, 18 - flatness, 22 - join, 18 - smooth continuous, 18 - subdivision, 20 - Cyrillic, 33, 124 #################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 35 Context: 5.1 The Idea In a Nutshell ========================== To classify a new data item, you first look for the k nearest neighbors in feature space and assign it the same label as the majority of these neighbors. because 98 noisy dimensions have been added. This effect is detrimental to the kNN algorithm. Once again, it is very important to choose your initial representation with much care and preprocess the data before you apply the algorithm. In this case, preprocessing takes the form of “feature selection” on which a whole book in itself could be written. 
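The nutshell procedure just described, find the k nearest neighbors in feature space and assign the majority label, takes only a few lines. This is a minimal sketch with invented data, not code from the book:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    """Classify x_new by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x_new, axis=1)  # Euclidean distances
    nearest = np.argsort(dists)[:k]                  # indices of the k closest
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Tiny illustration: two well-separated clusters.
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([0.15, 0.1])))  # → 0
print(knn_predict(X, y, np.array([5.05, 5.0])))  # → 1
```

Note the chapter's warning applies directly here: if many noisy dimensions are appended to `X`, these Euclidean distances become uninformative, which is why feature selection matters before running kNN.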
#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 24 Context: # Chapter 1: Putting Marks on Paper Now, we can proceed to design a method to fill the shape. For each row of the image, we begin on the left and proceed rightward pixel-by-pixel. If we encounter a black dot, we remember and enter filling mode. In filling mode, we fill every dot black, until we hit another dot which is already black—then we leave filling mode. Seeing another already-black dot puts us back into filling mode, and so on. ## Example Images ![Image of House 1](image_link_1) In the image above, two lines have been highlighted. In the first, we enter the shape once at the side of the roof, fill across, and then exit it at the right-hand side of the roof. In the second, we fill a section, exit the shape when we hit the doorframe, enter it again at the other doorframe—filling again—and finally exit it. If we follow this procedure for the whole image, the house is filled as expected. ![Image of House 2](image_link_2) The image on the left shows the new dots in grey; that on the right the final image. Notice that the windows and door did not cause a problem for our method. We have now looked at the very basics of how to convert descriptions of shapes into patterns of dots suitable for a printer or screen. In the next chapter, we will consider the more complicated shapes. #################### File: our-upcebu-edu-ph-58438.txt Page: 1 Context: --- ##### Request for Substitution of Courses/Subjects ##### STEPS 1. Secure and fill out the [**Substitution Form**](https://our.upcebu.edu.ph/wp-content/uploads/2017/02/UPC-Substitution-20170526.pdf), in three copies. 2. Get endorsement from the Instructor of the Subject Required, your Program Adviser, the Department Chair/Program Coordinator of the Subject Required and the Department Chair/Program Coordinator of the Subject Taken. 3. 
Submit the signed Substitution Form to the [Office of the College Secretary](#SRE). 4. Get approval from your Dean. _Copy 1 – Office of the University Registrar_ _Copy 2 – Office of the College Secretary_ _Copy 3 – Student_ _Art. 354_. Every substitution of subjects must be based on at least one of the following: a. When a student is pursuing a curriculum that has been superseded by a new one and the substitution tends to bring the old curriculum in line with the new; b. Conflict of hours between a required subject and another required subject; or c. When the required subject is not given. _Art. 355_. Every petition for substitution: a. Must involve subjects within the same department, if possible; if not, the two subjects concerned must be allied to each other; b. Must be between subjects carrying the same number of units; and c. Must be recommended by the adviser and by the heads of departments concerned. _Art. 356_. All petitions for substitution must be submitted to the Office of the Dean concerned before 12 percent of regular class meetings have been held during the term. Any petition submitted thereafter shall be considered for the following semester. _Art. 358_. All applications for substitution shall be acted upon by the Dean concerned. In case the action of the Dean is adverse to the recommendation of the adviser and the Head of the Department concerned, the student may appeal to the Vice Chancellor for Academic Affairs, whose decision shall be final. \[as amended at 790th BOR Meeting on 1969 December 19\] _Note: Except for the GE subjects with permanent substitutions approved by the University Council, GE subjects cannot be substituted._ [Back to Top](#top) --- #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 95 Context: # Chapter 7 ## Doing Sums How do we calculate the answer to \(1 + 2 \times 3\)? In our heads, perhaps, or on paper.
But how do we decide which operation to do first (the \(+\) or the \(\times\))? Well, in mathematics, we have the convention that, in this situation, the multiplication goes first. So we may work as follows: \[ 1 + 2 \times 3 \] \[ \Rightarrow 1 + 6 \] \[ \Rightarrow 7 \] Something like \(1 + 2 \times 3\) is an example of a mathematical **expression**. (We have underlined the part of the expression being worked on at each stage.) Rewriting it stage by stage, making it smaller each time, until we reach a final answer, is called **evaluating** the expression. The result is a **value**, which is an expression that can be reduced no further: \(7\) is just \(7\). We could rewrite it as \(3 + 4\) or \(1 + 6\), of course, but we like each subsequent expression to be simpler than the last. Computer programs often involve these kinds of expressions, and indeed in some programming languages, the whole program is just one big expression. It would be simpler if we could represent such expressions in an unambiguous way, so that we don’t need to think about the rules for which operations happen in which order. (It’s simple in our example, but expressions in computer programs can be huge.) We can just add parentheses to the expression: \(1 + (2 \times 3)\). Now the rule for choosing what to do next can be stated more simply: evaluate a part of the expression which contains no parentheses first. #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 60 Context: # Chapter 4. Looking and Finding Return `false`. If we reach the end of the word without finding such a mismatch, we stop and return `true`, because the whole word must have matched.
We dispense with the success condition first – if `pw` is equal to `length W`, it means we have compared successfully positions 0 through to `pw - 1`, and so we can return `true`:

```plaintext
define compare pt pw
  if pw = length W then return true
```

Now, we must test the character at position `pt` in `T` and the character at position `pw` in `W` to see if they are equal. We access the individual characters by writing, for example, `T[pt]` where `T` is the text and `pt` the position:

```plaintext
define compare pt pw
  if pw = length W then return true
  if T[pt] = W[pw] then compare (pt + 1) (pw + 1)
```

If the two characters were equal, we continue by running `compare` with both text and word positions advanced by one. All that remains now is to return `false` if they were not equal:

```plaintext
define compare pt pw
  if pw = length W then return true
  if T[pt] = W[pw] then compare (pt + 1) (pw + 1)
  return false
```

This line will only be reached if the condition at line 3 was `false`. Here is the whole program in one place:

```plaintext
define compare pt pw
  if pw = length W then return true
  if T[pt] = W[pw] then compare (pt + 1) (pw + 1)
  return false

define search pt
  if pt <= length T - length W then
    if compare pt 0 then print pt
    search (pt + 1)
```

Our made-up language is not so dissimilar to some real computer languages. Some of the words are different, but the essential features are there. Let us have a more detailed look at the execution. #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 14 Context: (no text or image content was recovered from this page)
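For comparison, the same program can be run as Python. This is a close translation rather than the book's code: `compare` recurses exactly as above, while `search` is written as a loop collecting every match position (the original expresses it recursively):

```python
T = "houses and horses and hearses"  # the text, as in Figure A
W = "horses"                         # the word to find

def compare(pt, pw):
    if pw == len(W):                 # positions 0 .. pw-1 all matched: success
        return True
    if T[pt] == W[pw]:               # characters equal: advance both positions
        return compare(pt + 1, pw + 1)
    return False                     # mismatch

def search():
    # Try every starting position at which W could still fit inside T.
    return [pt for pt in range(len(T) - len(W) + 1) if compare(pt, 0)]

print(search())  # → [11]: "horses" begins at position 11
```

Counting characters confirms the result: positions 0 to 5 hold "houses", the space and "and" take positions 6 to 10, so "horses" starts at position 11.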
#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 52 Context: # Chapter 3. Storing Words ## Braille Patterns The top section contains all those combinations using the top six dots only. (There were originally only six dots in Braille.) All the others then follow, for a total of 256. The patterns are not assigned particular letters, because these vary for each language: a pattern might represent the letter A in Western languages and something entirely different in Japanese or Vietnamese. Empty circles are used in most Braille typefaces so that, in patterns with only a few black dots, the empty circles can be felt with the finger to help distinguish between different characters. This is particularly useful when dealing with 8-dot patterns. #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 91 Context: # Chapter 6. Saving Space ## Figure C | All Pixels | 1/2 Discarded | |-----------------|-----------------| | ![all pixels](image-url1) | ![1/2 discarded](image-url2) | | 3/4 Discarded | 7/8 Discarded | | ![3/4 discarded](image-url3) | ![7/8 discarded](image-url4) | ## Figure D | Original | "75% quality" – 19% | |-----------------|----------------------| | ![original](image-url5) | !["75% quality"](image-url6) | | "50% quality" – 11% | "25% quality" – 9% | | !["50% quality"](image-url7) | !["25% quality"](image-url8) | Image Analysis: Here's a comprehensive examination of the attached visual content, following the specified aspects: ### 1. **Localization and Attribution:** - **Image 1:** Positioned at the top left, labeled "all pixels". - **Image 2:** Positioned at the top right, labeled "1/2 discarded". - **Image 3:** Positioned at the bottom left, labeled "3/4 discarded". - **Image 4:** Positioned at the bottom right, labeled "7/8 discarded". 
- **Image 5:** Positioned at the middle left, labeled "original". - **Image 6:** Positioned at the middle right, labeled "’75% quality’ – 19%". - **Image 7:** Positioned at the bottom left, labeled "’50% quality’ – 11%". - **Image 8:** Positioned at the bottom right, labeled "’25% quality’ – 9%". ### 2. **Object Detection and Classification:** - **Objects Detected:** All images depict a camel. - **Classification:** Animals. - **Key Features:** Each camel appears in varying degrees of visual fidelity and pixelation in relation to the quality markers (all pixels, 1/2 discarded, etc.). ### 3. **Scene and Activity Analysis:** - **Scene Description:** The images portray a solitary camel against a minimalistic background. - **Main Actors:** The camel in each image. - **Actions:** The camel is stationary, observed from a side profile. ### 4. **Text Analysis:** - **Text Detected:** Quality descriptions (e.g., “75% quality”). - **Significance:** These descriptions indicate the level of detail retained or discarded in each image, highlighting how image quality decreases as pixels are removed. ### 5. **Diagram and Chart Analysis:** - *N/A* ### 6. **Product Analysis:** - *N/A* ### 7. **Anomaly Detection:** - **Anomalies Identified:** The reduction in visual quality might be perceived as unusual as it progressively alters the camel’s representation. - **Impact:** This reduction illustrates the effects of image compression on visual fidelity. ### 8. **Color Analysis:** - **Color Composition:** Images are monochromatic, showcasing shades of gray. - **Dominant Colors:** Various shades of gray, producing a uniform appearance. - **Impact on Perception:** The limited color palette emphasizes the changes in visual detail rather than vibrant imagery. ### 9. **Perspective and Composition:** - **Perspective:** Side view of the camel. - **Composition:** The camel occupies a central position in each image, highlighting its shape and form amidst the gray background. ### 10. 
**Contextual Significance:** - **Overall Context:** The images appear to demonstrate the effects of pixel reduction on image quality, potentially serving as an illustration for discussions on data compression. - **Contribution to Theme:** Helps convey principles of image quality and compression effects. ### 11. **Metadata Analysis:** - *N/A* ### 12. **Graph and Trend Analysis:** - *N/A* ### 13. **Graph Numbers:** - *N/A* ### **Additional Aspects:** - **Ablaufprozesse (Process Flows):** - Not directly depicted, but the progression of image quality can be interpreted as a flow of degradation. - **Prozessbeschreibungen (Process Descriptions):** - Image quality is progressively reduced by discarding a specific percentage of pixels, influencing the clarity of the camel's depiction. - **Typen Bezeichnung (Type Designations):** - Quality types are designated based on percentages of pixel retention (e.g., "1/2 discarded", "75% quality"). - **Trend and Interpretation:** - A clear trend shows that as the amount of discarded data increases, image quality perceptibly decreases. - **Tables:** - *N/A* This analysis provides a detailed examination based on your requested aspects.

#################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 15 Context: # 1.1. DATA REPRESENTATION Most datasets can be represented as a matrix, \( X = [X_{i,n}] \), with rows indexed by "attribute-index" \( i \) and columns indexed by "data-index" \( n \). The value \( X_{i,n} \) for attribute \( i \) and data-case \( n \) can be binary, real, discrete, etc., depending on what we measure. For instance, if we measure weight and color of 100 cars, the matrix \( X \) is \( 2 \times 100 \) dimensional and \( X_{1,20} = 20{,}684.57 \) is the weight of car nr. 20 in some units (a real value), while \( X_{2,20} = 2 \) is the color of car nr. 20 (say one of 6 predefined colors). Most datasets can be cast in this form (but not all).
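The attribute-by-data-case matrix described above can be sketched with numpy. This is an illustrative sketch with invented numbers, using `NaN` to stand in for the unobserved "?" entries mentioned later in the text.

```python
# An illustrative sketch (numbers invented) of the attribute-by-data-case
# matrix X described above, with NaN standing in for unobserved "?" entries.
import numpy as np

# 2 attributes (weight, colour code) x 5 cars
X = np.array([
    [1200.5, 950.0, 20684.57, np.nan, 1100.0],   # weight in some units
    [2.0,    5.0,  2.0,       1.0,    np.nan],   # colour: one of 6 codes
])

print(X.shape)                  # rows: attribute-index i, columns: data-index n
print(X[0, 2])                  # weight of car nr. 2 (0-based indexing)
print(np.isnan(X).any(axis=0))  # which cars have an unobserved entry
```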
For documents, we can give each distinct word of a prespecified vocabulary a number, and simply count how often a word was present. Say the word "book" is defined to have nr. 10,568 in the vocabulary; then \( X_{10568,5076} = 4 \) would mean: the word "book" appeared 4 times in document 5076. Sometimes the different data-cases do not have the same number of attributes. Consider searching the internet for images of cats. You'll retrieve a large variety of images, most with a different number of pixels. We can either try to resize the images to a common size, or we can simply leave those entries in the matrix empty. It may also occur that a certain entry is supposed to be there but couldn't be measured. For instance, if we run an optical character recognition system on a scanned document, some letters will not be recognized. We'll use a question mark "?" to indicate that that entry wasn't observed. It is very important to realize that there are many ways to represent data, and not all are equally suitable for analysis. By this I mean that in some representations the structure may be obvious, while in others it may become totally obscure. It is still there, but just harder to find. The algorithms that we will discuss are based on certain assumptions, such as "Humans and Ferraris can be separated with a line," see figure ??. While this may be true if we measure weight in kilograms and height in meters, it is no longer true if we decide to re-code these numbers into bit-strings. The structure is still in the data, but we would need a much more complex assumption to discover it. A lesson to be learned is thus to spend some time thinking about which representation makes the structure as obvious as possible, and to transform the data if necessary before applying standard algorithms. In the next section we'll discuss some standard preprocessing operations. It is often advisable to visualize the data before preprocessing and analyzing it.
This will often tell you if the structure is a good match for the algorithm you had in mind for further analysis. Chapter 2 will discuss some elementary visualization techniques.

#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 174 Context: # Solutions

## 1

### b)
```
(2 × 2) × 2 ⇒ 4 × 2 ⇒ 8
```

### c)
```
(2 × 3) + 4 ⇒ 6 + 4 ⇒ 10
```

## 2

### a)
```
x × x × y ⇒ 4 × 4 × 5 ⇒ 16 × 5 ⇒ 80
```
```
z × y + z ⇒ 100 × 5 + 100 ⇒ 500 + 100 ⇒ 600
```

### b)
```
z × z ⇒ 100 × 100 ⇒ 10000
```

#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 191 Context: # Problem 8.1 (blank grid template; the page contains only an empty grid for working the problem)

#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 165 Context: # Solutions So we have the new Bézier curves AEHJ and JIFD as before: ![Bézier Curves](link_to_image) ## 3 **With the even-odd rule:** - Square 1 - Square 2 - Square 3 - Square 4 **With the non-zero rule:** - Square 5 - Square 6 --- # Chapter 3 **1** 32-11-42-54-23-11-14-11-31-24-44-44-31-15-31-11-32-12. There are 18 characters in the message, and so 36 numbers to transmit (though in Polybius's system of torches, these would be done two at a time, so just 18 distinct actions).
We might use Z for the space character, since it doesn't appear often in normal text. We could use ZZZ for the end of message. #################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 23 Context: # Chapter 3 ## Learning This chapter is without question the most important one of the book. It concerns the core, almost philosophical question of what learning really is (and what it is not). If you want to remember one thing from this book you will find it here in this chapter. Ok, let’s start with an example. Alice has a rather strange ailment. She is not able to recognize objects by their visual appearance. At her home she is doing just fine: her mother explained Alice for every object in her house what is and how you use it. When she is home, she recognizes these objects (if they have not been moved too much), but when she enters a new environment she is lost. For example, if she enters a new meeting room she needs a long time to infer what the chairs and the table are in the room. She has been diagnosed with a severe case of "overfitting." What is the matter with Alice? Nothing is wrong with her memory because she remembers the objects once she has seen them. In fact, she has a fantastic memory. She remembers every detail of the objects she has seen. And every time she sees a new object she reasons that the object in front of her is surely not a chair because it doesn’t have all the features she has seen in earlier chairs. The problem is that Alice cannot generalize the information she has observed from one instance of a visual object category to other, yet unobserved members of the same category. The fact that Alice’s disease is so rare is understandable; there must have been a strong selection pressure against this disease. Imagine our ancestors walking through the savanna one million years ago. A lion appears on the scene. 
Ancestral Alice has seen lions before, but not this particular one, and it does not induce a fear response. Of course, she has no time to infer logically that this animal may be dangerous. Alice's contemporaries noticed that the animal was yellow-brown, had manes etc. and immediately un-

#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 5 Context: # Contents - **Preface** … v - **1 Putting Marks on Paper** … 1 - **2 Letter Forms** … 15 - **3 Storing Words** … 27 - **4 Looking and Finding** … 41 - **5 Typing it In** … 53 - **6 Saving Space** … 65 - **7 Doing Sums** … 81 - **8 Grey Areas** … 97 - **9 Our Typeface** … 123 - **10 Words to Paragraphs** … 135 - **Solutions** … 147 - **Further Reading** … 169 - **Templates** … 173 - **Colophon** … 181 - **Index** … 183

#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 135 Context: # Chapter 8. Grey Areas ## Problems Solutions on page 163. Show the 17 dither patterns generated from each of these grids of numbers.

1.
```
 1  9  3 11
13  5 15  7
 4 12  2 10
16  8 14  6
```
2.
```
 1  9 13  3
16  5  7 11
12  8  6 15
 4 14 10  2
```
3.
```
 2  6 11 15
 4  8  9 13
14 10  7  3
16 12  5  1
```

#################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 45 Context: # Chapter 7 ## The Perceptron We will now describe one of the simplest parametric classifiers: the **perceptron** and its cousin, the **logistic regression** classifier. However, despite its simplicity it should not be underestimated! It is the workhorse for most companies involved with some form of machine learning (perhaps tying with the **decision tree** classifier).
One could say that it represents the canonical parametric approach to classification where we believe that a straight line is sufficient to separate the two classes of interest. An example of this is given in Figure 7 where the assumption that the two classes can be separated by a line is clearly valid. However, this assumption need not always be true. Looking at Figure 8 we clearly observe that there is no straight line that will do the job for us. What can we do? Our first inclination is probably to try and fit a more complicated separation boundary. However, there is another trick that we will be using often in this book. Instead we can increase the dimensionality of the space by “measuring” more things of the data. The features can be highly nonlinear functions. The simplest choice may be to also measure \( \phi_k(X) = X^2_k \), for each attribute \( X_k \). But we may also measure cross-products such as \( \phi_{ij}(X) = X_i X_j, \; \forall i, j \). The latter will allow you to explicitly model correlations between attributes. For example, if \( X_i \) represents the presence (1) or absence (0) of the word “viagra” and similarly for \( X_j \) the presence/absence of the word “dysfunction”, then the cross product feature \( X_{ij} = X_i X_j \) lets you model the presence of both words simultaneously (which should be helpful in trying to find out what this document is about). We can add as many features as we like, adding another dimension for every new feature. In this higher dimensional space we can now be more confident in assuming that the data can be separated by a line. #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 46 Context: ``` # Chapter 3. Storing Words Shifting the paper up one line, so we may begin printing the next. The space character `32` moves space across, without printing anything. 
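The feature-expansion trick described in the perceptron excerpt above can be sketched in a few lines. This is an illustrative sketch with invented data and helper names, not code from the book: `expand` appends every product \(X_i X_j\) to the original attributes, and the XOR check shows why a cross-product feature can make a previously non-separable problem linear.

```python
# Illustrative sketch of the feature-expansion trick (names and data invented):
# append all products x_i * x_j to the original attributes, so that a linear
# separator in the bigger space can model interactions between attributes.
import numpy as np
from itertools import combinations_with_replacement

def expand(x):
    # original attributes plus every product x_i * x_j with i <= j
    cross = [x[i] * x[j] for i, j in combinations_with_replacement(range(len(x)), 2)]
    return np.concatenate([x, np.array(cross)])

# presence (1) / absence (0) of two words, e.g. "viagra" and "dysfunction":
# the last entry of the expanded vector fires only when both words occur.
print(expand(np.array([1.0, 1.0])))

# XOR labels are not linearly separable in (a, b) alone, but in the expanded
# space the linear function a + b - 2*(a*b) reproduces them exactly.
for (a, b), label in [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]:
    assert a + b - 2 * (a * b) == label
```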
Most of the other special codes are of historical interest only. We can work out the ASCII codes to represent a piece of text by looking at each character and finding its number in the table. Consider this quotation:

> "It's the oldest question of all, George.
> Who can spy on the spies?"
> — John Le Carré, *Tinker, Tailor, Soldier, Spy*

Notice that we do not have the acute accent for the `e` in Mr Le Carré's name, and we have to use two dashes to make what would normally be the long dash. There must be a Carriage Return, followed by a Line Feed, at the end of each line, to move the printing position to the beginning of the next line. We obtain a sequence of numbers beginning as follows (with 13, Carriage Return, and 10, Line Feed, at the end of each line):

| Char | ASCII |
|-------|-------|
| " | 34 |
| I | 73 |
| t | 116 |
| ' | 39 |
| s | 115 |
| space | 32 |
| t | 116 |
| h | 104 |
| e | 101 |

and so on, character by character, to the end of the quotation.
```

#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 142 Context: # Chapter 9. Our Typeface The diagram shows various alternative characters for "a" and "g" in the Zapfino typeface. You can see that some suit the word more than others. In particular, in the bottom-right example, the initial "a" clashes awfully with the next letter. Before computers, when type was set manually using little metal pieces arranged into rows, smeared in ink, and pressed onto paper, it was natural for a typeface to have slightly different designs for each size. A headline would have a subtly different character design from normal-sized text, and different again from the sort of tiny text used for footnotes. Part of this is due to the essential optical characteristics of small sizes: it is easier to read a small typeface if it has proportionally wider characters than its normal size, for example.
Part of it has to do with the physical characteristics of the ink and paper: ink spreads, and the way it spreads leads to the same metal shape showing differently at different sizes. When designing a typeface by computer, however, it is tempting to take a shortcut: design the typeface once, and then automatically scale it to whatever size is needed by simple arithmetic on the coordinates. Now we have any size available, and designing the typeface is cheaper. This shortcut was not available to the metal type manufacturer. However, many good modern computer typefaces still have multiple so-called optical sizes. They may have separate shapes for 5pt, 8pt, 10pt, 12pt, and 16pt, for example. Figure A on the opposite page shows a phrase printed in the Latin Modern typeface, at five different sizes. Below, the same phrase has been repeated, but scaled up to show the differences clearly. We have not yet looked at the other three faces of the Palatino typeface: the Bold, the Italic, and the Bold Italic. A bold typeface is one which is thicker, using more ink. The Italic has a different, sloping shape. They are used for emphasis. You can see both Bold and Italic used in various places in this book, but we don't use Bold Italic. The various shapes are designed to work comfortably together.

(figure: the word "Shape" shown at three different sizes)

#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 162 Context: # 3 For example:
```
| x | y |
|-------|-------|
| 0 | 0 |
| 2 | 2 |
| 4 | 4 |
| 6 | 6 |
| 8 | 8 |
| 10 | 10 |
| 12 | 12 |
| 14 | 14 |
| 16 | 16 |
| 18 | 18 |
| 20 | 20 |
```
```
| x | y |
|-------|-------|
| 0 | 20 |
| 2 | 18 |
| 4 | 16 |
| 6 | 14 |
| 8 | 12 |
| 10 | 10 |
| 12 | 8 |
| 14 | 6 |
| 16 | 4 |
| 18 | 2 |
| 20 | 0 |
```
# 4 For example: - `X` - `t` - `?` - `©` Image Analysis: Here's a detailed analysis based on the visual content provided: ### 1.
Localization and Attribution - **Image 1:** Graph on the left (depicts an increasing trend). - **Image 2:** Graph on the right (depicts a decreasing trend). - **Image 3:** Characters (text symbols) on the bottom of the page. ### 2. Object Detection and Classification - **Image 1 & 2:** Both are graphs. - **Key features of Graph 1:** Linear increase, plotted in a grid system. - **Key features of Graph 2:** Linear decrease, also plotted similarly. - **Image 3:** Contains various text characters. - **Key features:** Pixelated symbols including letters and a question mark, showcasing a digital or low-resolution style. ### 3. Scene and Activity Analysis - The images primarily depict static analytical content (graphs and text characters) without any obvious activities. - Main “actors” in this context are the graphs (representing data trends) and the characters (representing text symbols). ### 4. Text Analysis - **Graph Labels:** Axes labeled with "x" and "y," indicating a mathematical or statistical context. - **Characters:** The symbols present (x, t, ?, @) suggest a focus on text representation and digital formatting. ### 5. Diagram and Chart Analysis - **Graph 1:** Shows a steady increase from lower left to upper right. - **Graph 2:** Shows a steady decrease from upper left to lower right. - Both graphs demonstrate linear trends, indicating a relationship between x and y. ### 6. Product Analysis - No specific products are depicted; images focus on graphs and characters. ### 7. Anomaly Detection - No significant anomalies are present. The imagery follows expected formats for graphs and text symbols. ### 8. Color Analysis - **Graphs:** Monochromatic (black on white). - **Characters:** Also predominantly black on white. The simple color scheme emphasizes clarity. ### 9. Perspective and Composition - Both graphs are presented in a traditional 2D perspective, with a standard arrangement of axes. 
- Characters are centered and organized neatly, indicating a focus on clarity and legibility. ### 10. Contextual Significance - The graphs provide a visual representation of mathematical relationships, while the character collection may relate to exercises in typography or digital representation. - Both elements contribute educational insights, likely within a mathematics or computer science context. ### 11. Metadata Analysis - No metadata is available for analysis from the provided visual content. ### 12. Graph and Trend Analysis - **Graph 1 (increasing):** Shows a positive trend across the x-axis. - **Graph 2 (decreasing):** Shows a negative trend across the x-axis. - Both graphs represent clear and distinct linear relationships. ### 13. Graph Numbers - **Graph 1 Data Points:** - (0, 0), (2, 10), (4, 20), (6, 30), (8, 40), (10, 50) - **Graph 2 Data Points:** - (0, 20), (2, 16), (4, 12), (6, 8), (8, 4), (10, 0) ### Additional Aspects - **Ablaufprozesse (Process Flows):** Not applicable as no processes are depicted. - **Prozessbeschreibungen (Process Descriptions):** Not applicable; imagery does not depict processes. - **Typen Bezeichnung (Type Designations):** No specific type designations are referenced. - **Trend and Interpretation:** Clearly demonstrates linear relationships with positive and negative trends, possibly for instructional purposes. - **Tables:** No tables are present in the images. This analysis captures the key elements and interpretations of the visual content presented, emphasizing its educational and analytical significance. #################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 70 Context: # Chapter 11: Kernel K-Means and Spectral Clustering ## Introduction In this chapter, we explore the kernel K-means algorithm and spectral clustering methods. Both techniques are widely used in clustering applications where traditional methods may fall short. ## 1. 
Kernel K-Means Kernel K-means is an extension of the K-means clustering algorithm that applies a kernel function to map the input features into a higher-dimensional space. ### 1.1. Algorithm Steps 1. **Choose the number of clusters, K.** 2. **Initialize cluster centroids.** 3. **Assign each sample to the nearest centroid.** 4. **Update centroids based on the assigned samples.** 5. **Repeat steps 3 and 4 until convergence.** ### 1.2. Kernel Functions Common kernel functions include: | Kernel Type | Formula | |------------------|---------------------------------| | Linear | \(K(x, y) = x^T y\) | | Polynomial | \(K(x, y) = (x^T y + c)^d\) | | Radial Basis | \(K(x, y) = \exp(-\gamma \|x - y\|^2)\) | ## 2. Spectral Clustering Spectral clustering uses the eigenvalues of a similarity matrix to reduce dimensionality before applying a clustering algorithm. ### 2.1. Steps of Spectral Clustering 1. **Construct the similarity matrix.** 2. **Compute the Laplacian matrix.** 3. **Obtain the eigenvalues and eigenvectors.** 4. **Use the top eigenvectors for clustering.** ### 2.2. Applications - Image segmentation - Social network analysis - Anomaly detection ## Conclusion Kernel K-means and spectral clustering are powerful methods for clustering in complex datasets. Understanding their mechanics and applications can enhance your machine learning repertoire. ## References - Bishop, C. M. (2006). **Pattern Recognition and Machine Learning**. - Ng, A. Y., Jordan, M. I., & Weiss, Y. (2001). **On spectral clustering: Analysis and an algorithm**. #################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 11 Context: Many people may find this somewhat experimental way to introduce students to new topics counter-productive. Undoubtedly for many it will be. 
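The spectral-clustering steps listed in the kernel-methods excerpt above can be sketched with numpy. This is an illustrative sketch with invented data, not code from either book: Gaussian similarity, the unnormalised Laplacian, and a sign split on the second-smallest eigenvector stand in for the final clustering step.

```python
# A minimal numpy sketch of the spectral-clustering steps above (Gaussian
# similarity, unnormalised Laplacian, 2 clusters). Data is invented: two
# well-separated blobs of 10 points each.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (10, 2)),
               rng.normal(5.0, 0.1, (10, 2))])

# 1. similarity matrix (Gaussian / RBF, gamma = 1)
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
S = np.exp(-d2)

# 2. unnormalised Laplacian
L = np.diag(S.sum(axis=1)) - S

# 3. eigendecomposition; eigh returns eigenvalues in ascending order
vals, vecs = np.linalg.eigh(L)

# 4. "cluster" using the second-smallest (Fiedler) eigenvector: for two
# well-separated groups its sign already splits the data.
labels = (vecs[:, 1] > 0).astype(int)
print(labels)
```

For more than two clusters one would keep several of the top eigenvectors and run K-means on them, as in step 4 of the chapter's recipe.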
If you feel under-challenged and become bored I recommend you move on to the more advanced text-books of which there are many excellent samples on the market (for a list see [books](#)). But I hope that for most beginning students this intuitive style of writing may help to gain a deeper understanding of the ideas that I will present in the following. Above all, have fun! #################### File: our-upcebu-edu-ph-58438.txt Page: 1 Context: --- ##### Apply for Graduation ###### Deadline for Filing Application for Graduation | **as of the end of the** | **Deadline** | | ----------------------------------- | ---------------------------- | | MBA First Trimester AY 2022-2023 | 2022 August 26, Friday | | M.Ed. First Trimester AY 2022-2023 | 2022 September 23, Friday | | First Semester AY 2022-2023 | 2022 September 19, Monday | | MBA Second Trimester AY 2022-2023 | 2022 December 2, Friday | | M.Ed. Second Trimester AY 2022-2023 | 2023 January 20, Friday | | **Second Semester AY 2022-2023** | **2023 February 27, Monday** | | MBA Third Trimester AY 2022-2023 | 2023 April 3, Monday | | M.Ed. Third Trimester AY 2022-2023 | 2023 May 26, Friday | | Midyear Term AY 2022-2023 | 2023 July 12, Wednesday | ##### STEPS #################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 64 Context: # 10.1 Kernel Ridge Regression We now replace all data-cases with their feature vector: \( x_i \rightarrow \Phi(x_i) \). In this case the number of dimensions can be much higher, or even infinitely higher, than the number of data-cases. There is a neat trick that allows us to perform the inverse above in smallest space of the two possibilities, either the dimension of the feature space or the number of data-cases. 
The trick is given by the following identity:

\[ (P^{-1} + B^T R^{-1} B)^{-1} B^T R^{-1} = P B^T (B P B^T + R)^{-1} \tag{10.4} \]

Now note that if \( B \) is not square, the inverse is performed in spaces of different dimensionality on the two sides. To apply this to our case we define the \( d \times n \) matrix \( \Phi = [\Phi(x_1), \ldots, \Phi(x_n)] \) and the vector \( \mathbf{y} = [y_1, \ldots, y_n]^T \). The solution is then given by,

\[ w = (\lambda I_d + \Phi \Phi^T)^{-1} \Phi \mathbf{y} = \Phi (\Phi^T \Phi + \lambda I_n)^{-1} \mathbf{y} \tag{10.5} \]

This equation can be rewritten as \( w = \sum_i \alpha_i \Phi(x_i) \) with \( \alpha = (\Phi^T \Phi + \lambda I_n)^{-1} \mathbf{y} \). This is an equation that will be a recurrent theme, and it can be interpreted as: the solution \( w \) must lie in the span of the data-cases, even if the dimensionality of the feature space is much larger than the number of data-cases. This seems intuitively clear, since the algorithm is linear in feature space.

We finally need to show that we never actually need access to the feature vectors, which could be infinitely long (which would be rather impractical). What we need in practice is the predicted value for a new test point, \( x \). This is computed by projecting it onto the solution \( w \),

\[ y(x) = w^T \Phi(x) = \mathbf{y}^T (\Phi^T \Phi + \lambda I_n)^{-1} \Phi^T \Phi(x) = \mathbf{y}^T (K + \lambda I_n)^{-1} \kappa(x) \tag{10.6} \]

where \( K_{ij} = K(x_i, x_j) = \Phi(x_i)^T \Phi(x_j) \) and \( \kappa_i(x) = K(x_i, x) \). The important message here is of course that we only need access to the kernel \( K \).

We can now add a bias to the whole story by adding one more, constant feature to \( \Phi \): \( \Phi_0 = 1 \). The value of \( w_0 \) then represents the bias, since

\[ w^T \Phi(x) = \sum_i w_i \Phi_i(x) + w_0 \tag{10.7} \]

Hence, the story goes through unchanged.
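The derivation above can be sketched numerically in a few lines. This is an illustrative sketch, not the book's code: the RBF kernel choice and the toy sine data are invented, but the computation is exactly the dual form \( \alpha = (K + \lambda I_n)^{-1} \mathbf{y} \) with prediction \( y(x) = \kappa(x)^T \alpha \), never touching explicit feature vectors.

```python
# A numpy sketch of kernel ridge regression as derived above:
# alpha = (K + lambda I)^-1 y, and prediction y(x) = kappa(x)^T alpha.
# The RBF kernel and the toy sine data are invented for illustration.
import numpy as np

def rbf(A, B, gamma=1.0):
    # K[i, j] = exp(-gamma * ||A_i - B_j||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
Xtr = rng.uniform(-3.0, 3.0, (30, 1))
ytr = np.sin(Xtr[:, 0])

lam = 1e-3
K = rbf(Xtr, Xtr)
alpha = np.linalg.solve(K + lam * np.eye(len(Xtr)), ytr)   # dual coefficients

Xte = np.array([[0.5]])
pred = rbf(Xte, Xtr) @ alpha       # kappa(x)^T alpha, never the features
print(pred[0], np.sin(0.5))
```

Note that only kernel evaluations between data-cases appear, which is the whole point of the kernel trick.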
#################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 73 Context: # 12.1 CENTERING DATA IN FEATURE SPACE

The coefficients \( \alpha^a \) can be computed efficiently in a space of dimension \( N \) (and not \( d \)) as follows,

\[ x_k^T C u = \lambda x_k^T u \;\Rightarrow\; x_k^T \frac{1}{N} \sum_i x_i x_i^T \sum_j \alpha_j x_j = \lambda x_k^T \sum_j \alpha_j x_j \;\Rightarrow\; \frac{1}{N} \sum_{i,j} \alpha_j [x_k^T x_i][x_i^T x_j] = \lambda \sum_j \alpha_j [x_k^T x_j] \tag{12.7} \]

We now rename the matrix \( [x_i^T x_j] = K_{ij} \) to arrive at,

\[ K^2 \alpha^a = N \lambda_a K \alpha^a \;\Rightarrow\; K \alpha^a = \tilde{\lambda}_a \alpha^a \quad \text{with} \quad \tilde{\lambda}_a = N \lambda_a \tag{12.8} \]

So, we have derived an eigenvalue equation for \( \alpha \), which in turn completely determines the eigenvectors \( u \). By requiring that \( u \) is normalized we find,

\[ u^T u = 1 \;\Rightarrow\; \sum_{i,j} \alpha_i^a \alpha_j^a [x_i^T x_j] = \alpha^{aT} K \alpha^a = \tilde{\lambda}_a \alpha^{aT} \alpha^a = 1 \;\Rightarrow\; \|\alpha^a\| = \frac{1}{\sqrt{N \lambda_a}} \tag{12.9} \]

Finally, when we receive a new data-case \( t \) and we would like to compute its projections onto the new reduced space, we compute,

\[ u^T t = \sum_i \alpha_i^a x_i^T t = \sum_i \alpha_i^a K(x_i, t) \tag{12.10} \]

This equation should look familiar; it is central to most kernel methods. Obviously, the whole exposition was set up so that in the end we only needed the matrix \( K \) to do our calculations. This implies that we are now ready to kernelize the procedure by replacing \( x_i \to \Phi(x_i) \) and defining \( K_{ij} = \Phi(x_i)^T \Phi(x_j) \).

## 12.1 Centering Data in Feature Space

It is in fact very difficult to explicitly center the data in feature space. But we know that the final algorithm only depends on the kernel matrix, so if we can center the kernel matrix we are done as well. A kernel matrix is given by \( K_{ij} = \Phi(x_i)^T \Phi(x_j) \).
We now center the features using, \[ \Phi_i = \Phi - \frac{1}{N} \sum_k \Phi_k (12.11) \] #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 171 Context: Solutions ============================= | Code | Length | Colour | |-------------|--------|--------| | 00010110 | 37 | White | | 0000111 | 1 | White | | 00011 | 7 | Black | | 1111 | 7 | White | | 00010 | 3 | Black | | 1100 | 5 | White | | 00001 | 9 | Black | | 1000 | 3 | White | So we have: ``` 0001011011111100111100101011110010100 01011000010011110110011110111011000101 01100011010011101110010110000101101101 10011111001111111101110101111000100101 111001110000000000000101000010111010111 ``` The codes are: | Code | Length | Colour | Code | Length | Colour | |----------|--------|---------|---------|--------|---------| | 00010110 | 37 | White | 0000100 | 9 | Black | | 0000111 | 1 | White | 0111 | 2 | White | | 00011 | 7 | Black | 11 | 2 | Black | | 1111 | 7 | White | 1100 | 5 | White | | 00010 | 3 | Black | 10 | 3 | Black | | 1000 | 3 | White | 1100 | 5 | White | | 00001 | 9 | Black | 11 | 2 | White | | 00000010 | 9 | Black | 0001111 | 1 | White | | 11 | 2 | Black | 1100 | 5 | White | | 10 | 3 | Black | 11 | 2 | Black | #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 92 Context: # Chapter 6. Saving Space ## Problems **Solutions on page 154.** 1. Count the frequencies of the characters in this piece of text and assign them to the Huffman codes, filling in the following table. Then encode the text up to "more lightly.": > 'I have a theory which I suspect is rather immoral,' Smiley went on, more lightly. 'Each of us has only a quantum of compassion. That if we lavish our concern on every stray cat, we never get to the centre of things.' 
| Letter | Frequency | Code | |--------|-----------|--------| | a | 111 | 11000 | | b | 100 | 11011 | | c | 101 | 11010 | | d | 0111 | 11001 | | e | 0110 | 01011 | | h | 0100 | 01001 | | i | 0011 | 010000 | | m | 0010 | 010010 | | n | 0000 | 010011 | | o | 1101 | 011001 | | p | 1010 | 011000 | | r | 0001 | 010110 | | s | 1100 | 011010 | | t | 1011 | 011110 | | u | 11000 | 011111 | | y | 11001 | 011101 | 2. Consider the following frequency table and text. Decode it. | Letter | Frequency | Code | |--------|-----------|--------| | space | 20 | 111 | | e | 12 | 100 | | t | 9 | 111 | | h | 7 | 011 | | o | 7 | 0110 | | r | 5 | 0011 | | m | 6 | 0100 | | p | 1 | 010111 | | b | 1 | 010000 | | s | 2 | 00011 | | d | 2 | 11001 | #################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 4 Context: ``` # CONTENTS 7.1 A Different Cost Function: Logistic Regression . . . . . . . . 37 7.2 The Idea in a Nutshell . . . . . . . . . . . . . . . . . . . . 38 # 8 Support Vector Machines . . . . . . . . . . . . . . . . . 39 8.1 The Non-Separable Case . . . . . . . . . . . . . . . . . 43 # 9 Support Vector Regression . . . . . . . . . . . . . . . . 47 # 10 Kernel Ridge Regression . . . . . . . . . . . . . . . . 51 10.1 Kernel Ridge Regression . . . . . . . . . . . . . . . . 52 10.2 An Alternative Derivation . . . . . . . . . . . . . . . . 53 # 11 Kernel K-means and Spectral Clustering . . . . . . . . . . 55 # 12 Kernel Principal Components Analysis . . . . . . . . . . 59 12.1 Centering Data in Feature Space . . . . . . . . . . . . 61 # 13 Fisher Linear Discriminant Analysis . . . . . . . . . . 63 13.1 Kernel Fisher LDA . . . . . . . . . . . . . . . . . . . 66 13.2 A Constrained Convex Programming Formulation of FDA . . 68 # 14 Kernel Canonical Correlation Analysis . . . . . . . . . . 69 14.1 Kernel CCA . . . . . . . . . . . . . . . . . . . . . . . 71 # A Essentials of Convex Optimization . . . . . . . . . . . 
73 A.1 Lagrangians and All That . . . . . . . . . . . . . . . . 73 # B Kernel Design . . . . . . . . . . . . . . . . . . . . 77 B.1 Polynomials Kernels . . . . . . . . . . . . . . . . . . 77 B.2 All Subsets Kernel . . . . . . . . . . . . . . . . . . 78 B.3 The Gaussian Kernel . . . . . . . . . . . . . . . . . . 79 ``` #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 74 Context: # Chapter 5. Typing it In It is striking how similar the keyboards of these early typewriters are to those we use every day. Here is a keyboard from the early days of computing. It belongs to the UNIVAC system built by the Eckert-Mauchly Computer Corporation in 1953: ![UNIVAC Keyboard](image-url-univac) By the 1980s, computer keyboards were almost identical in layout to today's, just rather larger and more solid. Here is an IBM Model M Keyboard from 1985: ![IBM Model M Keyboard](image-url-ibm) The author’s own keyboard (Apple, 2015), with which this book was typed, is much the same, but very much thinner and lighter. Laptop keyboards must be smaller and thinner still: ![Author's Keyboard](image-url-author) Image Analysis: Here’s a comprehensive analysis of the visual content based on your outlined aspects: ### 1. Localization and Attribution: - **Image 1 (Top)**: An early keyboard from the UNIVAC system, 1953. - **Image 2 (Middle)**: An IBM Model M Keyboard from 1985. - **Image 3 (Bottom)**: The author's own keyboard, Apple, 2015. ### 2. Object Detection and Classification: - **Image 1**: Early computer keyboard. - **Key Features**: Large keys, mechanical design typical of early computing. - **Image 2**: IBM Model M Keyboard. - **Key Features**: Similar layout to modern keyboards, larger size, distinct arrangement of keys. - **Image 3**: Modern Apple keyboard. - **Key Features**: Thinner profile, lighter keys, sleek design. ### 3. 
Scene and Activity Analysis: - Each image portrays a different keyboard design through time. - **Image 1**: Depicts a historical keyboard likely in use during computing operations. - **Image 2**: Represents the transition to more modern keyboard layouts. - **Image 3**: Highlights advancements in keyboard design emphasizing portability and ergonomics. ### 4. Text Analysis: - Text accompanying the images provides context for the evolution of keyboard design over time. - It establishes a narrative connecting early computing to contemporary technology. ### 5. Diagram and Chart Analysis: - There are no diagrams or charts present in the images. ### 6. Product Analysis: - **Image 1**: UNIVAC Keyboard. - **Materials**: Metal and plastic, traditional keyboard mechanisms. - **Image 2**: IBM Model M. - **Materials**: Plastic housing with mechanical keys. - **Image 3**: Apple Keyboard. - **Materials**: Plastic, likely incorporating newer technology for key switches. ### 7. Anomaly Detection: - No significant anomalies detected; the images follow a logical sequence of keyboard evolution. ### 8. Color Analysis: - **Image 1**: Dark gray tones typical of older machines. - **Image 2**: Light gray with a more industrial feel. - **Image 3**: Sleek, minimalist design in light colors, reflecting modern aesthetics. ### 9. Perspective and Composition: - Perspective is straightforward, focusing on the keyboards in a way that allows for easy comparison. - Composition is clean, with each keyboard centered, drawing attention to their designs. ### 10. Contextual Significance: - The images contribute to an understanding of technological advances in computing and the design of user interfaces over time. ### 11. Metadata Analysis: - No metadata is available for review in the images. ### 12. Graph and Trend Analysis: - No graphs present in the images. ### 13. Graph Numbers: - Not applicable as no graphs are included. 
### Additional Aspects:
- **Ablaufprozesse (Process Flows)**: Not depicted in the images.
- **Prozessbeschreibungen (Process Descriptions)**: The images collectively describe the evolution of keyboard designs.
- **Typen Bezeichnung (Type Designations)**: Each type of keyboard is categorized by era and manufacturer.
- **Trend and Interpretation**: A clear trend of miniaturization and ergonomics in keyboard design.
- **Tables**: No tables are present in the images.

This structured analysis encapsulates the essential elements of the visual content as requested.

#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 4 Context: # COHERENT PRESS Cambridge

Published in the United Kingdom by Coherent Press, Cambridge © Coherent Press 2016. This publication is in copyright. Subject to statutory exception no reproduction of any part may take place without the written permission of Coherent Press. First published March 2016. A catalogue record for this book is available from the British Library. **ISBN** 978-0-9576711-2-6 Paperback

## by the same author
- PDF Explained (O'Reilly, 2012)
- OCaml from the Very Beginning (Coherent, 2013)
- More OCaml: Algorithms, Methods & Diversions (Coherent, 2014)

#################### File: our-upcebu-edu-ph-58438.txt Page: 1 Context: ### **STEPS**

##### Step 1\. Pre-enlistment

1. **Pre-enlist for your classes** in the next term at ****. Here is a [Guide on How to Pre-enlist in SAIS](https://our.upcebu.edu.ph/wp-content/uploads/2020/11/SAIS-Student-Self-Service.pdf).
2. Follow your [**Program Checklist and Study Plan**](https://our.upcebu.edu.ph/information/collegesandprograms/).
3. [Contact your college](https://our.upcebu.edu.ph/flowsandforms/#SRE) if you are unable to pre-enlist.
4. You may still have tagged ineligibilities (Accounting, Library, OSA, OUR) in SAIS.
Make sure to comply with these ineligibilities or to get in touch with the concerned office in order to be cleared for pre-enlistment.

##### Step 2\. Advisement

Students who were not able to pre-enlist during the pre-enlistment period should pre-enlist during the registration period before advisement. Please check with your College/Department/Program if Advising will be done online or face-to-face.

1. Update your Program Checklist with your most recent grades.
2. If F2F, generate and print your **[UP Form 5A (List of Enlisted Classes)](#form5)** in SAIS, under Main Menu > Reporting Tools > BI Publisher > Query Report Viewer > “UPCEBFORM5A”. The SAIS interface looks like [this](https://our.upcebu.edu.ph/wp-content/uploads/2020/11/SAIS-Student-Query-Report-Viewer.jpg). Present this to your Program Adviser along with your updated Program Checklist. If online, check with your College/Department/Program on communicating with your Program Adviser.
3. Your Program Adviser will check your enlisted classes and tag you as “Advised”. Your Adviser will make the necessary changes in your schedule, as needed.

##### STEP 3\. Scholarships and/or Tuition Discount

Your scholarships and tuition discounts should be reflected in SAIS.

#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 149 Context: # Chapter 10

## Words to Paragraphs

We have learned how to design individual characters of a typeface using lines and curves, and how to combine them into lines. Now we must combine the lines into paragraphs, and the paragraphs into pages. Look at the following two paragraphs from Franz Kafka’s *Metamorphosis*:

> One morning, when Gregor Samsa woke from troubled dreams, he found himself transformed in his bed into a horrible vermin. He lay on his armour-like back, and if he lifted his head a little he could see his brown belly, slightly domed and divided by arches into stiff sections.
The bedding was hardly able to cover it and seemed ready to slide off any moment. His many legs, pitifully thin compared with the size of the rest of him, waved about helplessly as he looked.
>
> “What’s happened to me?” he thought. It wasn’t a dream. His room, a proper human room although a little too small, lay peacefully between its four familiar walls. A collection of textile samples lay spread out on the table — Samsa was a travelling salesman — and above it there hung a picture that he had recently cut out of an illustrated magazine and housed in a nice, gilded frame. It showed a lady fitted out with a fur hat and fur boa who sat upright, raising a heavy fur muff that covered the whole of her lower arm towards the viewer.

#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 190 Context: # Templates

## Problem 2.1

![Problem 2.1](image_url)

[Blank template grid for Problem 2.1]

#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 123 Context: # Chapter 8: Grey Areas

Figure 1: Halftone of the Steinway Hall, printed in the *Daily Graphic*, December 2nd, 1873.

Image Analysis: ### Analysis of the Visual Content: 1. **Localization and Attribution:** - **Image Number:** Image 1 - **Location:** Centered on the page with caption details below. 2.
**Object Detection and Classification:** - **Objects Detected:** - Building (Steinway Hall) - Advertisements (various categories related to holiday attractions and retail) - **Key Features:** - The building has multiple windows and a textured facade. - Advertisements display services and products (toys, dolls, lamps). 3. **Scene and Activity Analysis:** - **Entire Scene Description:** - The image displays a historical architectural structure with surrounding advertisements indicating holiday attractions. - **Main Actors and Actions:** - While there are no people shown, the advertisements suggest potential activities such as shopping and attending performances. 4. **Text Analysis:** - **Extracted Text:** Various advertisements including "HOLIDAY ATTRACTIONS," "Toys Dolls," and "LORD & TAYLOR." - **Significance:** - Highlights the promotional culture of the time, indicating important retail periods and societal interests. 5. **Diagram and Chart Analysis:** - **Not applicable** as no diagrams or charts are present in the image. 6. **Product Analysis:** - **Depicted Products:** - Toys, dolls, and lamps mentioned in the ads. - **Main Features:** - Toys and dolls suggest a focus on children's gifts, lamps for home lighting. - **Visual Differences:** - No visual representation of the products; purely text-based descriptions. 7. **Anomaly Detection:** - **Possible Anomalies:** - Lack of modern advertising methods; all advertisements are text-based. - **Impact:** - Reflects historical advertising practices which may seem outdated by today’s standards. 8. **Color Analysis:** - **Color Composition:** - Predominantly monochrome (black and white). - **Impact on Perception:** - Creates a historical and nostalgic feel, typical of 19th-century photographs. 9. **Perspective and Composition:** - **Perspective:** - Straight-on view of the building, presenting a frontal perspective. 
- **Composition Analysis:** - Building centrally positioned with ads arranged around it; draws viewer’s attention to the structure as well as the context of advertisements. 10. **Contextual Significance:** - **Overall Significance:** - The image contextualizes the historical significance of Steinway Hall and reflects societal values through its advertisements, emphasizing retail culture. 11. **Metadata Analysis:** - **Not applicable** as metadata details are not provided in the image. 12. **Graph and Trend Analysis:** - **Not applicable**; no graphs are present. 13. **Graph Numbers:** - **Not applicable** as there are no graphs included. ### Additional Aspects: - **Ablaufprozesse (Process Flows):** - **Not applicable**; no process flows depicted. - **Prozessbeschreibungen (Process Descriptions):** - **Not applicable**; no descriptions of processes shown. - **Typen Bezeichnung (Type Designations):** - Advertisement types are specified but not categorized within the image itself. - **Trend and Interpretation:** - Indicates a trend towards holiday shopping and community engagement in leisure activities during the season. - **Tables:** - **Not applicable** as no tables are presented. This historical image provides insights into architectural design, social practices, and the evolution of advertising from the 19th century. #################### File: our-upcebu-edu-ph-58438.txt Page: 1 Context: 1. Secure and fill out [**FORM 26A, 13C Permit for Removal of Incomplete/4.0**](https://our.upcebu.edu.ph/wp-content/uploads/2022/06/UPC-FORM-26A-13C-Permit-for-Removal-of-Incomplete-4.0-20210831-Fillable.pdf). _It is recommended to use the_ online fillable version _of both forms._ _Both forms are to be filled up for each and every course requiring removal or completion._ 2. Submit the forms to your college or to the [Office of the College Secretary](#SRE), for the requisite checking. 
Your college will then route the forms for endorsement and approval by the designated signatories. _Note: For the College of Science, please submit your forms directly to your department/program clerk._ 3. When your FORM 26A has been approved, then you may now take the removal exam or submit your completion requirements. 4. Your instructor will forward the graded FORM 13C to the Office of the College Secretary of the instructor’s college, for the signature of the Dean. (FORM 26A attached) 5. The approved FORM 26A and the graded and signed FORM 13C will be forwarded to the Office of the College Secretary of the student’s college for encoding in SAIS and filing. **A student must be officially enrolled in the semester when the completion or removal is to be taken or complied with.** Students who are not enrolled in any subject but who want to complete an “INC” or remove a grade of “4”, must enroll for [residence](https://our.upcebu.edu.ph/information/registration-notes/#Residence). _Students on Leave of Absence (LOA) are not considered officially enrolled._ #################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 79 Context: # 13.2 A Constrained Convex Programming Formulation of FDA The eigenvalue equation. This scales as \( \mathcal{O}(n^3) \) which is certainly expensive for many datasets. More efficient optimization schemes solving a slightly different problem and based on efficient quadratic programs exist in the literature. Projections of new test points into the solution space can be computed by: \[ w^T \phi(x) = \sum_{i} \alpha_i K(x_i, x) \tag{13.19} \] as usual. In order to classify the test point we still need to divide the space into regions which belong to one class. 
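Once training and test points are projected to one dimension, classification needs only per-class statistics. A minimal sketch, assuming class means and variances are estimated from labeled training projections; the function name and sample data below are illustrative, not from the text:

```python
import numpy as np

# Nearest-class assignment by Mahalanobis distance in the projected 1-d
# space: d(z, mu_c) = (z - mu_c)^2 / sigma_c^2, with per-class mean and
# variance estimated from labeled training projections (an assumption
# made here for illustration).
def classify_1d(z, train_z, train_y):
    classes = np.unique(train_y)
    mu = {c: train_z[train_y == c].mean() for c in classes}
    var = {c: train_z[train_y == c].var() for c in classes}
    d = {c: (z - mu[c]) ** 2 / var[c] for c in classes}
    return min(d, key=d.get)   # class with the smallest distance

# Toy 1-d projections for two well-separated classes.
train_z = np.array([-1.2, -0.8, -1.0, 0.9, 1.1, 1.3])
train_y = np.array([0, 0, 0, 1, 1, 1])
```

Any other classifier trained on the 1-d projections would serve equally well, as the text notes.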
The easiest possibility is to pick the cluster with the smallest Mahalanobis distance:

\[
d(x, \mu^c) = \frac{(x - \mu^c)^2}{(\sigma^c)^2}
\]

where \( \mu^c \) and \( \sigma^c \) represent the class mean and standard deviation in the 1-d projected space respectively. Alternatively, one could train any classifier in the 1-d subspace. One very important issue that we did not pay attention to is regularization. Clearly, as it stands the kernel machine will overfit. To regularize we can add a term to the denominator:

\[
S_W \rightarrow S_W + \beta I \tag{13.20}
\]

Adding a diagonal term to this matrix makes sure that very small eigenvalues are bounded away from zero, which improves numerical stability in computing the inverse. If we write the Lagrangian formulation where we maximize a constrained quadratic form in \( \alpha \), the extra term appears as a penalty proportional to \( \|\alpha\|^2 \), which acts as a weight decay term, favoring smaller values of \( \alpha \) over larger ones. Fortunately, the optimization problem has exactly the same form in the regularized case.

#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 11 Context: # Acknowledgments

The word list on pg 67 is from *The Reading Teacher's Book of Lists*, Fourth Edition (Fry, Kress & Fountoukidis), Prentice Hall, 2000. The literary quotations used as example texts in Chapter 3 and in the problems in Chapter 6 are from John le Carré's *Tinker Tailor Soldier Spy*. The text in Chapter 6 is from Franz Kafka's *The Trial*. Likewise in Chapter 10 from his *Metamorphosis*. The hyphenation examples in Chapter 10 are from *Micro-typographic extensions to the TeX typesetting system*, the PhD thesis of Hàn Thế Thành, Faculty of Informatics, Masaryk University, Brno, October 2000. The cover image shows a Page Composer, courtesy of the United States Patent Office.
The drawing of French Curves on page 17 was modified from one created by Joshua Carter. The tables on pages 35–39 are reprinted by kind permission of the Unicode Consortium. Unicode is a registered trademark of Unicode, Inc. in the United States and other countries. The Easymatic patents on pages 54, 55, 56, and 57 were provided by the United States Patent and Trademark Office. The picture of a UNIVAC keyboard on page 60 appears courtesy of the Retro-Computing Society of Rhode Island. The picture of an IBM Model M keyboard on page 63 was taken by Sal Cangialosi. The woodblock print on page 101 is *Der Formschneider* (The Blockcutter), from a Book of Trades printed in 1568; it is in the British Museum. The picture on page 102 is a detail of the engraving *Der Kreutzmann zu Marienfeld* published by Franz Sorat in 1819. The engraving *Medici I* by Eberhard Bier on page 104 is held at the Minneapolis Institute of Art. The image of a mezzotint plate on page 104 was taken by David Lardner. The guide for the lithograph print by Franz Kruger on page 106 is courtesy of the Image Permanence Institute. The Rembrandt etching *The Hundred Guilder Print* on page 108 is from the Rijksmuseum in Amsterdam. The photograph of bacon grain on the same page is courtesy of the University of Rochester. The halftone photograph of the Steinway Hall on East 14th Street in Manhattan on page 109 is from the *Daily Graphic*, December 1873. The sketches of useful typesetting quotes on pages 142–143 are from an unknown early 20th century press. The tables of Palatino Linotype on pages 131–133 were set in the eponymous typeface from Linotype GmbH. The sketches of useful typesetting tricks on page 127 are based on the instructions of Dario Turoldo. The photograph on page 144 was taken by Tom Gardett at the Print Shop of the Cambridge Museum of Technology, Cambridge, UK. All other photographs and images were created by the author.
Scrabble is a trademark of Hasbro, Inc.

#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 73 Context: # Chapter 5. Typing it In

between the left and right hands in a rhythmic fashion. The layout has never caught on, however, though one can buy keyboards for most modern computers with the Dvorak layout, and it retains a band of admirers.

[Patent drawing: "Typewriter Keyboard", filed May 21, 1932, granted May 12, 1936, showing the Dvorak key layout; inventors August Dvorak and William L. Dealey; Charles L. Reynolds, Attorney]

#################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 88 Context: # APPENDIX A. ESSENTIALS OF CONVEX OPTIMIZATION

Complementary slackness is easily derived by,

$$
f_0(x^*) = L_D(\lambda^*, \nu^*) = \inf_x \left( f_0(x) + \sum_i \lambda_i^* f_i(x) + \sum_j \nu_j^* h_j(x) \right)
$$

$$
\leq f_0(x^*) + \sum_i \lambda_i^* f_i(x^*) + \sum_j \nu_j^* h_j(x^*) \tag{A.13}
$$

$$
\leq f_0(x^*) \tag{A.14}
$$

where the first line follows from Eqn. A.6, the second because the inf is always smaller than the value at any particular \(x^*\), and the last because \(f_i(x^*) \leq 0\), \(\lambda_i^* \geq 0\), and \(h_j(x^*) = 0\). Hence all inequalities are equalities; since each term \(\lambda_i^* f_i(x^*)\) is non-positive and the terms sum to zero, each term must vanish separately.

#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 109 Context: # Chapter 7. Doing Sums

6. Consider this function, which removes elements in positions 2, 4, 6, ...
from a list, leaving elements in positions 1, 3, 5, ...:

```
odds l =
  if l = [] then []
  else if tail l = [] then l
  else head l : odds (tail (tail l))
```

Evaluate the following uses of this function:

a) `odds []`
b) `odds [1, 2]`
c) `odds [1, 2, 3]`

You need not show all the stages of evaluation if you can do it in your head.

#################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 27 Context: # 3.1 In a Nutshell

Learning is all about generalizing regularities in the training data to new, yet unobserved data. It is not about remembering the training data. Good generalization means that you need to balance prior knowledge with information from data. Depending on the dataset size, you can entertain more or less complex models. The correct size of the model can be determined by playing a compression game. Learning = generalization = abstraction = compression.

#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 77 Context: # Chapter 5: Typing it In

We continue to build the first character by typing the next Zhuyin symbol:

```
ㄧ
```

Now, we type in the tone using the number keys on the keyboard (there are four tones in Chinese, each a different variation on a given sound):

```
ㄇ
```

We can now see the full character, which replaces the Zhuyin ones we typed in. It is the wrong character, but do not worry – this will correct itself once the system knows the end of the word. We begin on the second character:

```
應
```

Now we type the second sound of the second character:

```
左
```

Image Analysis: Here’s a comprehensive analysis of the attached visual content based on the specified aspects:

### 1. Localization and Attribution
- **Image 1:** First character construction (top segment).
- **Image 2:** Building the character (second segment).
- **Image 3:** Full character revealing (third segment).
- **Image 4:** Begin the second character construction (fourth segment). - **Image 5:** Second sound entry (fifth segment). ### 2. Object Detection and Classification - **Image 1:** Chinese character and ZhuYin symbols. - **Image 2:** ZhuYin symbol placeholder. - **Image 3:** Complete Chinese character. - **Image 4:** Second character placeholder. - **Image 5:** Input symbols for the second character. ### 3. Scene and Activity Analysis - The series showcases the process of typing Chinese characters using ZhuYin symbols. The activities involve building characters step-by-step and illustrating how to input tones. ### 4. Text Analysis - The text explains the steps needed for typing Chinese characters, emphasizing the use of ZhuYin symbols and tone indicators to construct characters which is a crucial aspect of learning Chinese. ### 5. Diagram and Chart Analysis - No diagrams or charts are present in the images. ### 6. Product Analysis - No products are depicted in the images. ### 7. Anomaly Detection - No anomalies are detected. The content appears to follow a logical sequence in character construction. ### 8. Color Analysis - The images are monochrome, primarily utilizing black text on a white background, contributing to readability and clarity. ### 9. Perspective and Composition - The perspective is straightforward, focusing directly on the text. The layout is simple and clear, emphasizing the step-by-step instruction method. ### 10. Contextual Significance - The images are part of a chapter explaining how to type in Chinese, contributing to a broader educational narrative on language learning. ### 11. Metadata Analysis - Metadata is not available for analysis in this context. ### 12. Graph and Trend Analysis - No graphs are present in the images. ### 13. Graph Numbers - No data points for graphs are present. ### Additional Aspects: - **Ablaufprozesse (Process Flows):** The images outline a step-by-step process for inputting characters. 
- **Prozessbeschreibungen (Process Descriptions):** Each image depicts a distinct step in building a Chinese character, increasing in complexity as each sound and symbol is added. - **Typen Bezeichnung (Type Designations):** ZhuYin symbols are referred to as the primary input method for the characters. - **Trend and Interpretation:** The series illustrates an educational trend focusing on user-friendly methods for character input in language learning. - **Tables:** No tables are included in the images. This analysis covers the key elements based on your specified aspects while adhering to the visual content provided. #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 26 Context: # Chapter 1. Putting Marks on Paper ## Problems Solutions on page 147. Grids for you to photocopy or print out have been provided on page 173. Alternatively, use graph paper or draw your own grids. 1. Give sequences of coordinates which may be used to draw these sets of lines. 
![Image 1](insert_image_1_url)

[Blank 20×20 grid, x and y axes marked 0–20]

![Image 2](insert_image_2_url)

[Blank 20×20 grid, x and y axes marked 0–20]

2. Draw these two sequences of coordinates on separate 20x20 grids, with lines between the points. What do they each show?

- (5,19)—(15,19)—(15,16)—(8,16)—(8,12)—(15,12)—(15,9)—(8,9)—(8,5)—(15,5)—(15,2)—(5,2)—(5,19)
- (0,5)—(10,10)—(5,0)—(10,3)—(15,0)—(10,20)—(5,17)—(10,15)—(10,10)—(15,20)—(10,17)—(5,20)—(10,10)—(15,3)—(10,0)—(5,0)

3. Given the following lines on 20x20 grids, select pixels to approximate them.

![Image 3](insert_image_3_url)

**Figure for approximation**

#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 65 Context: # Chapter 4. Looking and Finding

There are other special characters: we can use a full stop `.` to match any character, so that the pattern `.unce` matches `ounce` and `dunce`. In addition to these patterns, we can run a search multiple times and combine the results.
For example, when using an internet search engine, if we are interested in finding documents containing `"cats"` or `"dogs"`, we might enter the search `"cats OR dogs"`. The search engine knows that the word `OR` is special, and it runs two searches, one for `"cats"` and one for `"dogs"`, and returns documents which contain an instance of either. In reality, search engines don’t look through the text of web pages at the moment that you click the search button; they use pre-prepared indexes to make the search many times faster. In the problems which follow, we extend this idea of patterns and ask you to run the searching algorithm on paper to determine whether the patterns match the text.

#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 114 Context: # Chapter 8: Grey Areas

We could produce a few thousand bibles by having monks copy them repeatedly for years at a time. However, we could not produce a million copies of a newspaper, with text and pictures, overnight, no matter how extensive our resources. To reproduce text, we require only black ink on white paper. To reproduce paintings and photographs, we need methods which provide the illusion of grey tones. Printing the paper dozens of times with diluted inks to form multiple shades of grey, as the watercolourist would, is time-consuming and physically difficult—think of the amount of water that would end up on the paper, for one. So we must find other ways. The following picture shows a very simple scheme for creating an illusion of grey on a display such as a computer screen:

![Illusion of Grey](link_to_image)

We have a checkerboard pattern of black and white, and if we get the scale small enough, or the viewer stands far enough away, or both, the appearance is of a mid grey. Put this book on a stand and walk slowly away—how far do you have to go for each of the parts of this picture to appear grey?
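The checkerboard illusion can also be checked numerically: the tone the distant eye perceives is the pattern's mean intensity, which for an equal black/white checkerboard is exactly mid grey. A small sketch; the 8×8 size and 0–255 intensity scale are arbitrary assumptions:

```python
import numpy as np

# Alternate black (0) and white (255) pixels in a checkerboard. From far
# enough away the eye averages neighbouring pixels, so the perceived tone
# approaches the pattern's mean intensity: mid grey.
def checkerboard(n):
    idx = np.add.outer(np.arange(n), np.arange(n))  # i + j parity pattern
    return np.where(idx % 2 == 0, 255, 0)

board = checkerboard(8)
mean_tone = board.mean()   # halfway between black and white
```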
Similar techniques can work when printing on paper, but we have to account for the spreading of ink and all the other imperfections of the physical world. Some of the earliest reproduction methods involved cutting patterns into wooden blocks, applying ink, and stamping them onto cloth or paper, either by hand or in a primitive press. The process is as follows:

1. **Design the Pattern**: Create a design that can be carved into a block.
2. **Carve the Block**: Use tools to cut away the areas that will not hold ink.
3. **Apply Ink**: Roll ink over the block evenly.
4. **Transfer the Image**: Press the block onto the material.

This method allowed for the mass production of images before more advanced printing techniques were developed.

#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 125 Context: # Chapter 8. Grey Areas

At the top, the finest halftone; at the bottom, the coarsest. We can see that the larger number of apparent greys in the coarsest dots is an advantage in this case: the top line looks very fragmented indeed. At a distance, the middle and bottom are both good. We can perform this halftoning by producing patterns which look like increasing sizes of ink dots. These patterns can then be displayed on screen or printed. To do this, we divide the image up into little 2x2 or 4x4 squares of pixels, and then choose one of a set of patterns to represent the average level of grey in the square. The result is a picture with the same number of pixels, but where each is black or white. Here are the patterns for a 2x2 scheme:

[Figure: five 2×2 pixel patterns, from all white through increasing numbers of black pixels to all black]

Notice that there are five patterns, not four as we might expect. We use the first pattern for a grey level between 0% and 20%, the second between 20% and 40%, and so forth.
In this scheme we have tried to keep the black dots adjacent to one another to build up little spots, which is better suited to the spreading behaviour of ink on paper. The process is known as dithering. The patterns above may be generated by listing the order in which they turn black in a table:

|   |   |
|---|---|
| 1 | 2 |
| 3 | 4 |

So, for the third pattern, we blacken all pixels with values less than three (that is, one and two). It is known as an ordered dither for this reason. Here is the result:

*(figure: the image halftoned with the 2×2 ordered dither)*

#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 188 Context:

# Templates

## Problem 1.2

*(a 20 × 20 dot grid for plotting, with x running 0 to 20 and y running 20 down to 0, both in steps of 2)*
## Problem 1.3

*(another 20 × 20 dot grid for plotting, identical to the Problem 1.2 template)*

#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 9 Context:

# Chapter 7

Chapter 7 introduces more programming of a slightly different kind. We begin by seeing how computer programs calculate simple sums, following the familiar schoolhouse rules. We then build more complicated things involving the processing of lists of items. By the end of the chapter, we have written a substantive, real program.

# Chapter 8

Chapter 8 addresses the problem of reproducing colour or grey tone images using just black ink on white paper. How can we do this convincingly and automatically? We look at historical solutions to this problem from medieval times onwards and try out some different modern methods for ourselves, comparing the results.

# Chapter 9

Chapter 9 looks again at typefaces. We investigate the principal typeface used in this book, Palatino, and some of its intricacies. We begin to see how letters are laid out next to each other to form a line of words on the page.

# Chapter 10

Chapter 10 shows how to lay out a page by describing how lines of letters are combined into paragraphs to build up a block of text.
We learn how to split words with hyphens at the end of lines, and we look at how this sort of layout was done before computers.

#################### File: our-upcebu-edu-ph-58438.txt Page: 1 Context:

[Back to Top](#top)

---

##### Apply for the new UP RFID

Submit **Online** the [UP Cebu **RFID** Information Sharing Consent Form](https://pdf.ac/OPrkQ)

##### Helpful Guides

[**UP Cebu RFID Guide for Students**](https://our.upcebu.edu.ph/wp-content/uploads/2021/04/UP-Cebu-RFID-Guide-for-Students.pdf)

[**UP Cebu RFID Guide for Faculty and Staff**](https://our.upcebu.edu.ph/wp-content/uploads/2021/06/UP-Cebu-RFID-Guide-for-Faculty-Staff-Final.pdf)

[**UP Cebu RFID Picture Taking Guide**](https://our.upcebu.edu.ph/wp-content/uploads/2021/12/UP-Cebu-RFID-Picture-Taking-Guide-2021.pdf)

[Back to Top](#top)

---

##### Request to Cross-Register

##### (INCOMING) Steps to cross-register from other Universities, foreign (including MOVE UP1) or local (Outside UP), to UP Cebu

Cross-registrants\* are students who enroll in UP Cebu for a specific period of time but are primarily enrolled in another UP Constituent University or in another institution of higher learning. Such enrolment is subject to the approval of the Deans of the home and the accepting units/colleges. \[UP Code Art. 333−334\]

_Exchange students are also classified as cross-registrants._

A special student\*\* is one who is not earning formal academic credit for his/her work. \[UP Code Art. 337c\]

#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 100 Context:

# Chapter 7. Doing Sums

To evaluate based on input data and defining and using reusable functions. Now let’s write a real, useful function. Given a number, such as 4, it will calculate the factorial, written as `4!`, of the number. The factorial of a number is all the numbers from 1 to that number multiplied together.
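This definition translates directly into a short recursive function. A sketch in Python rather than the book's own notation:

```python
def factorial(n):
    """Multiply together all the numbers from 1 to n."""
    if n == 1:
        return 1
    # Otherwise: n! = n * (n - 1)!
    return n * factorial(n - 1)

print(factorial(4))  # 24
```

As in the book's version, the function uses itself within its own definition, and the chain of multiplications stops when it reaches 1.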
For example, the factorial of 4 is \( 4! = 4 \times 3 \times 2 \times 1 \), which is 24. The number of possible orderings of a pack of playing cards is \( 52! \), which is a very large number indeed. To calculate a factorial, we start at the given number, and we want to keep multiplying by the number one smaller, and one smaller again, until we reach 1. Then, we want to stop, rather than keep multiplying by 0, -1, -2, etc. You can see that we will have to use an `if`...`then`...`else` construct because we have a decision to make. Let us begin to define our function. The first part is easy – if the number is 1, the answer is 1:

```
factorial n = if n = 1 then 1 else ...
```

Now we must consider what to do when the number is greater than 1. In this case, we want to multiply the number by the factorial of the number one smaller. For example, \( 4! = 4 \times 3 \times 2 \times 1 = 4 \times 3! \). So we write it out:

```
factorial n = if n = 1 then 1 else n × factorial (n - 1)
```

Notice that our function uses itself within its own definition. This is not a problem as long as the computation eventually completes and gives a result. Here it is for the number 4:

```
factorial 4
⇒ if 4 = 1 then 1 else 4 × factorial (4 - 1)
⇒ 4 × factorial (4 - 1)
⇒ 4 × factorial (3)
⇒ 4 × (if 3 = 1 then 1 else 3 × factorial (3 - 1))
⇒ 4 × (3 × factorial (3 - 1))
⇒ 4 × (3 × (if 2 = 1 then 1 else 2 × factorial (2 - 1)))
⇒ 4 × (3 × (2 × factorial (2 - 1)))
⇒ 4 × (3 × (2 × (if 1 = 1 then 1 else 1 × factorial (1 - 1))))
⇒ 4 × (3 × (2 × 1))
⇒ 4 × (3 × 2)
⇒ 4 × 6
⇒ 24
```

Finally, we have:

```
4! = 4 × 3 × 2 × 1 = 24
```

#################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 29 Context:

# Chapter 4

## Types of Machine Learning

We now turn our attention to some learning problems that we will encounter in this book. The most well-studied problem in ML is that of **supervised learning**. To explain this, let’s first look at an example.
Bob wants to learn how to distinguish between bobcats and mountain lions. He types these words into Google Image Search and closely studies all catlike images of bobcats on the one hand and mountain lions on the other. Some months later on a hiking trip in the San Bernardino mountains, he sees a big cat... The data that Bob collected was labeled, because Google is supposed to only return pictures of bobcats when you search for the word "bobcat" (and similarly for mountain lions). Let’s call the images \(X_1, \ldots, X_n\), and the labels \(Y_1, \ldots, Y_n\). Note that the \(X_i\) are much higher-dimensional objects, because they represent all the information extracted from the image (approximately 1 million pixel color values), while \(Y_i\) is simply -1 or 1 depending on how we choose to label our classes. So, that would be a ratio of about 1 million to 1 in terms of information content! The classification problem can usually be posed as finding (a.k.a. learning) a function \(f(X)\) that approximates the correct class labels for any input \(X\). For instance, we may decide that \(\text{sign}(f(X))\) is the predictor for our class label. In the following, we will be studying quite a few of these classification algorithms. There is also a different family of learning problems known as **unsupervised learning** problems. In this case, there are no labels \(Y\) involved, just the features \(X\). Our task is not to classify, but to organize the data, or to discover the structure in the data. This may be very useful for visualizing data, compressing data, or organizing data for easy accessibility. Extracting structure in data often leads to the discovery of concepts, topics, abstractions, factors, cues, and more such terms that all really mean the same thing. These are the underlying semantic

#################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 20 Context:

# CHAPTER 2. DATA VISUALIZATION

etc.
An example of such a scatter plot is given in Figure ??. Note that we have a total of \( d(d - 1) / 2 \) possible two-dimensional projections which amounts to 4950 projections for 100-dimensional data. This is usually too many to manually inspect. How do we cut down on the number of dimensions? Perhaps random projections may work? Unfortunately, that turns out to be not a great idea in many cases. The reason is that data projected on a random subspace often looks distributed according to what is known as a Gaussian distribution (see Figure ??). The deeper reason behind this phenomenon is the **central limit theorem** which states that the sum of a large number of independent random variables is (under certain conditions) distributed as a Gaussian distribution. Hence, if we denote with \( \mathbf{w} \) a vector in \( \mathbb{R}^d \) and by \( x \) the d-dimensional random variable, then \( y = \mathbf{w}^T\mathbf{x} \) is the value of the projection. This is clearly a weighted sum of the random variables \( x_i, \; i = 1, \ldots, d \). If we assume that \( x_i \) are approximately independent, then we can see that their sum will be governed by this central limit theorem. Analogously, a dataset \( \{X_n\} \) can thus be visualized in one dimension by “histogramming” the values of \( Y = \mathbf{w}^T\mathbf{X} \), see Figure ??. In this figure we clearly recognize the characteristic "Bell-shape" of the Gaussian distribution of projected and histogrammed data. In one sense the central limit theorem is a rather helpful quirk of nature. Many variables follow Gaussian distributions and the Gaussian distribution is one of the few distributions which have very nice analytic properties. Unfortunately, the Gaussian distribution is also the most uninformative distribution. 
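The claim that projected values \( y = \mathbf{w}^T\mathbf{x} \) tend toward a bell shape can be checked numerically. A pure-Python sketch using made-up uniform data and a random projection vector, both assumptions for illustration rather than the book's dataset:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible
d, n = 100, 5000
# A random projection direction w in R^d.
w = [random.gauss(0, 1) for _ in range(d)]

def project(x, w):
    """y = w^T x, the one-dimensional projection of data-case x."""
    return sum(wi * xi for wi, xi in zip(w, x))

# Made-up data: n cases, each with d independent uniform attributes.
data = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(n)]
ys = [project(x, w) for x in data]

# Histogramming ys would show the characteristic bell shape; here we
# just report the sample mean and variance of the projected values.
mean = sum(ys) / n
var = sum((y - mean) ** 2 for y in ys) / n
print(round(mean, 2), round(var, 2))
```

Each projected value is a weighted sum of 100 roughly independent variables, so by the central limit theorem the histogram of `ys` crowds into a Gaussian shape regardless of the structure in the original attributes.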
This notion of “uninformative” can actually be made very precise using information theory and states:

> **Given a fixed mean and variance, the Gaussian density represents the least amount of information among all densities with the same mean and variance.**

This is rather unfortunate for our purposes, because Gaussian projections are the least revealing dimensions to look at. So in general we have to work a bit harder to see interesting structure. A large number of algorithms have been devised to search for informative projections. The simplest is "principal component analysis", or PCA for short ??. Here, interesting means dimensions of high variance. However, it is known that high variance is not always a good measure of interestingness, and one should rather search for dimensions that are non-Gaussian. For instance, "independent components analysis" (ICA) ?? and "projection pursuit" ?? search for dimen-
_The only way to change your courses\* (or subjects) after the issuance of your Form 5 is by requesting for a [change of matriculation](https://our.upcebu.edu.ph/flowsandforms/#changemat)._

**Remember!** The **Form 5A** is a temporary document, while the **UP Form 5 EOR** is the final record of your registration and enrollment in the University for the specified term.

###### \*Added Note:

_In UP, we call subjects “courses”, and we call courses “degree programs”, or “program” for short._

_You would hear an Iskolar say, “My program is BS Biology and I am taking the General Education course, Ethics 1, this First Semester”._

#################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 16 Context:

# CHAPTER 1. DATA AND INFORMATION

## 1.2 Preprocessing the Data

As mentioned in the previous section, algorithms are based on assumptions and can become more effective if we transform the data first. Consider the following example, depicted in Figure 1. The algorithm we consider consists of estimating the area that the data occupy. It grows a circle starting at the origin, and at the point where it contains all the data we record the area of the circle. The figure shows why this will be a bad estimate: the data-cloud is not centered. If we had first centered it, we would have obtained a reasonable estimate. Although this example is somewhat simple-minded, there are many, much more interesting algorithms that assume centered data. To center data, we will introduce the sample mean of the data, given by:

\[ E[X_i] = \frac{1}{N} \sum_{n=1}^{N} X_{in} \tag{1.1} \]

Hence, for every attribute \(i\) separately, we simply add all the attribute values across data-cases and divide by the total number of data-cases. To transform the data so that their sample mean is zero, we set:

\[ X'_{in} = X_{in} - E[X_i] \quad \forall n \tag{1.2} \]

It is now easy to check that the sample mean of \(X'\) indeed vanishes.
An illustration of the global shift is given in Figure 2. We also see in this figure that the algorithm described above now works much better! In a similar spirit as centering, we may also wish to scale the data along the coordinate axes in order to make it more “spherical.” Consider Figure 3. In this case, the data was first centered, but the elongated shape still prevented us from using the simplistic algorithm to estimate the area covered by the data. The solution is to scale the axes so that the spread is the same in every dimension. To define this operation, we first introduce the notion of sample variance: \[ V[X_i] = \frac{1}{N} \sum_{n=1}^{N} X_{in}^2 \tag{1.3} \] where we have assumed that the data was first centered. Note that this is similar to the sample mean, but now we have used the square. It is important that we have removed the sign of the data-cases (by taking the square) because otherwise positive and negative signs might cancel each other out. By first taking the square, all data-cases first get mapped to positive half of the axes (for each dimension or #################### File: our-upcebu-edu-ph-58438.txt Page: 1 Context: ## How To submit forms **online** 1. Click on the link provided below for the form that you need. 2. Fill up the form online. _Make sure you fill up all the required pages of the form._ 3. Choose ‘Print’ then ‘Save as PDF’, in order to save a copy of your filled-up information. _For online submissions, just **one copy** of the form will suffice._ (Alternatively, you may choose to print the form, fill it up by hand, and then scan or take a photo of the filled-up form.) 4. Email the accomplished form to the [Office of the College Secretary](#SRE) of the concerned college. (Emails are indicated below.) For UP students, please use your up.edu.ph email. For the College of Science, please submit your accomplished form directly to your department/program clerk. 5. Check your email regularly for a reply to your submission. 
_Your college will take care of securing signatures and approval for forms submitted via email._ 6. Payments for submitted applications are usually collected after approval or initial inspection of the documents. Instructions for payments will be sent via email as well. --- ###### Who is your College Secretary? * **College of Communication, Art, and Design** – Prof. Belinda F. Espiritu * **College of Science** – Instr. Sram J Isle L. Santillan * **College of Social Sciences** – Asst. Prof. Purita T. Baltazar * **School of Management** – Asst. Prof. Ma. Kresna N. Mansueto ###### Where to find the Offices of the College Secretary (OCS)? #################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 58 Context: # Chapter 8: Support Vector Machines ## Introduction Support Vector Machines (SVM) are supervised learning models used for classification and regression analysis. This chapter provides an overview of SVM, its principles, and applications. ## Key Concepts - **Hyperplane**: A decision boundary that separates different classes in feature space. - **Support Vectors**: Data points that lie closest to the hyperplane and influence its position. ## Types of Support Vector Machines 1. **Linear SVM** - Used when data is linearly separable. - The goal is to find a hyperplane that maximizes the margin between the classes. 2. **Non-Linear SVM** - Used for data that is not linearly separable. - Transforms data into a higher dimension using a kernel trick. ### Kernels | Kernel Type | Description | |----------------------|-----------------------------------------------------| | Linear | No transformation; used for linear separability. | | Polynomial | Maps data to a polynomial feature space. | | Radial Basis Function (RBF) | Maps data to an infinite-dimensional space. | ## Advantages of SVM - Effective in high dimensional spaces. - Robust against overfitting, especially in high-dimensional space. 
## Disadvantages of SVM

- Not suitable for very large datasets.
- Requires careful tuning of parameters (e.g., C and kernel parameters).

## Applications

- Image classification
- Text categorization
- Bioinformatics

## Conclusion

Support Vector Machines are a powerful tool in machine learning with various applications across different domains. Understanding the underlying principles and techniques allows for their effective use in solving complex problems.

#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 103 Context:

# Chapter 7. Doing Sums

Let us write a function to find the length of a list using the tail function:

```plaintext
length l = if l = [] then 0 else 1 + length (tail l)
```

The empty list has length 0, and the length of any other list is 1 plus the length of its tail. Notice that the `=` operator works on lists too. We can try a sample evaluation:

```plaintext
length [2,3]
```

**Evaluation Steps:**

1. `if [2,3] = [] then 0 else 1 + length (tail [2,3])`
2. `1 + length (tail [2,3])`
3. `1 + length [3]`
4. `1 + (if [3] = [] then 0 else 1 + length (tail [3]))`
5. `1 + (1 + length (tail [3]))`
6. `1 + (1 + length [])`
7. `1 + (1 + 0)`
8. `1 + 1`
9. `2`

These diagrams are becoming a little unwieldy, so as we write more complicated functions, we will leave some of the detail out, concentrating on the repeated uses of the main function we are writing, here length:

```plaintext
length [2,3]
```

**Evaluation Steps:**

1. `= 1 + length [3]`
2. `= 1 + (1 + length [])`
3. `= 1 + (1 + 0)`
4. `= 2`

#################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 7 Context:

# MEANT FOR INDUSTRY AS WELL AS BACKGROUND READING

This book was written during my sabbatical at the Radboud University in Nijmegen (Netherlands). Hans for discussion on intuition. I like to thank Prof.
Bert Kappen, who leads an excellent group of postdocs and students, for his hospitality. Marga, kids, UCI. This book aims to present a few popular and perhaps useful tools and approaches. What will (hopefully) be significantly different from most other scientific books is the manner in which I will present these methods. I have always been frustrated by the lack of proper explanation of equations. Many times I have been staring at a formula having not the slightest clue where it came from or how it was derived. Many books also excel in stating facts in an almost encyclopedic style, without providing the proper intuition of the method. This is my primary mission: to write a book which conveys intuition. The first chapter will be devoted to why I think this is important.

#################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 49 Context:

# 7.2 A Different Cost Function: Logistic Regression

The fact that we are picking data cases randomly injects noise into the updates, so even close to convergence we are “wiggling around” the solution. If we decrease the stepsize however, the wiggles get smaller. So it seems a sensible strategy would be to slowly decrease the stepsize and wiggle our way to the solution. This stochastic gradient descent is actually very efficient in practice if we can find a good annealing schedule for the stepsize. Why really? It seems that if we use more data cases in a mini-batch to perform a parameter update, we should be able to make larger steps in parameter space by using bigger stepsizes. While this reasoning holds close to the solution, it does not hold far away from the solution. The intuitive reason is that far away from convergence every datapoint will tell you the same story: move in direction \( X \) to improve your model. You simply do not need to query datapoints in order to extract that information.
So for a bad model there is a lot of redundancy in the information that data cases can convey about improving the parameters, and querying a few is sufficient. Closer to convergence you need to either use more data or decrease the stepsize to increase the resolution of your gradients. This type of reasoning clearly makes an effort to include the computational budget as part of the overall objective. This is what we have argued in chapter XX is the distinguishing feature of machine learning. If you are not convinced about how important this is in the face of modern day datasets, imagine the following. Company C organizes a contest where they provide a virtually infinite dataset for some prediction task. You can earn 1 million dollars if you make accurate predictions on some test set by Friday next week. You can choose between a single parameter update based on all the data or many updates on small subsets of the data. Who do you think will win the contest?

## 7.2 A Different Cost Function: Logistic Regression

The cost function of Eq. 7.2 penalizes gross violations of one’s predictions rather severely (quadratically). This is sometimes counter-productive because the algorithm might get obsessed with improving the performance of one single data-case at the expense of all the others. The real cost simply counts the number of mislabelled instances, irrespective of how bad the value of your prediction function \( w^T X_n + \alpha \) was. So, a different function is often used:

\[ C(w, \alpha) = -\frac{1}{N} \sum_{n=1}^{N} Y_n \tanh(w^T X_n + \alpha) \tag{7.10} \]

#################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 44 Context:

# CHAPTER 6. THE NAIVE BAYESIAN CLASSIFIER

## Introduction

The Naive Bayesian Classifier is a probabilistic classification technique based on Bayes' theorem. It assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature.
Despite this naive assumption, it has been shown to work surprisingly well in many real-world situations. ## Table of Content 1. [Basic Principles](#basic-principles) 2. [Conditional Probability](#conditional-probability) 3. [Naive Bayesian Classifier](#naive-bayesian-classifier) 4. [Applications](#applications) ## Basic Principles ### Bayes' Theorem Bayes' theorem can be written as: \[ P(A|B) = \frac{P(B|A) \cdot P(A)}{P(B)} \] Where: - \( P(A|B) \) is the posterior probability of class \( A \) given feature \( B \). - \( P(B|A) \) is the likelihood of feature \( B \) given class \( A \). - \( P(A) \) is the prior probability of class \( A \). - \( P(B) \) is the prior probability of feature \( B \). ### Assumptions 1. Each feature is independent of others given the class. 2. The features contribute equally to the final classification. ## Conditional Probability Conditional probabilities are fundamental in the Naive Bayesian Classifier. The classifier computes the probability of each class given the features of a data point. ## Naive Bayesian Classifier To classify a given data point \( x \), the classifier computes: \[ P(C_k | x) \propto P(C_k) \cdot P(x | C_k) \] Where \( C_k \) is the class. The class with the highest probability is chosen as the prediction. ## Applications The Naive Bayesian Classifier has applications in various fields: - **Spam Detection:** Classifying emails as spam or not. - **Sentiment Analysis:** Determining the sentiment of customer reviews. - **Document Classification:** Categorizing documents into predefined topics. ## Conclusion The Naive Bayesian Classifier is a simple yet powerful technique for classification tasks. It performs well on various datasets and is commonly used due to its efficiency and effectiveness. 
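The classification rule above can be sketched in a few lines: with the independence assumption, \( P(x \mid C_k) \) factorizes into a product over the features. The word probabilities and class priors below are invented purely for illustration:

```python
# Sketch of the Naive Bayes rule: P(C|x) is proportional to
# P(C) * product_i P(feature_i | C). All probabilities here are
# made-up toy numbers, not estimates from real data.
def naive_bayes(priors, likelihoods, features):
    """Return the class maximizing P(C) * prod_i P(feature_i | C)."""
    scores = {}
    for c, prior in priors.items():
        score = prior
        for f in features:
            # Tiny floor for words unseen in class c, so the
            # product never collapses to exactly zero.
            score *= likelihoods[c].get(f, 1e-6)
        scores[c] = score
    return max(scores, key=scores.get)

priors = {"spam": 0.4, "ham": 0.6}
likelihoods = {
    "spam": {"free": 0.30, "offer": 0.20, "meeting": 0.01},
    "ham":  {"free": 0.02, "offer": 0.03, "meeting": 0.25},
}
print(naive_bayes(priors, likelihoods, ["free", "offer"]))  # spam
print(naive_bayes(priors, likelihoods, ["meeting"]))        # ham
```

Since only the ordering of the scores matters, the normalizing constant \( P(x) \) can be dropped, which is why the proportionality in the text is enough to classify.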
#################### File: our-upcebu-edu-ph-58438.txt Page: 1 Context: ###### Deadline for Removal of INC/4.0 (M Business Administration) | **Term Incurred** | **Deadline** | | ----------------------------- | ------------------------ | | Third Trimester AY 2020-2021 | 2022 August 8, Monday | | First Trimester AY 2021-2022 | 2022 November 14, Monday | | Second Trimester AY 2021-2022 | 2023 March 14, Tuesday | | Third Trimester AY 2021-2022 | 2023 August 1, Tuesday | | First Trimester AY 2022-2023 | 2023 November 6, Monday | | Second Trimester AY 2022-2023 | 2024 March 22, Friday | ###### Deadline for Removal of INC/4.0 (M Education) | **Term Incurred** | **Deadline** | | ----------------------------- | ------------------------ | | Third Trimester AY 2020-2021 | 2022 September 2, Friday | | First Trimester AY 2021-2022 | 2023 January 2, Monday | | Second Trimester AY 2021-2022 | 2023 May 8, Monday | | Third Trimester AY 2021-2022 | 2023 August 18, Friday | | First Trimester AY 2022-2023 | 2024 January 2, Tuesday | | Second Trimester AY 2022-2023 | 2024 May 6, Monday | [Back to Top](#top) --- ##### Appeal for Readmission/Extension of Residence At the end of every semester, students are required to check their grades and scholastic status to determine if they are eligible to enroll in the following semester. 
Students who: **(A)** are dropped for reasons of **Double Probation**, **Dismissal**, or **Permanent Disqualification (PDQ)**; Graduate students who fail to maintain a GWA of “2.000” or better in their course work at the end of the academic year **(GDQ)**;

#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 121 Context:

# Chapter 8. Grey Areas

## Figures

### Figure G: Film grain

### Figure H: Film under an electron microscope

Image Analysis: Figure G shows the granular surface of photographic film: small grain particles of varying sizes. Figure H shows the film under an electron microscope: distinct, irregular, rounded particles with a scale annotation of 50 nm. Both images are grayscale, emphasizing texture and granularity rather than colour, and together they support the chapter's discussion of film properties and microstructure.

#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 22 Context:

# Chapter 1. Putting Marks on Paper

On the left is an idealised high resolution shape.
In the middle, just black and white at a lower resolution. On the right, prepared for display on a screen supporting grey as well as black and white, at the same lower resolution. This use of greys is known as **antialiasing**, since the jagged edges in low resolution lines are known as **aliasing**. This term originated in the field of signal processing and is used to describe problems stemming from low-resolution versions of high-resolution signals. Here is a photograph, taken under a microscope, of such an antialiased line on a modern computer screen: ![Antialiased line on a computer screen](image-url) The left image is magnified 20×; the right image 400×. The rectangular shapes you can see in the images are the separated Red, Green, and Blue sub-pixels, which a monitor uses to build up all the different colours and greys it may need (the monitor makes a picture by emitting light and Red, Green, and Blue are the primary colours of light). What might a reasonable minimum resolution be? To simplify, let’s return to the scenario where we only have black and white dots – no antialiasing. The resolution required to make the page look smooth depends on the distance at which the typical viewer sees it. For a computer screen, this might be twenty inches. For a smartphone, eight inches. For a billboard, two or twenty feet (if you have never walked right up to a billboard and looked at the printing, do so – it is surprisingly coarse). The limit of the human optical system’s ability to distinguish the colour of adjacent dots is... #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 86 Context: # Chapter 6. Saving Space A common use for this sort of encoding is in the sending of faxes.
A fax consists of a high-resolution black and white image. In this case, we are not compressing characters, but the black and white image of those characters itself. Take the following fragment: ![FAX Image](path_to_image.png) *This image is 37 pixels wide and 15 tall. Here it is with a grid superimposed to make it easier to count pixels:* ``` [Insert grid image here] ``` We cannot compress the whole thing with Huffman encoding, since we do not know the frequencies at the outset – a fax is sent incrementally. One machine scans the document whilst the machine at the other end of the phone line prints the result as it pulls paper from its roll. It had to be this way because, when fax machines were in their infancy, computer memory was very expensive, so receiving and storing the whole image in one go and only then printing it was not practical. The solution the fax system uses is as follows. Instead of sending individual pixels, we send a line at a time, as a list of runs. Each run is a length of white pixels or a length of black pixels. For example, a line of width 39 might contain 12 pixels of white, then 4 of black, then 2 of white, then 18 of black, and then 3 of white. We look up the code for each run and send the codes in order. To avoid the
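The run-based fax scheme described above can be sketched in Python. This is a minimal illustration of plain run-length encoding of one scan line, not the real fax code tables; the function names are my own:

```python
def encode_line(pixels):
    """Encode one scan line (list of 0=white, 1=black) as runs.

    A line is assumed to begin white, so if it actually starts black
    we emit a zero-length white run first, as the fax scheme does.
    """
    runs = []
    current, length = 0, 0  # start in "white"
    for p in pixels:
        if p == current:
            length += 1
        else:
            runs.append(length)
            current, length = p, 1
    runs.append(length)
    return runs

def decode_line(runs):
    """Reverse the encoding: expand alternating white/black runs."""
    pixels, colour = [], 0
    for r in runs:
        pixels.extend([colour] * r)
        colour = 1 - colour
    return pixels

# The example line: 12 white, 4 black, 2 white, 18 black, 3 white.
line = [0] * 12 + [1] * 4 + [0] * 2 + [1] * 18 + [0] * 3
runs = encode_line(line)
print(runs)                       # [12, 4, 2, 18, 3]
assert decode_line(runs) == line  # the process is reversible
```

The per-line design mirrors the incremental constraint in the text: each scan line can be encoded and transmitted as soon as it is scanned, with no need to buffer the whole image.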
#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 168 Context: # Chapter 6 ## 1

| Letter | Frequency | Code | Letter | Frequency | Code |
|--------|-----------|----------|--------|-----------|----------|
| | 41 | 111 | u | 5 | 11000 |
| e | 18 | 100 | v | 4 | 11001 |
| t | 14 | 0111 | f | 4 | 11000 |
| a | 13 | 0110 | y | 3 | 01001 |
| h | 10 | 1010 | p | 2 | 0101001 |
| r | 11 | 0011 | I | 2 | 0100101 |
| n | 11 | 0010 | q | 1 | 0101010 |
| s | 9 | 11011 | E | 1 | 0101100 |
| c | 9 | 11001 | S | 1 | 0101101 |
| m | 6 | 10010 | T | 1 | 0101101 |
| l | 6 | 11001 | | | |

So we have:

```
I h a v e a 010111 01010111 0100 1100 1101 100 111 0100 111 t h e e r y w h i 0111 0100 100 1011 0011 0101 1100 1101 1101 c h I s u s p 101001 0101 0100011 11000 1101000 0101010 1010 e c I s m e r a t h 100 1010 0111 1101 0100 1101 0011 0110 1000 e r i m m o r e a l 100 0011 100 111 0011 110001 110101 011 0001 l y 0001 010101 01010000
```

#################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 61 Context: From the complementary slackness conditions, we can read the sparseness of the solution out: \[ \alpha_i(w^T\Phi_i + b - y_i + \epsilon + \xi_i) = 0 \quad (9.6) \] \[ \alpha_i^*(y_i - w^T\Phi_i - b + \epsilon + \xi_i^*) = 0 \quad (9.7) \] \[ \xi_i \xi_i^* = 0, \quad \alpha_i \alpha_i^* = 0 \quad (9.8) \] where we added the last conditions by hand (they don’t seem to directly follow from the formulation). Now we clearly see that if a case is above the tube, \(\xi_i\) will take on its smallest possible value in order to make the constraints satisfied, \(\xi_i = y_i - w^T\Phi_i - b - \epsilon\). This implies that \(\alpha_i\) will take on a positive value, and the farther outside the tube the larger the \(\xi_i\) (you can think of it as a compensating force). Note that in this case \(\alpha_i^* = 0\). A similar story goes for a case below the tube, where \(\xi_i^* > 0\) and \(\alpha_i^* > 0\).
If a data case is inside the tube, the \(\alpha_i, \alpha_i^*\) are necessarily zero, and hence we obtain sparseness. We now change variables to make this optimization problem look more similar to the SVM and ridge-regression case. Introduce \(\beta_i = \alpha_i - \alpha_i^*\) and use \(\alpha_i \alpha_i^* = 0\) to write \(\alpha_i + \alpha_i^* = |\beta_i|\): \[ \text{maximize } \quad -\frac{1}{2} \sum_{i,j} \beta_i \beta_j (K_{ij} + \frac{1}{C} \delta_{ij}) + \sum_{i} \beta_i y_i - \epsilon \sum_{i} |\beta_i| \quad (9.9) \] subject to \[ \sum_{i} \beta_i = 0 \] From the slackness conditions, we can also find a value for \(b\) (similar to the SVM case). Also, as usual, the prediction of a new data-case is given by: \[ y = w^T\Phi(x) + b = \sum_{i} \beta_i K(x_i, x) + b \quad (9.10) \] It is an interesting exercise for the reader to work their way through this case. Note by the way that we could not use the trick used in ridge-regression of defining a constant feature \(\phi_0 = 1\) and \(b = w_0\); the reason is that the objective does not depend on \(b\). #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 18 Context: # Chapter 1. Putting Marks on Paper The upper row shows high-resolution printing of lettering on coated paper, such as might be used for a glossy pamphlet, under a microscope at 20x magnification, and the same at 400x magnification. The lower row is standard text of the London Times printed on newsprint at 20x magnification and the same at 400x magnification. The home or office laser printer works by using a laser to prepare a roller in such a way that a powder will adhere only to areas where the laser has not been shone. The powder (called toner) is then transferred from the roller to paper, and bonded to it by heat.
The particles of toner behave differently from ink: ## Comparison of Printed Text | Size | Example Text | Magnification | |--------|-------------------|---------------| | 1pt | judgement | 20x | | 2pt | judgement | 20x | | 4pt | judgement | 20x | | 6pt | judgement | 20x | | 8pt | judgement | 20x | On the left is a word printed in 1pt, 2pt, 4pt, 6pt, and 8pt text under a microscope, with magnification at 20x. On the right, the... Image Analysis: Here’s a comprehensive examination of the provided visual content based on the specified aspects: ### 1. Localization and Attribution - **Image 1:** Upper left (high-resolution printing of letter under a microscope at 20x) - **Image 2:** Upper right (same text at 400x magnification) - **Image 3:** Lower left (standard text of the London Times at 20x) - **Image 4:** Lower right (same text at 400x magnification) - **Image 5:** (Left image showing words in 1pt, 2pt, 4pt, 6pt, and 8pt at 20x) - **Image 6:** (Right image showing printed text under the microscope) ### 2. Object Detection and Classification - **Image 1 & 2:** Printed text (lettering), categorized as typography/graphic. - **Image 3 & 4:** Printed text from a newspaper, categorized similarly. - **Image 5 & 6:** Various sizes of font printed on paper, categorized as typography. ### 3. Scene and Activity Analysis - The images depict printed letters under a microscope showcasing different resolutions and magnifications, illustrating the differences in text clarity and characteristics based on printing quality and techniques. ### 4. Text Analysis - **Image Content:** The text primarily discusses printing techniques and demonstrates the difference between high-resolution print and standard magazine print. - **Significance:** This highlights the evolution of printing technologies and how they impact text clarity and readability. ### 5. Diagram and Chart Analysis - No diagrams or charts are present in the images. ### 6. 
Product Analysis - The images do not show specific products but rather illustrate printing techniques and typography under microscope conditions. ### 7. Anomaly Detection - There are side-by-side comparisons of different inks and printing techniques, which could be seen as anomalies in printing processes when considering traditional ink versus toner. ### 8. Color Analysis - The images likely feature monochromatic color schemes (black and white), typical of text printed on paper, emphasizing clarity and definition in the printing process. ### 9. Perspective and Composition - All images are taken from a microscope perspective, highlighting the intricate details of typography. Composition varies in focus but centers around printed text. ### 10. Contextual Significance - The images support the text in the chapter, explaining the technical aspects of printing, enhancing understanding of print quality and clarity. ### 11. Metadata Analysis - Metadata is not available for analysis in this format. ### 12. Graph and Trend Analysis - No graphs are present in the images. ### 13. Graph Numbers - No data points from graphs are available as there are no graphs. ### Additional Aspects: - **Ablaufprozesse (Process Flows):** - The images describe the print process indirectly by showing the results of different printing resolutions. - **Prozessbeschreibungen (Process Descriptions):** - The text explains how laser printers work, revealing the process of how toner is transferred to paper. - **Typen Bezeichnung (Type Designations):** - Font sizes are indicated in Image 5 as 1pt, 2pt, 4pt, 6pt, and 8pt, showcasing various type sizes. - **Trend and Interpretation:** - The trend shown is the increase in clarity and detail at higher magnifications, illustrating the importance of printing techniques in text readability. - **Tables:** - There are no tables present in the images. 
This analysis highlights the technical aspects of printing and the visual representations of different printing methods, providing insight into typography and printing technology. #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 79 Context: # Chapter 6 ## Saving Space As computers get ever faster, we ask ever more of them: a higher-resolution film streamed in real time, a faster download, or the same experience on a mobile device over a slow connection as we have at home or in the office over a fast one. When we talk of efficiency, we are concerned with the time taken to do a task, the space required to store data, and knock-on effects such as how often we have to charge our device’s battery. And so we cannot simply say "things are getting faster all the time; we need not worry about efficiency." An important tool for reducing the space information takes up (and so, increasing the speed with which it can be moved around) is **compression**. The idea is to process the information in such a way that it becomes smaller, but also so that it may be *decompressed* — that is to say, the process must be reversible. Imagine we want to send a coffee order. Instead of writing “Four espressos, two double espressos, a cappuccino, and two lattes,” we might write “4E2DC2L.” This relies, of course, on the person to whom we are sending the order knowing how to decompress it. The instructions for decompressing might be longer than the message itself, but if we are sending similar messages each day, we need only share the instructions once. We have reduced the message from 67 characters to 7, making it almost ten times smaller. This sort of compression happens routinely, and it is really just a matter of choosing a better representation for storing a particular kind of information. It tends to be more successful the more uniform the data is.
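The coffee-order example above amounts to a codebook shared once in advance. A minimal Python sketch of such a shared-codebook scheme (the item names and one-letter codes are illustrative):

```python
# A codebook shared once between sender and receiver; after that,
# every order can be sent in its compressed form.
CODEBOOK = {"espresso": "E", "double espresso": "D",
            "cappuccino": "C", "latte": "L"}
DECODE = {v: k for k, v in CODEBOOK.items()}

def compress(order):
    """Turn [(count, item), ...] into a short string like '4E2DC2L'."""
    out = []
    for count, item in order:
        out.append(("" if count == 1 else str(count)) + CODEBOOK[item])
    return "".join(out)

def decompress(msg):
    """Reverse compress(): compression must be invertible."""
    order, count = [], ""
    for ch in msg:
        if ch.isdigit():
            count += ch
        else:
            order.append((int(count) if count else 1, DECODE[ch]))
            count = ""
    return order

order = [(4, "espresso"), (2, "double espresso"),
         (1, "cappuccino"), (2, "latte")]
msg = compress(order)
print(msg)                       # 4E2DC2L
assert decompress(msg) == order  # reversible
```

Note that the codebook itself is longer than any single message, which is exactly the trade-off the text describes: sharing the instructions once pays off over many messages.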
Can we come up with a compression method which works for any data? If not, what about one which works well? #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 148 Context: # Chapter 9. Our Typeface ## Problems **Solutions on page 166.** The following words have been badly spaced. Photocopy or print out this page, cut out the letters, and then paste them onto another page along a straight line, finding an arrangement which is neither too tight nor too loose. 1. P a l a t i n o 2. A V E R S I O N 3. C o n j e c t u r e #################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 76 Context: ``` # CHAPTER 13. FISHER LINEAR DISCRIMINANT ANALYSIS The scatter matrices are: \[ S_B = \sum_c N_c(\mu_c - \bar{x})(\mu_c - \bar{x})^T \tag{13.2} \] \[ S_W = \sum_{c} \sum_{i \in c} (x_i - \mu_c)(x_i - \mu_c)^T \tag{13.3} \] where \[ \mu_c = \frac{1}{N_c} \sum_{i \in c} x_i \tag{13.4} \] \[ \bar{x} = \frac{1}{N} \sum_{i} x_i = \frac{1}{\sum_c N_c} \sum_c N_c \mu_c \tag{13.5} \] and \( N_c \) is the number of cases in class \( c \). Oftentimes you will see that for 2 classes \( S_B \) is defined as \( S_B^* = (\mu_1 - \mu_2)(\mu_1 - \mu_2)^T \). This is the scatter of class 1 with respect to the scatter of class 2 and you can show that \( S_B = \frac{N_1 N_2}{N} S_B^* \), but since it boils down to multiplying the objective with a constant, it makes no difference to the final solution. Why does this objective make sense? Well, it says that a good solution is one where the class-means are well separated, measured relative to the (sum of the) variances of the data assigned to a particular class. This is precisely what we want, because it implies that the gap between the classes is expected to be big. 
It is also interesting to observe that since the total scatter, \[ S_T = \sum_{i} (x_i - \bar{x})(x_i - \bar{x})^T \tag{13.6} \] is given by \( S_T = S_W + S_B \), the objective can be rewritten as: \[ J(w) = \frac{w^T S_T w}{w^T S_W w} - 1 \tag{13.7} \] and hence can be interpreted as maximizing the total scatter of the data while minimizing the within scatter of the classes. An important property to notice about the objective \( J \) is that it is invariant w.r.t. rescalings of the vectors \( w \rightarrow \alpha w \). Hence, we can always choose \( w \) such that the denominator is simply \( w^T S_W w = 1 \), since it is a scalar itself. For this reason, we can transform the problem of maximizing \( J \) into the following constrained ``` #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 177 Context: # Chapter 8 ## 1 | | | | | |---|---|---|---| | | | | | | | ✔ | | | | ✔ | ✔ | ✔ | | | | ✔ | | | ![Camel](link-to-camel-image) | Level of Gray | |---------------| | ![#ffffff](https://via.placeholder.com/15/ffffff/000000?text=+) White | | ![#cccccc](https://via.placeholder.com/15/cccccc/000000?text=+) Light Gray | | ![#999999](https://via.placeholder.com/15/999999/000000?text=+) Gray | | ![#666666](https://via.placeholder.com/15/666666/000000?text=+) Dark Gray | | ![#000000](https://via.placeholder.com/15/000000/000000?text=+) Black | Image Analysis: Here's a comprehensive examination of the visual content provided: ### 1. Localization and Attribution - **Image 1:** The top section with patterns (4 rows of squares). - **Image 2:** A depiction of a camel within a grid (lower section). - **Image 3:** A grayscale color palette at the bottom. ### 2. Object Detection and Classification - **Image 1:** Contains abstract patterns consisting of black and white squares, categorized as geometric patterns. - **Image 2:** A camel, categorized as an animal.
- **Image 3:** A grayscale gradient, categorized as a color palette. ### 3. Scene and Activity Analysis - **Image 1:** No specific activity; it displays geometric patterns that may serve a design or analytical purpose. - **Image 2:** Depicts a camel standing in a stylized, pixelated representation, suggesting an artistic interpretation. - **Image 3:** Serves to provide a range of grayscale colors, likely for reference. ### 4. Text Analysis - **Text:** "Chapter 8" is noted at the top, indicating this is part of a larger document, likely a textbook or solution guide. - **Significance:** It provides context for the images, suggesting they relate to a particular chapter or topic of study. ### 5. Diagram and Chart Analysis - **Image 1 (Patterns):** No axes or scales; patterns may illustrate design concepts or logical groupings. - **Image 2 (Camel):** No axes or scales, focused on an artistic representation. - **Image 3 (Color Palette):** No statistical data, acts as a visual aid. ### 6. Product Analysis - Not applicable, as the images do not depict specific commercial products. ### 7. Anomaly Detection - No significant anomalies detected; all elements contribute to the overall artistic or educational purpose. ### 8. Color Analysis - **Image 1:** Predominantly black and white, creates contrast. - **Image 2:** Grayscale rendition of a camel, emphasizing shadows and structure without color. - **Image 3:** A gradient ranging from black to white, providing a transition of shades. ### 9. Perspective and Composition - **Image 1:** Flat composition focusing on geometric patterns. - **Image 2:** The camel is centered, captured in a flat, pixelated style. - **Image 3:** Simple horizontal layout for the grayscale gradient. ### 10. Contextual Significance - The images relate to a chapter in a textbook, suggesting a focus on visual patterns, representations, or grayscale analysis. ### 11. Metadata Analysis - Metadata is not provided, thus analysis is not possible. ### 12. 
Graph and Trend Analysis - Not applicable as there are no graphs to analyze. ### 13. Graph Numbers - Not applicable as there are no graphs present. ### Additional Aspects - **Ablaufprozesse (Process Flows):** Not depicted in the images. - **Prozessbeschreibungen (Process Descriptions):** Not provided. - **Typen Bezeichnung (Type Designations):** No types or categories defined beyond object classifications. - **Trend and Interpretation:** No trends are illustrated; the content is primarily static. - **Tables:** No tables are included in the visual content. This analysis focuses closely on the specifics of the images while respecting the provided guidelines. #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 164 Context: # Solutions ![Diagram 1](path/to/first_diagram.png) *Figure 1: Initial Construction* Finally, we finish the construction all the way to J, so our diagram looks like this: ![Diagram 2](path/to/second_diagram.png) *Figure 2: Completed Construction* - **Points**: - A - B - C - D - E - F - G - H - I - J ### Steps of Construction 1. Draw the initial curve. 2. Mark points A, B, C, D, E, F, and G. 3. Extend the construction to points H and J. **Note**: Ensure accuracy in the placement of all points for clarity in representation. #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 186 Context: # Chapter 9 1. **Just My Type: A Book About Fonts** Simon Garfield. Published by Profile Books (2011). ISBN 1846683205. 2. **The Geometry of Type: The Anatomy of 100 Essential Typefaces** Stephen Coles. Published by Thames and Hudson Ltd (2013). ISBN 05002411422. 3. **The Elements of Typographic Style** Robert Bringhurst. Published by Hartley & Marks (2004). ISBN 0881792065. # Chapter 10 1. 
**Micro-typographic extensions to the TEX typesetting system** PhD Thesis, Hàn Thế Thành, Faculty of Informatics, Masaryk University, Brno, October 2000. 2. **Digital Typography** Donald E. Knuth. Published by the Center for the Study of Language and Information (Stanford, California) CSLI Lecture Notes, No. 78 (1999). ISBN 1575868104. 3. **Printer's Type in the Twentieth Century: Manufacturing And Design Methods** Richard Southall. Published by Oak Knoll Press (2005). ISBN 1584561552. 4. **History of the Monotype Corporation** Judith Slinn et al. Published by Vanburgh Press (2014). ISBN 0993051005. #################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 6 Context: # PREFACE About 60% correct on 100 categories, the fact that we pull it off seemingly effortlessly serves as a "proof of concept" that it can be done. But there is no doubt in my mind that building truly intelligent machines will involve learning from data. The first reason for the recent successes of machine learning and the growth of the field as a whole is rooted in its multidisciplinary character. Machine learning emerged from AI but quickly incorporated ideas from fields as diverse as statistics, probability, computer science, information theory, convex optimization, control theory, cognitive science, theoretical neuroscience, physics, and more. To give an example, the main conference in this field is called: **advances in neural information processing systems**, referring to information theory and theoretical neuroscience and cognitive science. The second, perhaps more important reason for the growth of machine learning is the exponential growth of both available data and computer power. 
While the field is built on theory and tools developed in statistics, machine learning recognizes that the most exciting progress can be made to leverage the enormous flood of data that is generated each year by satellites, sky observatories, particle accelerators, the human genome project, banks, the stock market, the army, seismic measurements, the internet, video, scanned text, and so on. It is difficult to appreciate the exponential growth of data that our society is generating. To give an example, a modern satellite generates roughly the same amount of data as all previous satellites produced together. This insight has shifted the attention from highly sophisticated modeling techniques on small datasets to more basic analysis on much larger datasets (the latter sometimes called **data-mining**). Hence the emphasis shifted to algorithmic efficiency, and as a result, many machine learning faculty (like myself) can typically be found in computer science departments. To give some examples of recent successes of this approach, one would only have to turn on one computer and perform an internet search. Modern search engines do not run terribly sophisticated algorithms, but they manage to store and sift through almost the entire content of the internet to return sensible search results. There has also been much success in the field of machine translation, not because a new model was invented but because many more translated documents became available. The field of machine learning is multifaceted and expanding fast. To sample a few sub-disciplines: statistical learning, kernel methods, graphical models, artificial neural networks, fuzzy logic, Bayesian methods, and so on. The field also covers many types of learning problems, such as supervised learning, unsupervised learning, semi-supervised learning, active learning, reinforcement learning, etc. I will only cover the most basic approaches in this book from a highly-per.
#################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 69 Context: # Chapter 5. Typing it In ## The Type Writing Machine of C. L. Scholes, 1868 (Side View) An early user of a demonstration machine, James Densmore, bought a 25% interest in the project, despite believing that it needed significant further development to be practical. ![The Type Writing Machine of C. L. Scholes, 1868](path/to/image.jpg) ### Figures - **Fig. 1**: - **Fig. 2**: - **Fig. 3**: Image Analysis: Based on the provided visual content of a patent drawing for an early typewriting machine, here is a comprehensive examination according to your specified aspects: ### 1. **Localization and Attribution:** - **Image Number:** Image 1 - **Location on Page:** The image occupies the central section of the page, with text elements positioned above and below. ### 2. **Object Detection and Classification:** - **Objects Identified:** - Typewriting machine (mechanical components). - Gears and levers. - **Key Features:** - The typewriting machine consists of a main body, an arm (likely for the keys), and gears for operation. - The design is mechanical, indicative of 19th-century technology. ### 3. **Scene and Activity Analysis:** - **Scene Description:** The drawing represents a technical illustration of a typewriting machine, showcasing its components without a real-world environment. - **Main Actors:** - Not applicable since it's a mechanical device rather than a scene with people. - **Actions:** - The image illustrates the engineering design, emphasizing the machine’s functionality. ### 4. **Text Analysis:** - **Text Extracted:** Descriptions and annotations related to the components of the typewriter. - **Significance:** The text provides context about the invention, patent date (June 23, 1868), and mentions key figures involved, enhancing understanding of its historical significance. ### 5. 
**Diagram and Chart Analysis:** - **Diagrams Identified:** - Several labeled figures related to the machine's components (e.g., parts labeled as A, B). - **Axes/Scales:** Not applicable, as this is not a chart but a technical drawing. - **Insights:** The technical specifications serve to explain how the typewriter was intended to function. ### 6. **Product Analysis:** - **Product Description:** - An early typewriting machine featuring gears and a mechanical typing mechanism. - **Main Features:** - Noteworthy features include the mechanical drive system and overall design. - **Color:** - The drawing is monochrome, typical for patent illustrations. ### 7. **Anomaly Detection:** - **Anomalies:** - The lack of color and shading may appear unusual by modern standards but is typical for historical patents. - **Impact:** This style indicates a focus on technical detail rather than aesthetic appeal. ### 8. **Color Analysis:** - **Dominant Colors:** - The image is in black and white. - **Impact on Perception:** - The monochromatic scheme emphasizes clarity and detail over vibrancy, suitable for technical documentation. ### 9. **Perspective and Composition:** - **Perspective:** - The illustration employs a side view to highlight the mechanics. - **Composition:** - Components are arranged to show interactions and functionality, with clear labels directing attention. ### 10. **Contextual Significance:** - **Overall Contribution:** The image provides historical context to advancements in technology, particularly in typing mechanisms and their evolution. ### 11. **Metadata Analysis:** - **Metadata:** Not available for analysis, so no further insights can be drawn. ### 12. **Graph and Trend Analysis:** - **Graphs:** Not present in this image, as it is a technical drawing rather than a data representation. ### 13. **Graph Numbers:** - **Data Points:** Not applicable due to absence of graphs. 
### **Additional Aspects:** - **Ablaufprozesse (Process Flows):** - The image does not depict processes; it focuses on a static mechanical design. - **Prozessbeschreibungen (Process Descriptions):** - No dynamic processes are illustrated; thus, detailed descriptions are not available. - **Typen Bezeichnung (Type Designations):** - Components are annotated but do not specify types in a categorical sense. - **Trend and Interpretation:** - The design reflects early industrial innovation, showcasing a trend towards mechanization in office tools. - **Tables:** - No tables are present in the image. This analysis captures the essential aspects of the visual content based on your guidelines. #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 87 Context: # Chapter 6. Saving Space problem of having to gather frequency data for the whole page, a pre-prepared master code table is used, upon which everyone agrees. The table has been built by gathering frequencies from thousands of text documents in several languages and typefaces, and then collating the frequencies of the various black and white runs. Here is the table of codes for black and white runs of lengths 0 to 63. (We need length 0 because a line is always assumed to begin white, and a zero-length white run is required if the line actually begins black.) 
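Decoding such a stream of codes is repeated prefix matching: read bits until they form a code for the current colour, emit that run length, then flip the colour. A minimal Python sketch, using only a handful of short codes consistent with the standard fax terminating codes (treat the specific dictionaries as illustrative; in practice the full agreed master table for runs 0 to 63 and beyond is used):

```python
# Terminating codes for a few short runs, in the spirit of the master table
# (a small illustrative subset, not the complete agreed table).
WHITE_CODES = {"0111": 2, "1000": 3, "1011": 4, "1100": 5}
BLACK_CODES = {"11": 2, "10": 3, "011": 4, "0011": 5}

def decode_runs(bits):
    """Turn a bit string back into (colour, run length) pairs.

    A scan line is assumed to begin white, and the colour flips after
    every decoded run."""
    tables = {"white": WHITE_CODES, "black": BLACK_CODES}
    runs, colour, buf = [], "white", ""
    for b in bits:
        buf += b
        if buf in tables[colour]:           # prefix property: first hit wins
            runs.append((colour, tables[colour][buf]))
            colour = "black" if colour == "white" else "white"
            buf = ""
    return runs

# "1000" (white 3) + "11" (black 2) + "1011" (white 4)
runs = decode_runs("1000111011")
```

Because no code within a colour is a prefix of another, the first match is always the correct one, so no lookahead is needed.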
| Run | White | Black | Run | White | Black | |-----|-------------|--------------------|-----|-------------|--------------------| | 0 | 01100101 | 00000110101101 | 32 | 00110111 | 00000101010101 | | 1 | 00001101 | 010 | 33 | 00101000 | 00000110110111 | | 2 | 0111 | 11 | 34 | 00010100 | 00100110101000 | | 3 | 1000 | 10 | 35 | 00010010 | 00010010010101 | | 4 | 1011 | 011 | 36 | 00011001 | 00001111001001 | | 5 | 1001 | 37 | 37 | 00001101 | 00000110010100 | | 6 | 1110 | 0010 | 38 | 00010111 | 00100100110010 | | 7 | 0000111 | 39 | 39 | 00100010 | 00011001010101 | | 8 | 1011 | 0001 | 40 | 00101010 | 00010011010000 | | 9 | 10100 | 00000 | 41 | 00101001 | 00100110100100 | | 10 | 011000 | 00100 | 42 | 00110101 | 00011001010101 | | 11 | 00000100 | 00101 | 43 | 00000111 | 00001100000000 | | 12 | 000010 | 0000001 | 44 | 01100010 | 00111000010101 | | 13 | 00000000 | 00 | 45 | 01100100 | 01010001010101 | | 14 | 110010 | 00001 | 46 | 00010000 | 00010010101000 | | 15 | 1010010 | 00000 | 47 | 01100011 | 00001000001100 | | 16 | 01010100 | 1000 | 48 | 00000111 | 00010010010101 | | 17 | 110000 | 01000 | 49 | 01100011 | 00110100001100 | | 18 | 100000 | 11001 | 50 | 00011110 | 01100001000010 | | 19 | 010000 | 10010 | 51 | 00010000 | 00010100010101 | | 20 | 110100 | 0011 | 52 | 10111100 | 01111000010101 | | 21 | 01110100 | 10001 | 53 | 01001011 | 00010010010010 | | 22 | 00000101 | 00101 | 54 | 00100001 | 01100101001000 | | 23 | 01001101 | 01101 | 55 | 00010001 | 01100001000011 | | 24 | 01100000 | 00100100 | 56 | 00011101 | 01010011000010 | | 25 | 00100011 | 01010101 | 57 | 00011110 | 01000011100101 | | 26 | 00100110000 | 00100 | 58 | 00000101 | 00001001001101 | | 27 | 01010101 | 0010100 | 59 | 01011010 | 00010011000100 | #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 153 Context: # Chapter 10. 
Words to Paragraphs Those words are in the same language – we require a hyphenation dictionary for each language appearing in the document. For example, in the typesetting system used for this book, there are 8527 rules, and only the exceptional cases must be listed explicitly:

| uni-ver-si-ties | man-u-scripts  |
|-----------------|----------------|
| re-ci-pro-city  | how-ever       |
| through-out     | man-u-script   |
| some-thing      |                |

Thus, we have assumed that decisions on hyphenation are made once we reach the end of a line and find we are about to overrun it. If we are, we alter the spacing between words, or hyphenate, or some combination of the two. And so, at most, we need to re-typeset the current line. Advanced line breaking algorithms use a more complicated approach, seeking to optimize the result for a whole paragraph. (We have gone line-by-line, making the best line we can for the first line, then the second, etc.) It may turn out that an awkward situation later in the paragraph is prevented by making a slightly less-than-optimal decision in an earlier line, such as squeezing in an extra word or hyphenating in a good position when not strictly required. We can assign “demerits” to certain situations (e.g., hyphenation, too much or too little spacing between words, and so on) and optimize the outcome for the least sum of such demerits. These sorts of optimization algorithms can be quite slow for large paragraphs, taking an amount of time proportional to the square of the number of lines in the paragraph. For normal texts, this is not a problem, since we are unlikely to have more than a few tens of lines in a single paragraph. We have now dealt with splitting a text into lines and paragraphs, but similar problems occur when it comes to fitting those paragraphs onto a page. There are two worrying situations: when the last line of a paragraph is “widowed” at the top of the next page, and when the first line of a paragraph is left “orphaned” at the bottom of a page.
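The whole-paragraph optimization can be sketched with a small dynamic program. This is only a toy (the function name, the single-space model, and using squared leftover space as the sole demerit are simplifications invented here, not the book's actual algorithm):

```python
def break_paragraph(words, max_width):
    """Choose line breaks minimizing total demerits over the whole paragraph."""
    n = len(words)
    INF = float("inf")

    def line_cost(i, j):
        # Demerits for putting words[i:j] on one line (single spaces between).
        length = sum(len(w) for w in words[i:j]) + (j - i - 1)
        if length > max_width:
            return INF                  # never allow an overfull line
        if j == n:
            return 0.0                  # the last line need not be stretched
        return float((max_width - length) ** 2)

    # best[j] = least total demerits for the first j words.
    best = [0.0] + [INF] * n
    back = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + line_cost(i, j)
            if c < best[j]:
                best[j], back[j] = c, i

    # Recover the chosen lines by walking the back-pointers.
    lines, j = [], n
    while j > 0:
        lines.append(" ".join(words[back[j]:j]))
        j = back[j]
    return lines[::-1]
```

On `"aaa bb cc ddddd"` with a width of 6, a greedy breaker would fill the first line completely (`"aaa bb"`) and leave a very loose `"cc"` line; the dynamic program instead accepts a slightly looser first line for a smaller total demerit, giving `["aaa", "bb cc", "ddddd"]`.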
It is difficult to deal with these problems without upsetting the balance of the whole two-page spread, but it can be done by slightly increasing or decreasing line spacing on one side. Another option, of course, is to edit the text, and you may be surprised to learn how often that happens. Further small adjustments and improvements to reduce the amount of hyphenation can be introduced using so-called microtypography. This involves stretching or shrinking the individual characters. #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 67 Context: # Chapter 5 ## Typing it In It is easy to take for granted the ability to enter, modify, and correct large amounts of text accurately and quickly, since most of us have some proficiency at it these days. But remember that, in the past, huge numbers of young people would go to secretarial school before they could work as typists or data entry clerks. The present generation may use computer or tablet keyboards from childhood, but those of us who remember having to learn to type as teenagers recall how hard it seemed at the time. The development of the keyboard began before the computer, of course, for typewriters and similar equipment. And so, when computers were first developed, the best methods for text input were already known, requiring only slight alteration. In this chapter, we will look at the development of typing devices from the typewriter to the modern computer. We will also see methods for typing languages such as Chinese, where the number of characters might vastly outnumber the keys we could possibly place on a keyboard. As early as the eighteenth century, work was beginning on devices for "automatic writing", but what we recognize today as a typewriter can be traced back to the first commercially successful examples, the work of the Pennsylvania-born newspaper editor Christopher Latham Scholes (1819–1890).
His company was eventually sold to Remington & Sons, under whose brand typewriters were sold until the 1960s. The following pages show images from Scholes's early and more mature patent applications. #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 53 Context: # Chapter 3. Storing Words [Unicode code chart, columns 1000–1022, rows 0–E: the Linear A glyphs did not survive extraction.] **Linear A:** An undeciphered writing system of ancient Greece, thought to have been in use 2500–1400 BC. It was added to the Unicode specification in 2014. We do not need to understand a writing system to reproduce it electronically! Most specimens were found on Crete, though they have been found as far apart as Bulgaria and Israel. #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 39 Context: # Chapter 2. Letter Forms Our line crosses two anti-clockwise lines and is therefore non-zero (it has a count of 0 − 1 − 1 = −2). We can apply this rule to our ⭕ example too, but there is a problem: the inner hole is filled too: ![Circle Example](path/to/circle_image) This can be fixed by reversing the direction of either of the two paths.
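The non-zero rule itself is easy to state in code: cast a ray from the point, add up signed crossings with the path's edges, and fill the point if the total is non-zero. A small sketch for polygonal paths only (sign conventions vary; the example in the text counts anti-clockwise crossings negatively, while this sketch counts upward crossings positively, and only zero versus non-zero matters):

```python
def winding_number(point, polygon):
    """Signed crossing count of a rightward ray from `point` with `polygon`."""
    px, py = point
    wn = 0
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 <= py) != (y2 <= py):            # edge spans the ray's height
            t = (py - y1) / (y2 - y1)           # where the edge meets y = py
            if x1 + t * (x2 - x1) > px:         # ...to the right of the point
                wn += 1 if y2 > y1 else -1      # upward vs downward crossing
    return wn

# One anti-clockwise square: inside is non-zero, outside is zero.
square = [(0, 0), (2, 0), (2, 2), (0, 2)]

# The shape-with-a-hole case: an anti-clockwise outline with the inner
# path reversed (clockwise). Summing the counts over both paths leaves
# the hole unfilled while the ring remains filled.
outer = [(0, 0), (4, 0), (4, 4), (0, 4)]        # anti-clockwise
inner = [(1, 1), (1, 3), (3, 3), (3, 1)]        # clockwise (reversed)
in_hole = winding_number((2, 2), outer) + winding_number((2, 2), inner)
in_ring = winding_number((0.5, 2), outer) + winding_number((0.5, 2), inner)
```

With both paths summed, a point in the hole accumulates one positive and one negative crossing and cancels to zero, while a point in the ring keeps a non-zero count.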
We now have a method which works for both cases: ![Reversed Paths Example](path/to/reversed_paths_image) Now that we have some understanding of how to draw lines and curves onto paper or the screen, we will turn to the input, storage, and manipulation of text itself, before returning to the visual layout of the page later in the book. #################### File: A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf Page: 83 Context: # 14.1 Kernel CCA As usual, the starting point is to map the data-cases to feature vectors \( \Phi(x_i) \) and \( \Psi(y_i) \). When the dimensionality of the space is larger than the number of data-cases in the training set, the solution must lie in the span of the data-cases, i.e. \[ a = \sum_{i} \alpha_i \Phi(x_i) \quad b = \sum_{i} \beta_i \Psi(y_i) \tag{14.7} \] Using this equation in the Lagrangian, we get: \[ \mathcal{L} = \alpha^T K_x K_y \beta - \frac{1}{2} \lambda \left( \alpha^T K_x^2 \alpha - N \right) - \frac{1}{2} \lambda \left( \beta^T K_y^2 \beta - N \right) \tag{14.8} \] where \( \alpha \) and \( \beta \) are vectors in an \( N \)-dimensional space, unlike \( a \) and \( b \), which live in the \( D \)-dimensional feature space, and \( K_x \) is the Gram matrix with entries \( [K_x]_{ij} = \Phi(x_i)^T \Phi(x_j) \), and similarly for \( K_y \). Taking derivatives w.r.t. \( \alpha \) and \( \beta \) we find: \[ K_x K_y \beta = \lambda K_x^2 \alpha \tag{14.9} \] \[ K_y K_x \alpha = \lambda K_y^2 \beta \tag{14.10} \] Let’s try to solve these equations by assuming that \( K_x \) is full rank (which is typically the case). We get \( \alpha = \lambda^{-1} K_x^{-1} K_y \beta \) and hence \( K_y^2 \beta = \lambda^2 K_y^2 \beta \), which always has a solution with \( \lambda = 1 \). By recalling that, \[ \rho = \frac{1}{N} a^T S_{xy} b = \frac{1}{N} \lambda\, a^T S_{xx} a = \lambda \tag{14.11} \] we observe that this represents the solution with maximal correlation and hence the preferred one.
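The degenerate \( \lambda = 1 \) solution is easy to reproduce numerically. The sketch below is not from the book: the RBF kernel, the data sizes, and the function names are invented for illustration, and the regularizer \( \eta \) anticipates the diagonal-term fix discussed in the text. It solves the stationarity conditions as one generalized symmetric eigenproblem:

```python
import numpy as np

def rbf_gram(z, gamma=0.5):
    # Gram matrix [K]_ij = exp(-gamma * ||z_i - z_j||^2)
    sq = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kcca_top_correlation(Kx, Ky, eta):
    # Stationarity conditions stacked as a generalized eigenproblem:
    #   [0, KxKy; KyKx, 0] v = lambda * blockdiag(Kx^2 + eta I, Ky^2 + eta I) v
    n = Kx.shape[0]
    Z = np.zeros((n, n))
    A = np.block([[Z, Kx @ Ky], [Ky @ Kx, Z]])
    B = np.block([[Kx @ Kx + eta * np.eye(n), Z],
                  [Z, Ky @ Ky + eta * np.eye(n)]])
    s, U = np.linalg.eigh(B)                    # B is positive definite
    B_inv_half = U @ np.diag(1.0 / np.sqrt(s)) @ U.T
    return float(np.linalg.eigvalsh(B_inv_half @ A @ B_inv_half)[-1])

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 2))                     # two *unrelated* data sets
y = rng.normal(size=(8, 2))
Kx, Ky = rbf_gram(x), rbf_gram(y)

lam_tiny = kcca_top_correlation(Kx, Ky, eta=1e-9)   # close to 1 regardless
lam_reg = kcca_top_correlation(Kx, Ky, eta=10.0)    # regularization tames it
```

Even though `x` and `y` share no structure at all, a near-zero \( \eta \) reports almost perfect correlation, which is exactly the over-fitting the text warns about; a substantial \( \eta \) pulls the reported correlation well below one.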
This is a typical case of over-fitting and emphasizes the need to regularize in kernel methods. This can be done by adding a diagonal term to the constraints in the Lagrangian (or equivalently to the denominator of the original objective), leading to the Lagrangian, \[ \mathcal{L} = \alpha^T K_x K_y \beta - \frac{1}{2} \lambda \left( \alpha^T K_x^2 \alpha + \eta \| \alpha \|^2 - N \right) - \frac{1}{2} \lambda \left( \beta^T K_y^2 \beta + \eta \| \beta \|^2 - N \right) \tag{14.12} \] One can see that this acts as a quadratic penalty on the norm of \( \alpha \) and \( \beta \). The resulting equations are, \[ K_x K_y \beta = \lambda (K_x^2 + \eta I) \alpha \tag{14.13} \] \[ K_y K_x \alpha = \lambda (K_y^2 + \eta I) \beta \tag{14.14} \] #################### File: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf Page: 173 Context: # Solutions

| Binary | Value | Color | Binary | Value | Color |
|---------|-------|--------|---------|-------|--------|
| 11 | 2 | Black | 0001000 | 10 | Black |
| 1011 | 8 | White | 1000 | 3 | White |
| 11 | 2 | Black | 000100 | 9 | Black |
| 0111 | 2 | White | 0111 | 2 | White |
| 0001000 | 9 | Black | 0000111 | 1 | White |
| 0111 | 2 | White | 00011 | 7 | Black |
| 000100 | 9 | Black | 1111 | 7 | White |
| 1000 | 3 | White | 0010 | 6 | Black |
| 11 | 2 | Black | 1110 | 6 | White |
| 1011 | 8 | White | 0010 | 6 | Black |
| 11 | 2 | Black | 1011 | 4 | White |
| 0111 | 2 | White | 0001010 | 37 | White |

So we have the image: ``` BOB ``` ## Chapter 7 1. a) ``` + 1 + 1 ``` \[ (1 + 1) + 1 \] \[ \Rightarrow 2 + 1 \] \[ \Rightarrow 3 \] Image Analysis: I'm unable to analyze the visual content directly. However, if you provide a description or share specific queries regarding the content or its elements, I'd be happy to help in any way I can! ########## """QUERY: Please summarize the whole context. It is important that you include a summary for each file.
All files should be included, so please make sure to go through the entire context""" Consider the chat history for relevant information. Use all information included. If query is already asked in the history double check the correctness of your answer and maybe correct your previous mistake. Use as much tokens as needed but at the same time be as efficient as possible. If you find information separated by a | in the context, it is a table formatted in Markdown - the whole context is formatted as md structure. Final Files Sources: A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 157, our-upcebu-edu-ph-58438.txt - Page 1, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 106, test.ppt - Page 2, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 54, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 1, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 187, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 82, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 52, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 93, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 8, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 28, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 7, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 167, 
test.pptx - Page 1, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 166, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 25, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 40, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 18, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 180, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 35, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 178, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 81, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 189, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 75, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 31, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 3, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 117, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 90, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 80, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 22, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 62, 
A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 154, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 185, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 93, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 10, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 197, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 35, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 24, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 95, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 60, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 14, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 91, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 15, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 174, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 191, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 165, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 23, 
A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 5, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 135, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 45, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 46, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 142, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 162, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 70, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 11, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 64, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 73, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 171, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 92, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 4, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 74, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 4, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 149, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 190, 
A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 123, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 79, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 11, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 73, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 88, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 109, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 27, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 77, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 26, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 65, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 114, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 125, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 188, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 9, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 100, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 29, 
A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 20, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 16, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 58, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 103, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 7, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 49, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 44, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 121, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 22, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 86, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 168, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 61, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 18, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 79, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 148, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 76, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 177, 
A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 164, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 186, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 6, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 69, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 87, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 153, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 67, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 53, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 39, A%20First%20Encounter%20with%20Machine%20Learning%20-%20Max%20Welling%20%28PDF%29.pdf - Page 83, A%20MACHINE%20MADE%20THIS%20BOOK%20ten%20sketches%20of%20computer%20science%20-%20JOHN%20WHITINGTON%20%28PDF%29.pdf - Page 173 ================================================== **Elapsed Time: 7.42 seconds** ================================================== FINAL ANSWER Answer: ================================================== **Elapsed Time: 0.00 seconds** ==================================================