Friday, September 6, 2019
Moral absolutism Essay Example for Free
Moral absolutism Essay When we speak of "Morality" we think of the difference between right and wrong, the difference between good and evil. We use morality to justify our actions and decisions. More often than not, people impose their morality on others and expect them to act in the way they see fit. They believe that the idea of right and wrong is universal. In her essay "On Morality", Didion contradicts this theory and believes that everyone can have a different idea of morality based on their own perception. To make her point, Didion uses the examples of Klaus Fuchs and Alfred Rosenberg. Fuchs was a British traitor who leaked nuclear secrets to the Soviets, and Rosenberg was the Nazi administrator of Eastern Europe, where the Germans committed their most heinous and most murderous acts during World War II. Both of them claimed that what they did was morally appropriate. She then goes on to say that even Jesus can be invoked to justify one's actions in the name of morality. The juxtaposition of these ideas affirms Didion's theory that the conviction of morality is largely based on perspective. It also helps prove that people use morality to justify almost anything. Osama bin Laden believed that it was morally right to take the lives of thousands of innocent civilians in the name of religion. President Snow, along with the Capitol, in The Hunger Games saw it fit to throw 24 teenagers onto a battlefield and let them fight until only one remained. Morality does not seem like a tool to distinguish right from wrong, but a method to keep a clean conscience, irrespective of whether one's acts are good or bad. Didion also says, "For better or for worse, we are what we learned as children." (158) This shows that our ideas of good and bad, and the so-called "morality," are part of what we have learnt growing up. A lot of people might find it pointless to stay with a corpse on a highway. But to Didion, it is the moral thing to do.
We do not leave behind our dead. Friedrich Nietzsche said, "Fear is the mother of morality." Didion maintains that morality might differ from person to person. In my opinion, many factors influence the decisions we make, and we then blame those decisions on morality. "The right thing" is too abstract to be universal. Didion discusses acts of cannibalism and the vestigial taboo that no one should eat their own blood kin. This might seem appalling to some while being unremarkable within another culture. Didion says that morality has "the most potentially mendacious meaning." (159) And I couldn't agree more. There is a very thin line between right and wrong, and morality is what shows one where to draw it. But the basis of that line is so ambiguous that people end up using morality to cover up their actions. The idea Didion presents is that humans are not equipped to distinguish between the good and the bad. We think that all actions are sound as long as they don't hurt another person. But then we see people like Adolf Hitler. The man murdered millions of people. Yet he had a host of supporters who helped him with these inhumane acts. And he did what he did in the name of morality, in the name of 'respect for the greater race.' The central idea of the essay is that morality depends largely on perception. What one finds wrong may not necessarily be seen as inappropriate by another. "I followed my own conscience." "I did what I thought was right." Didion asks the reader how many madmen have said this and meant it. Didion doesn't believe that these men merely shelter themselves under the illusion of morality; they actually believe their actions are moral and justified. Maybe we ourselves have said it before and been wrong. Our conscience isn't always the best judge of things. But the concept of morality makes it okay to be impulsive and do what we think is correct in the moment.
Our moral logic is not applied consistently. One might not kill people on a daily basis, but one might find it moral to do so someday and go ahead with it. Does this make them immoral? Is the act of killing immoral? What if the victim is a killer? The answers to questions of morality are not black or white. There could be many different instances where individuals take different stands on an issue, all of which they might believe to be morally correct. So the question is, who decides what is moral and what is not? What gives them the power to do so? Should the morality of one person be forced on another? Clearly, universal standards of right and wrong do not exist. The evidence Didion provides, as well as the instances we see around the world, proves this. A lot of people do not agree with Didion's idea of differing morality. The people who adhere to a supposedly universal moral code can delude themselves into thinking that people who do not follow that code are less humane. People need to stop fretting over moral absolutes, and stop letting an imposed morality run their lives and affect every decision they make, if the future is to be kept safe from oppression and terrorism.
Thursday, September 5, 2019
Analysis of Honeynets and Honeypots for Security
Analysis of Honeynets and Honeypots for Security

Chapter 1 Introduction

A honeynet is a kind of network security tool. Most of the network security tools we have are passive in nature, for example firewalls and IDSs. They have a dynamic database of available rules and signatures and they operate on these rules, which is why anomaly detection is limited to the set of available rules. Any activity that is not in alignment with the given rules and signatures goes under the radar undetected. Honeypots by design allow you to take the initiative and trap those bad guys (hackers). Such a system has no production value and no authorized activity, so any interaction with the honeypot is considered malicious in intent. A combination of honeypots forms a honeynet. Honeypots and honeynets do not by themselves solve the security problem, but they provide information and knowledge that help the system administrator enhance the overall security of his network and systems. This knowledge can act as an intrusion detection system and be used as input for early warning systems. Over the years researchers have successfully isolated and identified a variety of worms and exploits using honeypots and honeynets. Honeynets extend the concept of a single honeypot to a highly controlled network of honeypots. A honeynet is a specialized network architecture configured to achieve Data Control, Data Capture and Data Collection. This architecture builds a controlled network in which one can control and monitor all kinds of system and network activity.

1.1 Information Security

Information security is the protection of all sensitive information, electronic or otherwise, which is owned by an individual or an organization. It deals with the preservation of the confidentiality, integrity and availability of information. It protects the information of organizations from all kinds of threats to ensure business continuity, minimize business damage and maximize the return on investment and business opportunities.
Information stored is highly confidential and not for public viewing. Through information security we protect its availability, privacy and integrity. Information is one of the most important assets of financial institutions. Fortification of information assets is essential to establish and maintain trust between the financial institution and its customers, maintain compliance with the law, and protect the reputation of the institution. Timely and reliable information is compulsory to process transactions and support financial institution and customer decisions. A financial institution's earnings and capital can be adversely affected if information becomes known to unauthorized parties, is distorted, or is not available when it is needed [15].

1.2 Network Security

Network security is the protection of networks and their services from any unauthorized access. It includes the confidentiality and integrity of all data passing through the network. It also includes the security of all network devices and all information assets connected to a network, as well as protection against all kinds of known and unknown attacks. The ITU-T Security Architecture for Open Systems Interconnection (OSI) document X.800 and RFC 2828 are the standard documents defining security services. X.800 divides the security services into 5 categories and 14 specific services, which can be summarized as follows.

Table 1.1 OSI X.800 Summary [8]

"1. AUTHENTICATION: The assurance that the communicating entity is the one that it claims to be.
   Peer Entity Authentication: Used in association with a logical connection to provide confidence in the identity of the entities connected.
   Data Origin Authentication: In a connectionless transfer, provides assurance that the source of received data is as claimed.
2. ACCESS CONTROL: The prevention of unauthorized use of a resource (i.e., this service controls who can have access to a resource, under what conditions access can occur, and what those accessing the resource are allowed to do).
3.
DATA CONFIDENTIALITY: The protection of data from unauthorized disclosure.
   Connection Confidentiality: The protection of all user data on a connection.
   Connectionless Confidentiality: The protection of all user data in a single data block.
   Selective-Field Confidentiality: The confidentiality of selected fields within the user data on a connection or in a single data block.
   Traffic Flow Confidentiality: The protection of the information that might be derived from observation of traffic flows.
4. DATA INTEGRITY: The assurance that data received are exactly as sent by an authorized entity (i.e., contain no modification, insertion, deletion, or replay).
   Connection Integrity with Recovery: Provides for the integrity of all user data on a connection and detects any modification, insertion, deletion, or replay of any data within an entire data sequence, with recovery attempted.
   Connection Integrity without Recovery: As above, but provides only detection without recovery.
   Selective-Field Connection Integrity: Provides for the integrity of selected fields within the user data of a data block transferred over a connection and takes the form of determination of whether the selected fields have been modified, inserted, deleted, or replayed.
   Connectionless Integrity: Provides for the integrity of a single connectionless data block and may take the form of detection of data modification. Additionally, a limited form of replay detection may be provided.
   Selective-Field Connectionless Integrity: Provides for the integrity of selected fields within a single connectionless data block; takes the form of determination of whether the selected fields have been modified.
5. NONREPUDIATION: Provides protection against denial by one of the entities involved in a communication of having participated in all or part of the communication.
   Nonrepudiation, Origin: Proof that the message was sent by the specified party.
Nonrepudiation, Destination: Proof that the message was received by the specified party." [1], [8], [9]

1.3 The Security Problem

System security personnel are fighting an unending battle to secure their digital assets against ever-increasing attacks; the variety of attacks and their intensity grow day by day. Most attacks are detected only after the exploitation, so there should be awareness of the threats and vulnerabilities that exist in the Internet today. First we have to understand that no perfectly secure machine or network exists: the closest we can get to an absolutely secure machine is to unplug the network cable and power supply and put the machine into a safe. Unfortunately, it is not useful in that state. We cannot achieve perfect security and perfect access at the same time. We can only add more doors; we cannot put up walls instead of doors. In the field of security we need to find the vulnerabilities and exploits before they affect us. Honeypots and honeynets provide a valuable tool for collecting information about the behavior of attackers in order to design and implement better defenses. In the field of security it is important to note that we cannot simply ask, "What is the best type of firewall?" Absolute security and absolute access are the two chief poles, and they are inverse to each other: if we increase security, access decreases. There should be a balance between the two, so that access is given without compromising security. If we compare this to our daily lives we observe not much difference. We are continuously making decisions regarding what risks we are ready to take. When we step out of our homes we are taking a risk. As we get into a car and drive to our workplace there is a risk associated with that too. There is a possibility that something might happen on the highway and make us part of an accident.
When we fly on an airplane we are willing to undergo a level of risk that is at par with the heavy amount we are paying for this convenience. It is observed that people think differently about what an acceptable risk would be, though in the majority of cases they do not go far beyond it. For instance, if I am sitting upstairs in my room and have to go to work, I won't take a jump straight out of the window. It might be a faster way, but the danger of doing so and the injury I would face are much greater than the convenience. It is vital for every organization to decide where it needs to place itself between the two opposite poles of total security and total access. A policy must articulate this position and then further explain how it will be enforced, with which practices and ways. Everything that is done in the name of security must strictly adhere to the policy.

1.4 Types of Hacker

Hackers are generally divided into two major categories.

1.4.1 Black Hats

Black hat hackers are the biggest threat, both internal and external, to the IT infrastructure of any organization, as they consistently challenge the security of applications and services. Also called crackers, these are the persons who specialize in unauthorized infiltration. There can be a variety of reasons for this type of penetration: profit, enjoyment, political motivation, or a social cause. Such infiltration often involves modification or destruction of data.

1.4.2 White Hats

White hat hackers are similar to black hat hackers, with one important difference: white hat hackers work without any criminal intention. Companies all around the world hire or contract these kinds of people to test their systems and software. They check how secure the systems are and point out any faults they find.
These hackers, also known as ethical hackers, are security experts who specialize in penetration testing. Such people are also known as tiger teams. These experts may use different types of methods and techniques to carry out their tests, including social engineering tactics, use of hacking tools, and attempts to bypass security to gain entry into protected areas, but they do this only to find weaknesses in the system [8].

1.5 Types of Attacks

Attacks can be categorized under two major categories: active attacks and passive attacks.

1.5.1 Active Attacks

Active attacks involve the attacker taking the offensive and directing malicious packets towards the victim in order to gain illegitimate access to the target machine, such as by performing exhaustive user/password combinations in a brute-force attack, or by exploiting remote and local vulnerabilities in services and applications, termed holes. Other types of active attacks include:

Masquerading attack: The attacker pretends to be a different entity, using the fake identity of some legitimate user.

Replay attack: The attacker captures data and retransmits it to produce an unauthorized effect. It is a kind of man-in-the-middle attack.

Modification attack: The integrity of the message is compromised; a message or file is modified by the attacker to achieve his malicious goals.

Denial of service (DoS) attack: The attacker attempts to prevent legitimate users from accessing information or services. By targeting your computer and its network connection, or the computers and network of the sites you are trying to use, an attacker may be able to prevent you from accessing email, websites, online accounts (banking, etc.), or other services that rely on the affected computer.

TCP and ICMP scanning is also a form of active attack, in which attackers exploit the way protocols are designed to respond, e.g.
the ping of death, SYN attacks, etc. In all types of active attacks the attacker creates noise over the network and transmits packets, making it possible to detect and trace him. Depending on skill level, it has been observed that skillful attackers usually attack their victims from proxy hosts that they have victimized earlier.

1.5.2 Passive Attacks

Passive attacks involve the attacker being able to intercept, collect and monitor any transmission sent by the victim, thus eavesdropping on the victim and in the process listening in to the target's communications. Passive attacks are very specialized types of attacks aimed at obtaining information that is being transmitted over secure and insecure channels. Since the attacker creates no noise, or minimal noise, on the network, it is very difficult to detect and identify him. Passive attacks can be divided into two main types: the release of message content and traffic analysis.

Release of message content: This involves keeping message contents from getting into the hands of unauthorized users during transmission. The message can be as basic as one delivered via a telephone conversation, instant-messenger chat, email or a file.

Traffic analysis: This covers the techniques attackers use to learn about the actual message from intercepted, encrypted traffic. Encryption provides a means to mask the contents of a message using mathematical formulas and thus make it unreadable; the original message can only be retrieved by a reverse process called decryption, and the cryptographic system is often based on a key or a password supplied by the user. With traffic analysis the attacker can passively observe patterns, trends, frequencies and lengths of messages to guess the key or retrieve the original message through various cryptanalysis techniques.
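To make the distinction concrete, the metadata-only nature of traffic analysis can be sketched in a few lines of Python. The capture below is invented data and the function is an illustrative sketch, not a real analysis tool: it never inspects payloads, yet still reveals who talks to whom, how often, and with what message lengths.

```python
from collections import Counter

def traffic_profile(packets):
    """Summarize intercepted traffic by metadata only.
    'packets' is a list of (src, dst, length) tuples; no payload
    is ever inspected, mirroring a purely passive observer."""
    talks = Counter((src, dst) for src, dst, _ in packets)
    sizes = [length for _, _, length in packets]
    return {
        "busiest_pair": talks.most_common(1)[0],   # ((src, dst), count)
        "mean_length": sum(sizes) / len(sizes),
    }

# Invented capture: even with encrypted payloads, the pattern stands out.
capture = [
    ("10.0.0.5", "mail.example.net", 512),
    ("10.0.0.5", "mail.example.net", 498),
    ("10.0.0.5", "mail.example.net", 530),
    ("10.0.0.9", "web.example.net", 1400),
]
profile = traffic_profile(capture)
```

A real passive attacker would feed this kind of summary into further cryptanalysis; the sketch only shows how much structure survives encryption.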
Chapter 2 Honeypot and Honeynet

2.1 Honeypot

A honeypot is a system, or part of a system, deliberately made to invite an intruder or system cracker. Honeypots have additional functionality and intrusion detection systems built into them for the collection of valuable information on the intruders. The era of virtualization had its impact on security and honeypots, and the community responded, marked by the fine efforts of Niels Provos (creator of honeyd) and Thorsten Holz in their book "Virtual Honeypots: From Botnet Tracking to Intrusion Detection" (2007).

2.2 Types of Honeypots

Honeypots can be categorized into two main types, based on level of interaction and on deployment.

2.2.1 Level of interaction

The level of interaction determines the amount of functionality a honeypot provides.

2.2.1.1 Low-interaction Honeypot

Low-interaction honeypots are limited in the extent of their interaction with the attacker. They are generally emulators of services and operating systems.

2.2.1.2 High-interaction Honeypot

High-interaction honeypots are complex solutions that involve the deployment of real operating systems and applications. They capture an extensive amount of information by allowing the attacker to interact with real systems.

2.2.2 Deployment

Based on deployment, honeypots may be classified as production honeypots or research honeypots.

2.2.2.1 Production Honeypots

Production honeypots are placed within production networks for the purpose of detection. They extend the capabilities of intrusion detection systems. These honeypots are developed and configured to integrate with the organization's infrastructure and scope. They are usually implemented as low-interaction honeypots, but the implementation may vary depending on the available funding and the expertise required by the organization. Production honeypots can be placed within the application and authentication server subnets and can identify any attacks directed towards those subnets.
Thus they can be used to identify both internal and external threats for an organization. These honeypots can also be used to detect malware propagating in the network via zero-day exploits. Since IDS detection is based on database signatures, an IDS fails to detect exploits that are not defined in its database. This is where honeypots outshine intrusion detection systems. They aid system and network administrators by providing network situational awareness. On the basis of these results administrators can take the decisions necessary to add or enhance the security resources of the organization, e.g. firewall, IDS, IPS, etc.

2.2.2.2 Research Honeypots

Research honeypots are deployed by network security researchers, the white hat hackers. Their primary goal is to learn the tools, tactics and techniques by which the black hat hackers exploit computer and network systems. These honeypots are deployed with the idea of allowing the attacker complete freedom, and in the process learning his tactics from his movement within the system. Research honeypots help security researchers isolate the tools attackers use to exploit systems. These are then carefully studied within a sandbox environment to identify zero-day exploits. Worms, Trojans and viruses propagating in the network can also be isolated and studied. The researchers then document their findings and share them with system programmers, network and system administrators, and various system and anti-virus vendors. They provide the raw material for the rule engines of IDS, IPS and firewall systems. Research honeypots act as early warning systems. They are designed to detect and log the maximum information from attackers while being stealthy enough not to let attackers identify them.
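The signature-versus-interaction contrast above can be reduced to a toy sketch. The signature strings and the "zero-day" payload below are invented for illustration; the point is only that a signature matcher stays silent on an unknown payload, while a honeypot, having no legitimate users, alerts on any interaction at all.

```python
# Toy signature database, standing in for an IDS rule set.
SIGNATURES = ["../../etc/passwd", "cmd.exe", "xp_cmdshell"]

def ids_alert(payload):
    """Signature-based detection: fires only on known patterns."""
    return any(sig in payload for sig in SIGNATURES)

def honeypot_alert(interactions):
    """Honeypot detection: the system has no production use,
    so any interaction at all is an alert, known exploit or not."""
    return len(interactions) > 0

# An invented payload no signature covers, i.e. a "zero-day".
zero_day = "POST /app HTTP/1.1\r\n\r\n%run{novel-exploit}"
```

This is why the text describes honeypots as early warning systems: the detection criterion is interaction itself, not prior knowledge of the exploit.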
The identity of the honeypot is crucial, and we can conclude that the learning curve (about the attacker) is directly proportional to the stealthiness of the honeypot. These honeypots are usually deployed at universities and by the R&D departments of various organizations, typically as high-interaction honeypots.

2.3 Honeynet

The concept of the honeypot is sometimes extended to a network of honeypots, known as a honeynet. In a honeynet we group different types of honeypots with different operating systems, which increases the probability of trapping an attacker. At the same time, a setting in which the attacker explores the honeynet through network connections between the various host systems provides additional prospects for monitoring the attack and reveals information about the intruder. The honeynet operator can also use the honeynet for training purposes, gaining valuable experience with attack strategies and digital forensics without endangering production systems. The Honeynet Project is a non-profit research organization that provides tools for building and managing honeynets. The tools of the Honeynet Project are designed for the latest generation of high-interaction honeynets, which require two separate networks. The honeypots reside on the first network, and the second network holds the tools for managing the honeynet. Between these tools (and facing the Internet) is a device known as the honeywall. The honeywall, which is actually a kind of gateway device, captures, controls, and analyzes all inbound and outbound traffic to the honeypots [4]. A honeynet is a high-interaction honeypot designed to capture a wide range of information on threats. High-interaction means that a honeynet provides real systems, applications, and services for attackers to interact with, as opposed to low-interaction honeypots, which provide emulated services and operating systems.
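As a sketch of the low-interaction end of this spectrum, the following minimal fake SSH service (the banner string and field names are illustrative, not taken from any real tool) presents a banner, records whatever the client sends, and never offers a real shell. Because the service has no legitimate users, every log entry is suspect by definition.

```python
import socket
import threading

def fake_ssh_service(bind_addr, log, ready, max_conns=1):
    """Emulate an SSH endpoint just far enough to log interactions.
    'ready' is a dict holding a threading.Event under "event"; once
    listening, the chosen port is stored under "port" and the event
    is set, so a caller can wait for startup."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(bind_addr)
    srv.listen(5)
    ready["port"] = srv.getsockname()[1]
    ready["event"].set()
    for _ in range(max_conns):
        conn, addr = srv.accept()
        conn.sendall(b"SSH-2.0-OpenSSH_4.7\r\n")   # illustrative banner
        data = conn.recv(256)                       # whatever the client tries
        log.append({"src": addr[0], "first_bytes": data})
        conn.close()
    srv.close()
```

Binding to port 0 lets the sketch pick any free port; a real deployment would listen on the standard port 22 and ship its log off-host rather than keep it in memory.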
It is through this extensive interaction that we gain information on threats, both external and internal to an organization. What makes a honeynet different from most honeypots is that it is a network of real computers for attackers to interact with. These victim systems (honeypots within the honeynet) can be any type of system, service, or information you want to provide [14].

2.4 Honeynet Data Management

Data management consists of three processes: data control, data capture and data collection.

2.4.1 Data Control

Data control is the containment of activity within the honeynet. It determines the means through which the attacker's activity can be restricted so as to avoid damaging or abusing other systems and resources through the honeynet. This demands a great deal of planning, as we need to give the attacker freedom in order to learn from his moves, and at the same time not let our resources (honeypot and bandwidth) be used to attack, damage and abuse other hosts on the same or different subnets. Careful measures are taken by the administrators of the honeynet to study and formulate a policy on attacker freedom versus containment, and to implement it in a way that achieves maximum data control while not being discovered or identified by the attacker as a honeypot. Security is a process and is implemented in layers; various mechanisms to achieve data control are available, such as firewalls, counting outbound connections, intrusion detection systems, intrusion prevention systems, bandwidth restriction, etc. Depending on our requirements and the risk thresholds defined, we can implement data control mechanisms accordingly [4].

2.4.2 Data Capture

Data capture involves the capturing, monitoring and logging of all threats and attacker activities within the honeynet. Analysis of this captured data provides insight into the tools, tactics, techniques and motives of the attackers.
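Of the data control mechanisms listed above, counting outbound connections is the easiest to sketch. The per-honeypot cap below is an illustrative figure, not a value mandated by any honeywall implementation, and real deployments also decay or reset the counters on a timer rather than by an explicit call.

```python
from collections import defaultdict

class OutboundLimiter:
    """Data-control sketch: let the attacker connect out enough to act
    naturally, but cap the total so a compromised honeypot cannot be
    turned against third-party hosts. Connections beyond the cap would
    be dropped or rewritten by an inline IPS in a honeywall-style setup."""

    def __init__(self, limit_per_hour=15):          # illustrative limit
        self.limit = limit_per_hour
        self.counts = defaultdict(int)              # honeypot IP -> count

    def allow(self, honeypot_ip):
        """Record one outbound attempt; True while under the cap."""
        self.counts[honeypot_ip] += 1
        return self.counts[honeypot_ip] <= self.limit

    def reset_hour(self):
        """Would be called by a timer at the top of each hour."""
        self.counts.clear()
```

The design point is the balance the text describes: a cap of zero would expose the honeypot immediately, while no cap at all would let the attacker abuse it.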
The concept is to achieve maximum logging capability at all nodes and hence log every kind of attacker interaction without the attacker knowing it. This stealthy logging is achieved by setting up tools and mechanisms on the honeypots to log all system activity, and by having network logging capability at the honeywall. Every bit of information is crucial in studying the attacker, whether it is a TCP port scan, a remote or local exploit attempt, a brute-force attack, an attack tool downloaded by the hacker, the various local commands run, any communication carried out over encrypted or unencrypted channels (mostly IRC), or any outbound connection attempt made by the attacker [25]. All of this should be logged successfully and sent to a remote location to avoid any loss of data due to the risk of system damage caused by attackers, such as data being wiped from the disk. To keep the attacker from detecting this kind of activity, data masking techniques such as encryption should be used.

2.4.3 Data Collection

Once data is captured, it is securely sent to a centralized collection point. The data collected from the different honeynet sensors is used for analysis and archiving. Implementations may vary depending on the requirements of the organization; however, the latest implementations incorporate data collection at the honeywall gateway [19].

2.5 Honeynet Architectures

There are three honeynet architectures, namely Generation I, Generation II and Generation III.

2.5.1 Generation I Architecture

The Gen I honeynet was developed in 1999 by the Honeynet Project. Its purpose was to capture attacker activity and give attackers the feeling of a real network. The architecture is simple, with a firewall aided by an IDS in front and honeypots placed behind it. This makes it detectable by the attacker [7].

2.5.2 Generation II and III Architecture

Gen II honeynets were first introduced in 2001, and Gen III honeynets were released at the end of 2004.
Gen II honeynets were made in order to address the issues of Gen I honeynets. Gen II and Gen III honeynets have the same architecture, the only differences being improvements in deployment and management in Gen III honeynets, along with the addition of a Sebek server built into the honeywall. Sebek is a stealthy capture tool installed on honeypots that captures and logs all requests sent to the system's read and write system calls. This is very helpful in providing insight into the attacker [7]. A radical change in architecture was brought about by the introduction of a single device that handles the data control and data capture mechanisms of the honeynet, called the IDS gateway or, marketing-wise, the honeywall. By making the architecture more "stealthy", attackers are kept longer and thus more data is captured. There was also a major thrust in improving the honeypot layer of data capture with the introduction of a new UNIX- and Windows-based data capture tool.

2.6 Virtual Honeynet

Virtualization is a technology that allows running multiple virtual machines on a single physical machine. Each virtual machine can be an independent operating system installation. This is achieved by sharing the physical machine's resources, such as CPU, memory, storage and peripherals, through specialized software across multiple environments. Thus multiple virtual operating systems can run concurrently on a single physical machine [4]. A virtual machine is specialized software that can run its own operating system and applications as if it were a physical computer. It has its own CPU, RAM, storage and peripherals, managed by software that dynamically shares them with the physical hardware resources. A virtual honeynet is a solution that facilitates running a honeynet on a single computer. We use the term virtual because all the different operating systems placed in the honeynet appear to be running on their own independent computers.
Any connection from the enterprise network to a machine on the honeynet may indicate a compromised enterprise system.

Chapter 3 Design and Implementation

Computer networks connected to the Internet are vulnerable to a variety of exploits that can compromise their intended operations. Systems can be subject to denial of service attacks, i.e. attacks that prevent other computers from gaining access to a desired service (e.g. a web server) or prevent them from connecting to other computers on the Internet. They can also be subject to attacks that cause them to cease operations either temporarily or permanently. A hacker may be able to compromise a system and gain root access as if he were the system administrator. The number of exploits targeted against various platforms, operating systems, and applications is increasing regularly. Most vulnerabilities and attack methods are detected after the exploitation and cause big losses. The following are the main components of the physical deployment of our honeynet. First is the design of the deployed architecture. We then installed Sun VirtualBox as the virtualization software, and within it installed three guest operating systems: two of them work as honeypots, and one, Honeywall Roo 1.4, serves as the honeynet's transparent gateway. Snort and Sebek are part of the Honeywall Roo operating system: Snort acts as the IDS and Snort_inline as the IPS, while Sebek is the data capture tool on the honeypots. Installing the OS and honeywall functionality formats all previous data on the hard disk; the only purpose of the CD-ROM is to install this functionality to the local hard drive. The LiveCD itself cannot be modified, but after installing it to the hard drive we can modify it according to our requirements. This approach helps us maintain the honeywall, allowing the honeynet to use automated tools such as yum to keep packages current [31]. The following table summarizes the products and features installed in the honeynet and the hardware requirements.
Current versions of the installed products are also mention in the table. Table 3.1 Project Summary Project Summary Feature Product Specifications Host Operating System Windows Server 2003 R2 HW Vendor HP Compaq DC 7700 ProcessorIntel(R) Pentiumà ® D CPU 3GHz RAM 2GB Storage 120GB NIC 1GB Ethernet controller (public IP ) Guest Operating System 1 Linux, Honeywall Roo 1.4 Single Processor Virtual Machine ( HONEYWALL ) RAM 512 MB Storage 10 GB NIC 1 100Mbps Bridged interface NIC 2 100Mbps host-only interface NIC 3 100Mbps Bridged interface (public IP ) Guest Operating System 2 Linux, Ubuntu 8.04 LTS (Hardy Heron) Single Processor Virtual Machine ( HONEYPOT ) RAM 256 MB Storage 10 GB NIC 100Mbps host-only vmnet (public IP ) Guest Operating System 3 Windows Server 2003 Single Processor Virtual Machine ( HONEYPOT ) RAM 256 MB Storage 10 GB NIC 100Mbps host-only vmnet (public IP ) Virtualization software SUN Virtual Box Version 3 Architecture Gen III Gen III implemented as a virtual honeynet Honeywall Roo Roo 1.4 IDS Snort Snort 2.6.x IPS Snort_inline Snort_inline 2.6.1.5 Data Capture Tool (on honeypots) Sebek Sebek 3.2.0 Honeynet Project Online Tenure November 12, 2009 TO December 12, 2009 3.1 Deployed Architecture and Design 3.2 Windows Server 2003 as Host OS Usability and performance of virtualization softwares are very good on windows server 2003. Windows Server 2003is aserveroperating system produced byMicrosoft. it is considered by Microsoft to be the cornerstone of itsWindows Server Systemline of business server products. Windows Server 2003 is more scalable and delivers better performance than its predecessor,Windows 2000. 3.3 Ubuntu as Honeypot Determined to use free and open source software for this project, Linux was the natural choice to fill as the Host Operating System for our projects server. Ubuntu 8.04 was used as a linux based honeypot for our implementation. 
The concept was to setup an up-to-date Ubuntu server, cond with commonly used services such as SSH, FTP, Apache, MySQL and PHP and study attacks directed towards them on the internet. Ubuntu being the most widely used Linux desktop can prove to be a good platform to study zero day exploits. It also becomes a candidate for malware collection and a source to learn hacker tools being used on the internet. Ubuntu was successfully deployed as a virtual machine and setup in our honeynet with a host-only virtual Ethernet connection. The honeypot was made sweeter i.e. an interesting target for the attacker by setting up all services with default settings, for example SSH allowed password based connectivity from any IP on default port 22, users created were given privi leges to install and run applications, Apache index.html page was made remotely accessible with default errors and banners, MySQL default port 1434 was accessible and outbound connections were allowed but limited [3]. Ubuntu is a computeroperating systembased on theDebianGNU/Linux distribution. It is named after theSouthern Africanethical ideology Ubuntu (humanity towards others)[5]and is distributed asfree and open source software. Ubuntu provides an up-to-date, stable operating system for the average user, with a strong focus onusabilityand ease of installation. Ubuntu focuses onusability andsecurity. The Ubiquity installer allows Ubuntu to be installed to the hard disk from within the Live CD environment, without the need for restarting the computer prior to installation. Ubuntu also emphasizesaccessibilityandinternationalization to reach as many people as possible [33]. Ubuntu comes installed with a wide range of software that includes OpenOffice, Firefox,Empathy (Pidgin in versions before 9.10), Transmission, GIMP, and several lightweight games (such as Sudoku and chess). 
Analysis of Honeynets and Honeypots for Security

Chapter 1 Introduction

A honeynet is a kind of network security tool. Most of the network security tools we have, such as firewalls and IDSs, are passive in nature: they operate on a dynamic database of available rules and signatures, so anomaly detection is limited to the set of available rules. Any activity that is not in alignment with the given rules and signatures goes under the radar undetected. Honeypots, by design, allow you to take the initiative and trap the bad guys (hackers). A honeypot has no production value and no authorized activity, so any interaction with it is considered malicious in intent. A honeynet is a combination of honeypots. Honeypots and honeynets do not by themselves solve the security problem, but they provide information and knowledge that help the system administrator enhance the overall security of his network and systems. This knowledge can act as an intrusion detection capability and be used as input for early warning systems. Over the years researchers have successfully isolated and identified a variety of worms and exploits using honeypots and honeynets.

Honeynets extend the concept of a single honeypot to a highly controlled network of honeypots. A honeynet is a specialized network architecture configured to achieve data control, data capture and data collection. This architecture builds a controlled network in which one can control and monitor all kinds of system and network activity.

1.1 Information Security

Information security is the protection of all sensitive information, electronic or otherwise, that is owned by an individual or an organization. It deals with the preservation of the confidentiality, integrity and availability of information.
It protects the information of organizations from all kinds of threats to ensure business continuity, minimize business damage and maximize the return on investment and business opportunities. Stored information is often highly confidential and not intended for public viewing; through information security we protect its availability, privacy and integrity. Information is one of the most important assets of financial institutions. Fortification of information assets is essential to establish and maintain trust between the financial institution and its customers, maintain compliance with the law, and protect the reputation of the institution. Timely and reliable information is required to process transactions and support financial institution and customer decisions. A financial institution's earnings and capital can be adversely affected if information becomes known to unauthorized parties, is distorted, or is not available when it is needed [15].

1.2 Network Security

Network security is the protection of networks and their services from any unauthorized access. It includes the confidentiality and integrity of all data passing through the network, the security of all network devices and information assets connected to the network, and protection against all kinds of known and unknown attacks. The ITU-T Security Architecture for Open Systems Interconnection (OSI) document X.800 and RFC 2828 are the standard documents defining security services. X.800 divides the security services into 5 categories and 14 specific services, which can be summarized as follows.

Table 1.1 OSI X.800 Summary [8]

"1. AUTHENTICATION: The assurance that the communicating entity is the one that it claims to be.
Peer Entity Authentication: Used in association with a logical connection to provide confidence in the identity of the entities connected.
Data Origin Authentication: In a connectionless transfer, provides assurance that the source of received data is as claimed.

2. ACCESS CONTROL: The prevention of unauthorized use of a resource (i.e., this service controls who can have access to a resource, under what conditions access can occur, and what those accessing the resource are allowed to do).

3. DATA CONFIDENTIALITY: The protection of data from unauthorized disclosure.
Connection Confidentiality: The protection of all user data on a connection.
Connectionless Confidentiality: The protection of all user data in a single data block.
Selective-Field Confidentiality: The confidentiality of selected fields within the user data on a connection or in a single data block.
Traffic Flow Confidentiality: The protection of the information that might be derived from observation of traffic flows.

4. DATA INTEGRITY: The assurance that data received are exactly as sent by an authorized entity (i.e., contain no modification, insertion, deletion, or replay).
Connection Integrity with Recovery: Provides for the integrity of all user data on a connection and detects any modification, insertion, deletion, or replay of any data within an entire data sequence, with recovery attempted.
Connection Integrity without Recovery: As above, but provides only detection without recovery.
Selective-Field Connection Integrity: Provides for the integrity of selected fields within the user data of a data block transferred over a connection and takes the form of determination of whether the selected fields have been modified, inserted, deleted, or replayed.
Connectionless Integrity: Provides for the integrity of a single connectionless data block and may take the form of detection of data modification. Additionally, a limited form of replay detection may be provided.
Selective-Field Connectionless Integrity: Provides for the integrity of selected fields within a single connectionless data block; takes the form of determination of whether the selected fields have been modified.

5. NONREPUDIATION: Provides protection against denial by one of the entities involved in a communication of having participated in all or part of the communication.
Nonrepudiation, Origin: Proof that the message was sent by the specified party.
Nonrepudiation, Destination: Proof that the message was received by the specified party." [1], [8], [9]

1.3 The Security Problem

System security personnel are fighting an unending battle to secure their digital assets against ever increasing attacks; both the variety of attacks and their intensity are growing day by day. Most attacks are detected only after the exploitation, so there must be awareness of the threats and vulnerabilities that exist in the Internet today. First we have to understand that no machine or network is perfectly secure: the closest we can get to an absolutely secure machine is to unplug the network cable and power supply and lock the machine in a safe. Unfortunately it is not useful in that state. We cannot achieve perfect security and perfect access at the same time; we can only limit the number of doors, we cannot replace every door with a wall. In the field of security we need to find the vulnerabilities and exploits before they affect us. Honeypots and honeynets provide a valuable tool for collecting information about the behavior of attackers in order to design and implement better defenses. It is also important to note that in security we cannot simply ask what the best type of firewall is. Absolute security and absolute access are the two opposite poles, and they are inverse to each other: if we increase security, access decreases. There should be a balance between the two, so that access is given without compromising security. If we compare this to our daily lives we observe not much difference: we are continuously making decisions regarding what risks we are ready to take.
When we step out of our homes we are taking a risk. As we get into a car and drive to our workplace there is a risk associated with that too: there is a possibility that something might happen on the highway and we become part of an accident. When we fly on an airplane we are willing to undergo a level of risk that is at par with the heavy amount we are paying for the convenience. It is observed that people think differently about what an acceptable risk would be, and in most cases they weigh risk against convenience. For instance, if I am sitting upstairs in my room and have to go to work, I will not jump straight out of the window. It might be a faster way down, but the danger and the injury I could suffer are much greater than the convenience. It is vital for every organization to decide where, between the two opposite poles of total security and total access, it needs to place itself. A policy must articulate this position and then explain how it will be enforced, and with which practices. Everything done in the name of security must strictly agree with the policy.

1.4 Types of Hacker

Hackers are generally divided into two major categories.

1.4.1 Black Hats

Black hat hackers are the biggest threat, both internal and external, to the IT infrastructure of any organization, as they consistently challenge the security of applications and services. Also called crackers, these are persons who specialize in unauthorized infiltration. There can be a variety of reasons for this type of penetration: profit, enjoyment, political motivation, or a social cause. Such infiltration often involves modification or destruction of data.

1.4.2 White Hats

White hat hackers are similar to black hat hackers, with one important difference: white hat hackers act without any criminal intention.
Companies all around the world hire or contract these kinds of persons to test their systems and software. They check how secure these systems are and point out any faults they find. These hackers, also known as ethical hackers, are security experts who specialize in penetration testing; such teams are also known as tiger teams. These experts may use different methods and techniques to carry out their tests, including social engineering tactics, hacking tools, and attempts to bypass security to gain entry into protected areas, but they do this only to find weaknesses in the system [8].

1.5 Types of Attacks

Attacks can be grouped under two major categories: active attacks and passive attacks.

1.5.1 Active Attacks

Active attacks involve the attacker taking the offensive and directing malicious packets towards victims in order to gain illegitimate access to the target machine, for example by performing exhaustive user/password combinations in brute-force attacks, or by exploiting remote and local vulnerabilities (holes) in services and applications. Other types of active attacks include the following.

Masquerading attack: the attacker pretends to be a different entity, using the fake identity of some legitimate user.

Replay attack: the attacker captures data and retransmits it to produce an unauthorized effect. It is a kind of man-in-the-middle attack.

Modification attack: the integrity of the message is compromised; a message or file is modified by the attacker to achieve his malicious goals.

Denial of service (DoS) attack: the attacker attempts to prevent legitimate users from accessing information or services.
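As an aside, the brute-force attacks mentioned above leave a very recognizable footprint in authentication logs: many failures from a single source. The following is a minimal sketch of a threshold detector, not part of the deployment described in this thesis; the threshold value and the addresses in the example are illustrative assumptions.

```python
from collections import Counter

def flag_brute_force(events, threshold=5):
    """Return source IPs whose failed login attempts reach the threshold.

    events: iterable of (source_ip, succeeded) pairs, e.g. parsed from an
    SSH authentication log. The default threshold of 5 is illustrative.
    """
    failures = Counter(ip for ip, succeeded in events if not succeeded)
    return sorted(ip for ip, count in failures.items() if count >= threshold)

# Hypothetical log: one source hammers the login, another fails once.
events = [("203.0.113.7", False)] * 6 + [("198.51.100.2", False),
                                         ("198.51.100.2", True)]
```

A real deployment would additionally window the counts over time and expire old entries; the sketch ignores timing for brevity.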
By targeting a computer and its network connection, or the computers and network of the sites a user is trying to reach, an attacker may be able to prevent access to email, websites, online accounts (banking, etc.), or other services that rely on the affected computer. TCP and ICMP scanning is also a form of active attack, in which attackers exploit the way protocols are designed to respond, e.g. the ping of death and SYN flood attacks. In all types of active attacks the attacker creates noise over the network by transmitting packets, making it possible to detect and trace the attacker. It has been observed that skillful attackers usually attack their victims from proxy destinations that they have victimized earlier.

1.5.2 Passive Attacks

Passive attacks involve the attacker being able to intercept, collect and monitor any transmission sent by the victims, thus eavesdropping on the victim's or target's communications. Passive attacks are very specialized attacks aimed at obtaining information that is being transmitted over secure and insecure channels. Since the attacker creates no (or minimal) noise on the network, such attacks are very difficult to detect and identify. Passive attacks can be divided into two main types: the release of message contents and traffic analysis.

Release of message contents: this concerns the attacker obtaining the content of a transmission, which can be as basic as a message delivered via a telephone conversation, an instant messenger chat, an email or a file.

Traffic analysis: this covers techniques attackers use to learn about intercepted, typically encrypted, messages without reading their contents directly. Encryption provides a means to mask the contents of a message using mathematical transformations and thus make it unreadable.
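The traffic-analysis idea can be illustrated with a toy sketch: without touching the encryption at all, ciphertext lengths alone can hint at what kind of message was sent. The message "profiles" below are invented for the example, not measured data.

```python
def guess_message_types(ciphertext_lengths, profiles):
    """Guess the type of each intercepted message from its length alone.

    profiles maps a label to a typical ciphertext length; both the labels
    and the lengths are invented for this illustration. Each observed
    length is matched to the nearest profile - no decryption is involved.
    """
    return [min(profiles, key=lambda label: abs(profiles[label] - n))
            for n in ciphertext_lengths]

# Hypothetical traffic profiles for two kinds of messages.
profiles = {"short command": 24, "bulk file transfer": 4096}
```

Real traffic analysis also exploits timing, frequency and direction of messages; length is just the simplest signal to demonstrate.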
The original message can only be retrieved by a reverse process called decryption, and such a cryptographic system is usually based on a key or password supplied by the user. With traffic analysis the attacker can passively observe patterns, trends, frequencies and lengths of messages, and use them to guess the key or recover the original message through various cryptanalysis techniques.

Chapter 2 Honeypot and Honeynet

2.1 Honeypot

A honeypot is a system, or part of a system, deliberately made to invite an intruder or system cracker. Honeypots have additional functionality and intrusion detection systems built into them for the collection of valuable information on intruders. The era of virtualization had its impact on security and honeypots too, and the community responded, notably with the fine efforts of Niels Provos (founder of honeyd) and Thorsten Holz in their book "Virtual Honeypots: From Botnet Tracking to Intrusion Detection" (2007).

2.2 Types of Honeypots

Honeypots can be categorized into two main types, based on level of interaction and on deployment.

2.2.1 Level of Interaction

The level of interaction determines the amount of functionality a honeypot provides.

2.2.1.1 Low-Interaction Honeypot

Low-interaction honeypots are limited in the extent of their interaction with the attacker. They are generally emulators of services and operating systems.

2.2.1.2 High-Interaction Honeypot

High-interaction honeypots are complex solutions that involve the deployment of real operating systems and applications. They capture extensive amounts of information by allowing the attacker to interact with real systems.

2.2.2 Deployment

Based on deployment, honeypots may be classified as production honeypots or research honeypots.

2.2.2.1 Production Honeypots

Production honeypots are placed within production networks for the purpose of detection. They extend the capabilities of the intrusion detection systems.
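The low-interaction model of 2.2.1.1 can be reduced to a minimal sketch: listen on a port, present a fake service banner, and log the contact, on the premise stated in Chapter 1 that any interaction with a honeypot is suspect. The banner string and service name here are illustrative assumptions, not a real emulation.

```python
import socket
import threading
import time

def open_listener(host="127.0.0.1", port=0):
    """Bind a listening TCP socket; port 0 lets the OS pick a free port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    return srv

def handle_one(srv, banner, log, service):
    """Accept one connection, present the emulated banner, record the event.

    Because a honeypot carries no production traffic, every connection
    is logged as a suspect event - no filtering is needed.
    """
    conn, peer = srv.accept()
    with conn:
        conn.sendall(banner.encode())
        log.append({"time": time.time(), "peer": peer,
                    "service": service, "banner": banner})
```

A real low-interaction honeypot would of course emulate more than a banner, and would ship the log off-host, as discussed later under data capture.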
These types of honeypots are developed and configured to integrate with the organization's infrastructure and scope. They are usually implemented as low-interaction honeypots, but the implementation may vary depending on the available funding and the expertise required by the organization. Production honeypots can be placed within the application and authentication server subnets and can identify any attacks directed towards those subnets; thus they can be used to identify both internal and external threats for an organization. They can also detect malware propagation in the network caused by zero-day exploits. Since IDS detection is based on database signatures, an IDS fails to detect exploits that are not defined in its database. This is where honeypots outshine intrusion detection systems: they aid system and network administrators by providing network situational awareness. On the basis of these results, administrators can take the decisions necessary to add or enhance the security resources of the organization, e.g. firewalls, IDS and IPS.

2.2.2.2 Research Honeypots

Research honeypots are deployed by network security researchers, the whitehat hackers. Their primary goal is to learn the tools, tactics and techniques by which blackhat hackers exploit computer and network systems. These honeypots are deployed with the idea of allowing the attacker complete freedom and, in the process, learning his tactics from his movement within the system. Research honeypots help security researchers isolate the tools attackers use to exploit systems; these are then carefully studied within a sandbox environment to identify zero-day exploits. Worms, Trojans and viruses propagating in the network can also be isolated and studied. The researchers document their findings and share them with system programmers, network and system administrators, and various system and anti-virus vendors. They provide the raw material for the rule engines of IDS, IPS and firewall systems.
Research honeypots act as early warning systems. They are designed to detect and log maximum information from attackers while remaining stealthy enough not to let attackers identify them. The identity of the honeypot is crucial, and we can conclude that what can be learned from the attacker is directly proportional to the stealthiness of the honeypot. These honeypots are usually deployed at universities and by the R&D departments of various organizations, and usually as high-interaction honeypots.

2.3 Honeynet

The concept of the honeypot is sometimes extended to a network of honeypots, known as a honeynet. In a honeynet we group different types of honeypots with different operating systems, which increases the probability of trapping an attacker. At the same time, a setting in which the attacker explores the honeynet through network connections between the various host systems provides additional prospects for monitoring the attack and revealing information about the intruder. The honeynet operator can also use the honeynet for training purposes, gaining valuable experience with attack strategies and digital forensics without endangering production systems. The Honeynet Project is a non-profit research organization that provides tools for building and managing honeynets. Its tools are designed for the latest generation of high-interaction honeynets, which require two separate networks: the honeypots reside on the first network, and the second network holds the tools for managing the honeynet. Between these tools (and facing the Internet) is a device known as the honeywall. The honeywall, which is actually a kind of gateway device, captures, controls, and analyzes all inbound and outbound traffic to the honeypots [4]. A honeynet is a high-interaction honeypot designed to capture a wide range of information on threats.
High-interaction means that a honeynet provides real systems, applications, and services for attackers to interact with, as opposed to low-interaction honeypots, which provide emulated services and operating systems. It is through this extensive interaction that we gain information on threats, both external and internal to an organization. What makes a honeynet different from most honeypots is that it is a network of real computers for attackers to interact with. These victim systems (honeypots within the honeynet) can be any type of system, service, or information you want to provide [14].

2.4 Honeynet Data Management

Data management consists of three processes: data control, data capture and data collection.

2.4.1 Data Control

Data control is the containment of activity within the honeynet. It determines the means through which the attacker's activity can be restricted so as to avoid damaging or abusing other systems and resources through the honeynet. This demands a great deal of planning, as we need to give the attacker freedom in order to learn from his moves, while at the same time not letting our resources (honeypots and bandwidth) be used to attack, damage or abuse other hosts on the same or different subnets. The administrators of the honeynet take careful measures to formulate a policy on attacker freedom versus containment, and implement it in a way that achieves maximum data control without being discovered or identified by the attacker as a honeypot. Security is a process and is implemented in layers; various mechanisms to achieve data control are available, such as firewalls, counting outbound connections, intrusion detection systems, intrusion prevention systems and bandwidth restriction. Depending on our requirements and the risk thresholds defined, we can implement data control mechanisms accordingly [4].

2.4.2 Data Capture

Data capture involves the capturing, monitoring and logging of all threats and attacker activities within the honeynet.
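The outbound-connection counting listed among the data control mechanisms above can be sketched as a simple per-honeypot counter. The limit used here is an illustrative value, not one prescribed by the Honeynet Project; a real honeywall makes the limit configurable, typically per protocol and per time period.

```python
def allow_outbound(counts, honeypot_ip, limit=15):
    """Decide whether a honeypot may open one more outbound connection.

    counts maps honeypot IP -> outbound connections seen in the current
    period. The default limit of 15 per period is an assumption made
    for illustration only.
    """
    if counts.get(honeypot_ip, 0) >= limit:
        # Contained: the attacker cannot use this honeypot as a launch pad.
        return False
    counts[honeypot_ip] = counts.get(honeypot_ip, 0) + 1
    return True
```

Capping, rather than blocking, outbound traffic is the point: the attacker keeps enough freedom to reveal his tactics, but not enough to abuse third parties.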
Analysis of this captured data provides insight into the tools, tactics, techniques and motives of the attackers. The concept is to achieve maximum logging capability at all nodes and hence log every kind of attacker interaction without the attacker knowing it. This stealthy logging is achieved by setting up tools and mechanisms on the honeypots to log all system activity, and by having network logging capability at the honeywall. Every bit of information is crucial in studying the attacker, whether it is a TCP port scan, a remote or local exploit attempt, a brute-force attack, an attack tool download, the local commands run, any communication carried out over encrypted or unencrypted channels (mostly IRC), or any outbound connection attempt made by the attacker [25]. All of this should be logged and sent to a remote location to avoid loss of data through system damage caused by attackers, such as wiping the disk. To keep the attacker from detecting this activity, data masking techniques such as encryption should be used.

2.4.3 Data Collection

Once data is captured, it is securely sent to a centralized collection point, where it is used for analysis and archiving. Implementations vary depending on the requirements of the organization; the latest implementations incorporate data collection at the honeywall gateway [19].

2.5 Honeynet Architectures

There are three honeynet architectures, namely Generation I, Generation II and Generation III.

2.5.1 Generation I Architecture

The Gen I honeynet was developed in 1999 by the Honeynet Project. Its purpose was to capture attackers' activity and give them the feeling of a real network. The architecture is simple, with a firewall aided by an IDS at the front and honeypots placed behind it. This makes it detectable by attackers [7].
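Sections 2.4.2 and 2.4.3 call for shipping captured events off the honeypot, since an attacker who wipes the disk should not be able to destroy or tamper with the evidence. One way to make each record tamper-evident is a keyed digest over its contents; the field names below are assumptions for illustration, not Sebek's actual record format.

```python
import hashlib
import hmac
import json
import time

def capture_event(activity, key, clock=time.time):
    """Build a log record carrying a keyed digest of its own contents,
    so tampering in transit or at the collection point can be detected."""
    record = {"time": clock(), "activity": activity}
    payload = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return record

def verify_event(record, key):
    """Recompute the digest over everything except the digest itself."""
    body = {k: v for k, v in record.items() if k != "digest"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["digest"], expected)
```

Integrity protection complements, and does not replace, the encrypted transport mentioned above: the digest detects tampering, while encryption hides the logging channel from the attacker.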
2.5.2 Generation II and III Architecture

Gen II honeynets were first introduced in 2001, and Gen III honeynets were released at the end of 2004. Gen II honeynets were made to address the issues of Gen I honeynets. Gen II and Gen III honeynets share the same architecture; the differences are improvements in deployment and management, along with, in Gen III, the addition of a Sebek server built into the honeywall. Sebek is a stealthy capture tool installed on honeypots that captures and logs all requests sent to the system's read and write system calls; this is very helpful in providing insight into the attacker [7]. A radical change in architecture was brought about by the introduction of a single device that handles the data control and data capture mechanisms of the honeynet, called the IDS gateway or, marketing-wise, the honeywall. By making the architecture more "stealthy", attackers are kept longer and thus more data is captured. There was also a major thrust in improving the honeypot layer of data capture with the introduction of new UNIX- and Windows-based data capture tools.

2.6 Virtual Honeynet

Virtualization is a technology that allows multiple virtual machines to run on a single physical machine. Each virtual machine can be an independent operating system installation. This is achieved by sharing the physical machine's resources, such as CPU, memory, storage and peripherals, through specialized software across multiple environments; thus multiple virtual operating systems can run concurrently on a single physical machine [4]. A virtual machine is specialized software that can run its own operating system and applications as if it were a physical computer. It has its own CPU, RAM, storage and peripherals, managed by software that dynamically shares the physical hardware resources. A virtual honeynet is a solution that facilitates running a honeynet on a single computer.
We use the term virtual because all the different operating systems placed in the honeynet appear to be running on their own independent computers. Traffic directed to a machine on the honeynet may indicate a compromised enterprise system.

CHAPTER 3 Design and Implementation

Computer networks connected to the Internet are vulnerable to a variety of exploits that can compromise their intended operations. Systems can be subject to denial of service attacks, i.e. attacks that prevent other computers from gaining access to a desired service (e.g. a web server) or prevent them from connecting to other computers on the Internet. They can also be subject to attacks that cause them to cease operations either temporarily or permanently. A hacker may be able to compromise a system and gain root access, as if he were the system administrator. The number of exploits targeted against various platforms, operating systems, and applications is increasing regularly, and most vulnerabilities and attack methods are detected only after the exploitation, causing big losses.

The main components of the physical deployment of the honeynet are as follows. First is the design of the deployed architecture. We then installed Sun VirtualBox as the virtualization software, and in it installed three virtual operating systems: two of them work as honeypots, and one, Honeywall Roo 1.4, acts as the honeynet's transparent gateway. Snort and Sebek are part of the Honeywall Roo operating system: Snort serves as the IDS, Snort_inline as the IPS, and Sebek as the data capture tool on the honeypots. When the entire OS and honeywall functionality is installed on the system, it formats the hard disk, erasing all previous data; the only purpose of the CD-ROM is to install this functionality to the local hard drive. The LiveCD itself cannot be modified, but after installing it to the hard drive we can modify it according to our requirements. This approach helps us maintain the honeywall, allowing the honeynet to use automated tools such as yum to keep packages current [31].
The following table summarizes the products and features installed in the honeynet, the hardware requirements, and the current versions of the installed products.

Table 3.1 Project Summary

Host operating system: Windows Server 2003 R2
  HW vendor: HP Compaq DC 7700
  Processor: Intel Pentium D CPU, 3 GHz
  RAM: 2 GB; Storage: 120 GB
  NIC: 1 Gb Ethernet controller (public IP)

Guest operating system 1 (HONEYWALL): Linux, Honeywall Roo 1.4, single-processor virtual machine
  RAM: 512 MB; Storage: 10 GB
  NIC 1: 100 Mbps bridged interface
  NIC 2: 100 Mbps host-only interface
  NIC 3: 100 Mbps bridged interface (public IP)

Guest operating system 2 (HONEYPOT): Linux, Ubuntu 8.04 LTS (Hardy Heron), single-processor virtual machine
  RAM: 256 MB; Storage: 10 GB
  NIC: 100 Mbps host-only vmnet (public IP)

Guest operating system 3 (HONEYPOT): Windows Server 2003, single-processor virtual machine
  RAM: 256 MB; Storage: 10 GB
  NIC: 100 Mbps host-only vmnet (public IP)

Virtualization software: Sun VirtualBox, version 3
Architecture: Gen III, implemented as a virtual honeynet
Honeywall: Roo 1.4
IDS: Snort 2.6.x
IPS: Snort_inline 2.6.1.5
Data capture tool (on honeypots): Sebek 3.2.0
Honeynet online tenure: November 12, 2009 to December 12, 2009

3.1 Deployed Architecture and Design

3.2 Windows Server 2003 as Host OS

The usability and performance of virtualization software are very good on Windows Server 2003. Windows Server 2003 is a server operating system produced by Microsoft, considered by Microsoft to be the cornerstone of its Windows Server System line of business server products. Windows Server 2003 is more scalable and delivers better performance than its predecessor, Windows 2000.

3.3 Ubuntu as Honeypot

Determined to use free and open source software in this project, Linux was the natural choice for one of the honeypot operating systems.
Ubuntu 8.04 was used as the Linux-based honeypot in our implementation. The concept was to set up an up-to-date Ubuntu server, configured with commonly used services such as SSH, FTP, Apache, MySQL and PHP, and to study attacks directed towards them on the Internet. Ubuntu, being the most widely used Linux desktop, can prove to be a good platform for studying zero-day exploits. It is also a candidate for malware collection and a source for learning about the hacker tools being used on the Internet. Ubuntu was successfully deployed as a virtual machine and set up in our honeynet with a host-only virtual Ethernet connection. The honeypot was made sweeter, i.e. a more interesting target for the attacker, by setting up all services with default settings; for example, SSH allowed password-based connectivity from any IP on the default port 22, the users created were given privileges to install and run applications, the Apache index.html page was made remotely accessible with default errors and banners, MySQL's default port 3306 was accessible, and outbound connections were allowed but limited [3]. Ubuntu is a computer operating system based on the Debian GNU/Linux distribution. It is named after the Southern African ethical ideology ubuntu ("humanity towards others") [5] and is distributed as free and open-source software. Ubuntu provides an up-to-date, stable operating system for the average user, with a strong focus on usability, security and ease of installation. The Ubiquity installer allows Ubuntu to be installed to the hard disk from within the Live CD environment, without the need to restart the computer prior to installation. Ubuntu also emphasizes accessibility and internationalization, to reach as many people as possible [33]. Ubuntu comes installed with a wide range of software that includes OpenOffice, Firefox, Empathy (Pidgin in versions before 9.10), Transmission, GIMP, and several lightweight games (such as Sudoku and chess).
Ubuntu allows networking ports to be closed using its firewall, with customized port selection.
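The "sweetened" services above can be pictured with a toy low-interaction stand-in: a listener that presents a plausible default banner and records who probed it. This is a hedged sketch of the idea only; the `banner_honeypot` function is invented for illustration, and the real deployment used full services with Sebek for data capture.

```python
import socket
import threading

def banner_honeypot(banner: bytes, log: list, host="127.0.0.1", port=0):
    """Accept one connection, record the peer address, send a service
    banner, then close. A toy low-interaction stand-in for the
    default-configured services described above."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # port 0: let the OS pick a free port
    srv.listen(1)

    def serve():
        conn, addr = srv.accept()
        log.append(addr[0])          # record who probed us
        conn.sendall(banner)         # present a plausible default banner
        conn.close()
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]      # the port we actually bound

# Demo: probe our own honeypot and read the fake FTP banner.
attempts = []
port = banner_honeypot(b"220 FTP server ready\r\n", attempts)
with socket.create_connection(("127.0.0.1", port)) as client:
    greeting = client.recv(1024)
print(greeting.decode().strip())   # 220 FTP server ready
print(attempts)                    # ['127.0.0.1']
```

A production honeypot would of course log far more (timestamps, full payloads, keystrokes via Sebek), but the capture-everything-while-looking-normal principle is the same.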
Wednesday, September 4, 2019
A Factor For Firm Formation Economics Essay
Firms are all around us and are the main expressers of economic activity in the modern capitalist world. We observe firms being created, growing, evolving, expanding into new areas by merging with others, but also remaining stable, declining, being acquired and sometimes declaring bankruptcy. It is clear that firms' activities vary a lot, and as a result multiple studies regarding them have been undertaken over the years. This essay's purpose is to address the perhaps most important element associated with a firm's existence, its formation, and especially the conditions and the reasons under which firms tend to form. First, in order to be able to explain the circumstances and the factors that lead to the successful formation of a firm, a definition will be given. According to Jensen and Meckling, a firm is "a legal fiction which serves as a focus for a complex process in which the conflicting objectives of individuals are brought into equilibrium within a framework of contractual relations" (1976, p.311). The feature that makes the firm unique, though, is its ability to supersede the price mechanism, one of the pillars on which the whole of economic theory is based, with decisions taken by the firm's agents in real-life situations, decisions which in most cases deviate from what economic theory, through the price mechanism, dictates (Coase 1937, p.390). Of major importance in this essay is the attempt to present, describe and evaluate the existence of transaction costs, a key aspect of Coase's, Arrow's, Williamson's and Di Maggio's analyses of the reasons why firms are formed. However, although the theory of economising on transaction costs is crucial to understanding the genesis of a firm and its explanatory capability is invaluable, it does not provide the sole explanation, and other factors must be taken into account for us to have a clearer picture of the situation.
The purpose and length of this essay do not allow a thorough and complete elaboration of those factors, but technological advances and entrepreneurial spirit and creativity will be outlined and briefly explained. Moreover, for a successful firm creation to take place, there are many conditions that need to hold true, some of which will be presented in the following analysis. These are: widely understood rules for governing a firm, and analytical planning before the actual formation of the firm.

Transaction Costs Theory: Both a Condition and a Factor for Firm Formation

As the title above suggests, the transaction costs theory can be seen as both a condition and a factor on which successful firm formation relies, depending on how the reader perceives the situation. The existence of transaction costs is a condition for firms to arise, but the process by which economic agents economise on transaction costs is probably the most crucial factor driving firm formation, which is why it will be analysed separately from the other conditions and factors. The main reason for a firm's formation is the cost of using the price mechanism by which the economic system is run (Coase 1937, p.390; Arrow 1969, p.70). Or, according to Williamson, a firm is the product of "a series of organisational innovations that have had the purpose and effect of economising on transaction costs" (1981, p.1537). More specifically, organising production through the price mechanism entails an obvious transaction cost: that of finding out what the relevant current prices are. Even if specialist price finders existed, this type of cost would not be totally eliminated (Coase 1937, p.390). As can be understood, this more realistic theory contradicts the theoretical model of the economy, in which all agents have perfect price information. But what is meant by transaction costs, and what actions do firms take in order to reduce them?
Transaction costs are mainly the costs of deciding, haggling, arranging and coordinating the actions that constantly take place in the market, as Paul Di Maggio has argued (2001, p.8). Furthermore, they include the creation of contracts for each separate transaction that occurs in the market. As firms are created, these contracts are not eliminated but they are greatly reduced, since the founder-manager of the firm does not have to create contracts for every single transaction in which his or her company participates, as implied by economic theory. Through this procedure multiple costs are avoided, because the so-called marketing costs are strictly reduced. For example, only one contract per employee is needed, in which the relationship between the employee and the firm (and its agents) is clearly stated. That contract will include the amount and method of payment, the working hours, and the limits within which the employee will have to obey the employer (Coase 1937, pp.390-393). Further methods that firms use to minimise transaction costs include the introduction of repetitive and predictable activities for their employees: duties are assigned through a clear job description, eliminating the possibility of negotiations over the allocation of tasks. As a result, employers have more time to deal with important issues and decisions concerning the firm. In addition, the fair treatment of employees provided by the firm's environment guarantees a reduction in transaction costs, since there is a specified reward-punishment system that everybody abides by, which results in the immediate elimination of conflicts (Di Maggio 2001, pp.8-9). On the same topic, Williamson has argued that pre-contract negotiation and the specification of tasks and deliverables reduce the need for periodic interventions to check the progress of the contract's execution and its successful completion (1981, p.1544).
Another crucial question about transaction costs touches upon the reason for their existence. Related to it are two behavioural assumptions: bounded rationality and opportunism. According to the bounded rationality theory, people have limited competence in calculation and are not able to account for every contract-related issue, and transaction costs are therefore created. Moreover, people can be opportunistic and unreliable, because they often act with only their personal interest in mind. Consequently, it is possible that they will behave in an untrustworthy and irresponsible way (Williamson 1981, pp.1544-1546). As presented above, a key factor in a firm's formation is the deviation from the economic model that portrays humans as perfectly rational beings who make right choices and have no flaws. As a bottom line, Coase's writing on firm growth and expansion should be mentioned, according to which firms grow as their entrepreneurs undertake additional "exchange transactions that are co-ordinated through the price mechanism" and try to expand until the cost of organising an extra transaction within the firm equals the cost of carrying out the same transaction by means of an exchange on the open market, or the cost of organising it in another firm (Coase 1937, p.393, p.395). This is important because it lets us grasp how the second major challenge that firms' founders face, the growth of the firm after its successful formation, is illustrated on the basis of the transaction costs theory described earlier.

Conditions Under Which Firms Are Formed

Apart from transaction costs, there are other conditions that need to hold true in order for a firm to be successfully constituted. A set of widely understood and fairly applied rules is essential, because such rules deter employees from using the firm to pursue their personal interest and urge them to contribute to achieving the firm's goals.
Perhaps the most important rule has to do with the hierarchy of the organisation, that is, who gives orders to whom and who has the last call when decision-making is involved. Secondly, clear admission and promotion criteria need to be established so that the firm's transparency is maintained, and lastly, routines for the performance of work need to exist, so that deliverables can easily be checked for integrity. Generally, rules within a firm serve a double role, specifying who does what work and dictating which behaviours are worth rewarding and which punishing (the appraisal-punishment system) (Di Maggio 2001, p.8). Of major importance, when it comes to explaining the circumstances under which a firm is brought to life, is the planning that the entrepreneur-founders of the firm have to do before they can actually start building it, since a business plan, according to Delmar and Shane, turns abstract goals into concrete operational steps and is therefore crucial for both a firm's existence and its success. What is meant by the term business planning is the effort that firm founders need to make to gather the appropriate information about a business opportunity, and the act of finding out and understanding how this information will be used to give birth to a new organisation that will try to make use of this opportunity (2003, p.1165). Through business planning the founder-managers of the firm will be able to spot, and capitalise in a more efficient and risk-free way on, the reduction of transaction costs. Without planning, a firm cannot in most cases fulfil its ultimate goal, survival, or the most sought-after one, profit maximisation.

Factors That Drive Firm Formation

Why a firm is created, and what the key factors are that lead to its formation, are two closely related questions that will be discussed in this section of the essay.
One of these factors is technology and its regime, which according to Shane includes four dimensions (the age of the technical field, the tendency of the market towards segmentation, the effectiveness of patents, and the importance of complementary assets in marketing and distribution) that affect the tendency for inventions to be exploited through the formation of new firms (2001, p.1188). This formation is the reaction of potential entrepreneurs when they observe that specific domains of technology exploitation are profitable. To conclude, technology is crucial because it has become the main driver of innovation, and that is the force that leads firms to the creation of new products, services and processes (Chandler 1959, p.25). Yet another factor that leads to firm formation is the creativity that a person shows upon observing an opportunity to make profits through the creation of a product or the provision of some kind of service. This creativity is referred to as entrepreneurship and is associated with the spirit, vision and alertness to business opportunities that a person needs to possess (Lee, Florida and Acs 2004, pp.889-890). Whether someone possesses the gift of entrepreneurship or not is determined by regional variation and characteristics such as population size, industrial structure, human capital capacity and financing availability (Armington and Acs 2002, p.37). A useful claim about entrepreneurship was made by Stuart and Sorenson, who argued that firms' founding rates are affected by social ties and by entrepreneurs' need to reside near the resources they find necessary to mobilise (2003, p.229). Finally, Schumpeter, in discussing his concept of creative destruction, underlined the responsibility that independently owned firms bear for "reforming or revolutionising", another indicator of the importance of entrepreneurship for firm formation as well as growth (1942, p.132).
Conclusion

To sum up, although there is no doubt that the firm is an important and complex institution, according to Williamson there seems to be disagreement when it comes to examining the conditions and the reasons that underlie its formation (1981, p.1537). However, much of the firm-formation literature and analysis relies on the existence of transaction costs and firms' attempts to economise on them. The move from the market's way of organising economic activity to the firm's alternative brings in the two behavioural assumptions, bounded rationality and opportunism, which introduce reality into the model and cease portraying human beings as perfectly rational. Apart from transaction costs, further conditions and firm-formation factors have been described to make the analysis more complete within the length limit of this essay. Lastly, since firms will always be at the centre of economic activity, and as the state of the world and people's behaviour change through time, it is possible that when similar analyses are conducted in the future, new findings regarding the reasons and the conditions under which firms arise will be discovered that might well change our perspective.
Tuesday, September 3, 2019
The Beatles :: Essay on The Beatles
When people hear the name "The Beatles", most think of lead singer John Lennon. However, the role of Paul McCartney is often overlooked; it was McCartney, not Lennon, who was the driving force behind the Beatles. John Lennon and Paul McCartney were in many bands together before the forming of the Beatles. In 1962, along with Ringo Starr and George Harrison, they formed the rock group known as "The Beatles". The group featured a modern rock sound that was new and popular during the period, with John and Paul composing and doing the leads on most of the songs. They were backed by George on rhythm and bass guitar and Ringo on drums; George and Ringo also assisted on backing vocals. When they first began playing, the main influence inside the band was John Lennon, who had an uncanny ability to compose songs at a moment's notice with an inspiration that others missed. He pushed the members of the band during their touring years and was able to achieve the best possible results from the group. The band began playing in a Music Hall style that was very effective for audiences but was lacking on their albums. Together with Paul, John began to evolve the band. As the years passed, the band was obviously beginning to grow musically. They had moved from simple lyrics like "Love Me Do" to harshly aware reflections of life in their home country in "Eleanor Rigby". There were attempts, some more successful than others, to incorporate the other Beatles into the idea stage. George Harrison made this leap successfully with such tracks as "I Want to Tell You", "Taxman", and the psychedelic "Love You To". Ringo was featured in the humorous "Yellow Submarine". As the group matured, their creativity began to rely more on the effects and manipulations that they were able to produce in the studio. The Beatles agreed to end their touring career after an American tour of large halls that they failed to fill. It was around this time that John Lennon began to search for himself.
He began using any means that he thought might help him connect. This era was marked by the Beatles' visits to the Maharishi Mahesh Yogi and the beginning of heavy drug use. As Lennon began to use LSD in greater and greater quantities, the other Beatles began to have more and more influence on the production of the albums. Lennon became almost reclusive and often delayed recording sessions. By the time they were recording Sgt. Pepper's Lonely Hearts Club Band in 1967, Lennon would simply propose songs and themes, and McCartney was left to execute the
Turing Machines And Universes :: essays research papers
In 1936 an American (Alonzo Church) and a Briton (Alan M. Turing) independently published (as is often the coincidence in science) the basics of a new branch of mathematics (and logic): computability, or recursive functions (later to be developed into automata theory). The authors confined themselves to dealing with computations which involved "effective" or "mechanical" methods for finding results (which could also be expressed as solutions (values) to formulae). These methods were so called because they could, in principle, be performed by simple machines (or "human computers" or "human calculators", to use Turing's unfortunate phrases). The emphasis was on finiteness: a finite number of instructions, a finite number of symbols in each instruction, a finite number of steps to the result. This is why these methods were usable by humans without the aid of an apparatus (with the exception of pencil and paper as memory aids). Moreover, no insight or ingenuity was allowed to "interfere" or to be part of the solution-seeking process. What Church and Turing did was to construct the set of all the functions whose values could be obtained by applying effective or mechanical calculation methods. Turing went further down Church's road and designed the "Turing Machine", a machine which can calculate the values of all the functions whose values can be found using effective or mechanical methods. Thus, the program running the TM (Turing Machine, in the rest of this text) was really an effective or mechanical method. For the initiated readers: Church solved the decision problem for the propositional calculus, and Turing proved that there is no solution to the decision problem for the predicate calculus.
Put more simply, it is possible to "prove" the truth value (or the theorem status) of an expression in the propositional calculus, but not in the predicate calculus. Later it was shown that many functions (even in number theory itself) were not recursive, meaning that they could not be solved by a Turing Machine. No one has succeeded in proving that a function must be recursive in order to be effectively calculable. This is (as Post noted) a "working hypothesis" supported by overwhelming evidence. We don't know of any effectively calculable function which is not recursive; by designing new TMs from existing ones we can obtain new effectively calculable functions from existing ones; and TM computability features in every attempt to understand effective calculability (or those attempts are reducible or equivalent to TM-computable functions).
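The "finite table of instructions, finite number of steps" idea can be made concrete with a short program. Below is a minimal Turing machine simulator in Python (an illustrative sketch, not part of the original text); the example machine, a standard textbook exercise rather than one of Turing's own, increments a binary number written on the tape.

```python
# A minimal Turing machine simulator: a finite transition table, a tape
# of symbols, and a head that reads, writes and moves one cell at a time.

def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Run a Turing machine until it enters the 'halt' state.

    transitions maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left), +1 (right) or 0 (stay).
    """
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += move
    else:
        raise RuntimeError("machine did not halt within max_steps")
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Binary increment: walk to the rightmost digit, then carry leftwards.
INCREMENT = {
    ("start", "0"): ("start", "0", +1),
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),  # past the last digit: begin carrying
    ("carry", "1"): ("carry", "0", -1),  # 1 + carry = 0, carry propagates
    ("carry", "0"): ("halt", "1", 0),    # 0 + carry = 1, done
    ("carry", "_"): ("halt", "1", 0),    # overflow: write a new leading 1
}

print(run_tm(INCREMENT, "1011"))  # 1011 (eleven) + 1 = 1100 (twelve)
```

The `max_steps` cap is a practical guard only; as the essay notes, whether an arbitrary machine halts at all is exactly what cannot be decided in general.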
Monday, September 2, 2019
It’s Better To Live In A Small Town Than A Big City
Nowadays, people have two options for where to live: some prefer to live in a small town and others prefer to live in a big city. I think one of the most important decisions a person has to make is choosing a place to live where he or she can feel more comfortable, more suited and happier. Personally, I think settling down in a modern big city is more beneficial. In my essay I will analyze one of the most important reasons, which is the chance of having a better career. Big cities have bigger markets and more famous companies and corporations, so of course you have a better chance of finding good work. I have a friend my age. He studied IT at a university in Thai Nguyen, a city smaller than the capital, Ha Noi. He graduated from a famous university, but after a year in Thai Nguyen he was still unemployed, because fewer companies mean fewer chances. Too many students graduate from university each year while the number of companies only slightly increases, and many students apply together to the same company. With so many CVs and only limited positions, you may have to beat hundreds of people who applied along with you, and my close friend could not do that. I then advised him to go to Ha Noi to find another chance, and he agreed. He went to Ha Noi and applied to some companies. Amazingly, three companies wanted to sign him after only two months. He signed with the FPT corporation and works there as a software specialist. He still works there and is very happy to be an FPT staff member. From this example we can see that there are more activities in big cities that help people improve their chances; it is clear that we can have better opportunities in big cities. In a nutshell, I remain positive that living in a big city, namely Ha Noi, will give me a better chance at work.
And I believe that many people will move there for the same reason.
Sunday, September 1, 2019
Does Holding the Olympic Games Have Benefits for the Host Country?
ARGUMENTATIVE ESSAY

Does holding the Olympic Games have benefits for the host country? In recent years, the Olympic Games have developed into one of the most significant international mega sporting events (Roche, 2000). More and more cities are bidding to host the Olympics, and increasing amounts of money are invested in Olympic bids, because governments believe that they can benefit from such an event. During the 2012 London Olympic Games, large numbers of people around the world focused their attention on the Games. It is such a big event that holding it successfully will improve a country's reputation and bring it more attention around the world. Does holding the Olympic Games have benefits for the host country? It might be said that hosting the Olympic Games carries some financial risks because of exceeded budgets: countries invest huge amounts of money in sports facilities, which can result in more infrastructure than is needed. However, there are many reasons why a country should organize the Olympic Games. The first reason why holding the Olympic Games has benefits for the host country is that, from the economic point of view, it increases revenue. Because of the influx of people from all around the world, consumption rises dramatically. This contributes to the output of factories, which benefits the whole market. What's more, the Olympic Games attract numbers of merchants to the host country to look for business opportunities, and their investment in the market stimulates economic growth. Rose and Spiegel (2011) suggest that trade increases by around 30% for host countries, which 'realize an economic benefit in the form of greater openness'. Furthermore, during the Olympics, large numbers of foreigners come to the host country to visit.
Bolton (2004) states that tourist numbers rose to 150% of their previous level at the 1992 Barcelona Games, thanks to the Spanish government's effort to stimulate tourism. These visitors are potential consumer groups which can promote the local economy, stimulating tourism-related industries (hotels, restaurants and shops) to develop. Although it is sometimes claimed that these tourist numbers tend to be temporary, it must be acknowledged that the host country can become a popular tourist destination. In addition, employment is another great benefit to host countries. Holding the Olympics creates full-time jobs because of the investment in infrastructure. For example, in Atlanta, the host city of the 1996 Olympic Games, the government invested about $2 billion in Olympic-related projects, leading to over 580,000 new jobs in the region between 1991 and 1997. Steven and Bevan (1999) suggest that the Olympic Games stimulated economic growth of up to $5.1 billion between 1991 and 1997. During the period of the Games, in Barcelona, the host city of the 1992 Olympic Games, the general rate of unemployment dropped from 18.4% to 9.6% (Brunet, 1995). The second reason why holding the Olympic Games has benefits for the host country is that infrastructure such as transportation and sports facilities is improved during the Games. To guarantee a successful Olympics, the government has to invest in infrastructure, such as improving public transportation and sports facilities. Firstly, the Olympics promote urban development and have an impact on the landscape and urban environment. In Tokyo, the host city of the 1964 Olympic Games, a new road and highway network was constructed to meet the short-term demands of the Games and to accommodate the city's continued population and traffic increase in the long term.
Chalkley and Essex (1999) point out that a total of 22 main highways were designed for the Games, and huge amounts of money were spent on land acquisition, compensation and the provision of alternative sites for the displaced activities. In addition, the development of infrastructure is not only related to leisure facilities, commercial areas and open spaces; it also involves improving the appearance of the host city. Secondly, the staging of the Olympics often involves building new sporting facilities or restructuring existing ones. It is often claimed that these facilities fail to produce long-term benefits to the country, and that some of the sports venues become unused after the Olympics are finished. However, this ignores the fact that the whole of society benefits from infrastructural investment and environmental improvement. The London 2012 Olympic Games made a dedicated plan for the use of the facilities before they were built. For example, after the Games the Olympic Village will become new community housing; the new shopping centre, which is separate from the Olympic Park, will become an employment centre for the area; transportation will be improved through the construction of new stations, line extensions and additional trains; and a large urban park will be built, available both to the local community and to elite athletes (Olympic Delivery Authority, 2007). The final reason why holding the Olympic Games has benefits for the host country is that it helps to improve the host country's image. For the host country it is not just a competition about sports; it is a chance to improve its international prominence and its sense of national pride. Firstly, the Games contribute to transforming the image of the host city. In order to amplify the effect of the Olympic Games, it is necessary to rely on the media. During the Games, the worldwide TV audience watched a cumulative 36.1 billion hours of sport (IOC, 2001). This is one of the most effective ways to improve a nation's image and attract tourists. For example, in 1996, during the 17 days of the Centennial Olympic Games, it was reported that 3.5 billion people saw the city on worldwide television coverage in 214 countries and territories, and about two million people visited Atlanta; as a result, the tourist industry of the region grew dramatically (Steven and Bevan, 1999). It seems clear that a successful mega-event can enhance a city's reputation through global media coverage. Prior to the 1992 Olympics, Barcelona was only a large city in Spain, but it is now a famous destination which attracts great numbers of tourists. Furthermore, holding this mega international sporting event can attract the public's interest in taking part in sporting activities, and also increase local pride and community spirit, which can make a significant contribution to the quality of life of both the individual and the community. For example, there was a remarkable increase in participation in sports activities in Barcelona in the years following the hosting of the Olympic Games. There were 46,000 new users of the city's sports centres following the 1992 Games, with the percentage of women participating in sporting activities increasing from 35% in 1989 to 45% in 1995. Moreover, in 1994, more than 300,000 people took part in sporting events involving the city's inhabitants on the streets of Barcelona, such as athletic competitions, a popular marathon, the bicycle festival and the roller-skating festival (Truno, 1995). In conclusion, it is clear from the weight of evidence that holding the Olympic Games has benefits in economic growth, infrastructure improvement and image promotion for the host country. However, there are still some aspects that should get the government's attention.
For example, in order to manage financial risks such as budget overruns, the International Olympic Committee, together with local Olympic organisers, should draw up the capital budget precisely. Moreover, post-event facility usage should be considered before the infrastructure is built, so that it does not become a burden on the long-term economy. Only in this way can the host country maximise the economic benefits.

Bibliography:

Bolton, L. (2004) Despite Lackluster Ticket Sales, Can Greece Be a Big Winner in This Year's Olympics? [Online] Available at: http://knowledge.wharton.upenn.edu/article.cfm?articleid=1026 [Accessed 24/08/12].

Brunet, F. (1995) An economic analysis of the Barcelona '92 Olympic Games: resources, financing and impact, in Moragas, D. M. & Botella, M. (eds) The Keys of Success: the social, sporting, economic and communications impact of Barcelona '92. Bellaterra: Servei de Publicacions de la Universitat Autonoma de Barcelona.

Chalkley, B. & Essex, S. (1999) Urban development through hosting international events: a history of the Olympic Games. Planning Perspectives 14(4), pp. 369-394.

International Olympic Committee (2001) Sydney 2000 Olympic Games: Global Television Report. UK: Olympic Television Research Centre, Sports Marketing Surveys Ltd.

Roche, M. (2000) Mega-Events and Modernity: Olympics and Expos in the Growth of Global Culture. London: Routledge.

The Olympic Delivery Authority (2007) Guide to the Olympic, Paralympic & Legacy transformation planning applications and Olympic Village (part) and legacy residential planning application. Guide to Planning Applications [Online] (February 2007). Available at: http://www.london2012.com/mm%5CDocument%5CPublications%5CPlanningApps%5C01%5C24%5C08%5C36%5Cguide-to-the-planning-applications.df [Accessed 26/08/12].

Rose, K. & Spiegel, M. (2011) The Olympic Effect. The Economic Journal 121(3), pp. 652-677.

Steven, T. & Bevan, T. (1999) Olympic legacy. Sport Management Magazine 19(9), pp. 16-19.

Truno, E. (1995) Barcelona: city of sport, in Moragas, D. M. & Botella, M. (eds) The Keys of Success: the social, sporting, economic and communications impact of Barcelona '92. Bellaterra: Servei de Publicacions de la Universitat Autonoma de Barcelona.