
Thursday, July 16, 2015

GTA V Free Download

Grand Theft Auto V PC Game 2015 Overview

GTA 5 was developed by Rockstar North and published under the banner of Rockstar Games. The PC release date of this game is 14th April 2015. GTA V can be played from either a first-person or a third-person perspective.



GTA V PC Game Free Download: get ready for action. The story revolves around Michael Townley, who is under witness protection in Los Santos, San Andreas after a mismanaged robbery attempt in Ludendorff. Michael discovers that his wife is cheating on him and having an affair with her tennis coach. Furious to hear about the affair, Michael chases down the coach and tears apart the mansion he flees to, a house that turns out to be owned by the girlfriend of the drug lord Martin Madrazo. Martin now demands a heist at a jewelry shop from Michael as repayment, and from here onwards Michael's life changes: his family leaves him, and he and his old pals are left entangled in the criminal underworld. Now the team has to pull off several fatal heists to survive in a cold-blooded city where nobody is trustworthy.


GTA V has seen a major overhaul in its technical and visual aspects. The visuals are crisper, with support for 4K resolution, and the player can experience lots of new additions such as new weaponry, vehicles and wildlife. The traffic has been made denser, and the weather system has been enhanced greatly. GTA V also includes the brand new Rockstar Editor, which provides a whole bunch of editing tools so the player can create gameplay clips and publish them on YouTube. Lots of missions and linear scenarios have been included, and players can also roam freely in an open world. The world depicted in GTA V is much larger than in all previous releases and comprises the San Andreas countryside plus the fictional city of Los Santos. The player can run, jump and use vehicles to explore the area, and can also use melee attacks and different explosives to compete with enemies. Like all previous GTA releases, GTA 5 features multiple radio stations that play several genres of music whenever the player enters a vehicle.


Features of GTA V PC Game

Following are the main features of GTA V PC Game that you will be able to experience after the first install on your Operating System.
  • Impressive action adventure game.
  • Major overhauling in technical and visual aspects.
  • New weaponry and vehicles included.
  • New wildlife included.
  • Denser traffic.
  • Enhanced weather system.
  • Includes brand new Rockstar editor.
  • Can use melee attacks and different explosives.
  • Features multiple radio stations playing different genres of music.


    GTA V Free Download

    Click on the below button to start GTA V Free Download. It is a full and complete game. Just download it and start playing. We have provided a direct link to the full setup of the game.

    Before Installing the Game You Must Watch This Video


    Game Installation Video
    If You Still Get an Activation Message Even After Following the Installation Video, Then Set Your PC Date to 15th April 2015 and Try Again. The Game Will Work 100%.


    GTA V Windows 7, 8 and 8.1 Fix




Friday, March 29, 2013

The DDoS That Almost Broke the Internet

The New York Times this morning published a story about the Spamhaus DDoS attack and how CloudFlare helped mitigate it and keep the site online. The Times calls the attack the largest known DDoS attack ever on the Internet. We wrote about the attack last week. At the time, it was a large attack, sending 85Gbps of traffic. Since then, the attack got much worse. Here are some of the technical details of what we've seen.
Growth Spurt
On Monday, March 18, 2013 Spamhaus contacted CloudFlare regarding an attack they were seeing against their website spamhaus.org. They signed up for CloudFlare and we quickly mitigated the attack. The attack, initially, was approximately 10Gbps generated largely from open DNS recursors. On March 19, the attack increased in size, peaking at approximately 90Gbps. The attack fluctuated between 90Gbps and 30Gbps until 01:15 UTC on March 21.
The attackers were quiet for a day. Then, on March 22 at 18:00 UTC, the attack resumed, peaking at 120Gbps of traffic hitting our network. As we discussed in the previous blog post, CloudFlare uses Anycast technology which spreads the load of a distributed attack across all our data centers. This allowed us to mitigate the attack without it affecting Spamhaus or any of our other customers. The attackers ceased their attack against the Spamhaus website four hours after it started.
Other than the scale, which was already among the largest DDoS attacks we've seen, there was nothing particularly unusual about the attack to this point. Then the attackers changed their tactics. Rather than attacking our customers directly, they started going after the network providers CloudFlare uses for bandwidth. More on that in a second; first, a bit about how the Internet works.
Peering on the Internet
The "inter" in Internet refers to the fact that it is a collection of independent networks connected together. CloudFlare runs a network, Google runs a network, and bandwidth providers like Level3, AT&T, and Cogent run networks. These networks then interconnect through what are known as peering relationships.
When you surf the web, your browser sends and receives packets of information. These packets are sent from one network to another. You can see this by running a traceroute. Here's one from Stanford University's network to the New York Times' website (nytimes.com):
1  rtr-servcore1-serv01-webserv.slac.stanford.edu (134.79.197.130)  0.572 ms
 2  rtr-core1-p2p-servcore1.slac.stanford.edu (134.79.252.166)  0.796 ms
 3  rtr-border1-p2p-core1.slac.stanford.edu (134.79.252.133)  0.536 ms
 4  slac-mr2-p2p-rtr-border1.slac.stanford.edu (192.68.191.245)  25.636 ms
 5  sunncr5-ip-a-slacmr2.es.net (134.55.36.21)  3.306 ms
 6  eqxsjrt1-te-sunncr5.es.net (134.55.38.146)  1.384 ms
 7  xe-0-3-0.cr1.sjc2.us.above.net (64.125.24.1)  2.722 ms
 8  xe-0-1-0.mpr1.sea1.us.above.net (64.125.31.17)  20.812 ms
 9  209.249.122.125 (209.249.122.125)  21.385 ms
There are three networks in the above traceroute: stanford.edu, es.net, and above.net. The request starts at Stanford. Between lines 4 and 5 it passes from Stanford's network to their peer es.net. Then, between lines 6 and 7, it passes from es.net to above.net, which appears to provide hosting for the New York Times. This means Stanford has a peering relationship with ES.net. ES.net has a peering relationship with Above.net. And Above.net provides connectivity for the New York Times.
CloudFlare connects to a large number of networks. You can get a sense of some, although not all, of the networks we peer with through a tool like Hurricane Electric's BGP looking glass. CloudFlare connects to peers in two ways. First, we connect directly to certain large carriers and other networks to which we send a large amount of traffic. In this case, we connect our router directly to the router at the border of the other network, usually with a piece of fiber optic cable. Second, we connect to what are known as Internet Exchanges, IXs for short, where a number of networks meet in a central point.
Most major cities have an IX. The model for IXs differs in different parts of the world. Europe runs some of the most robust IXs, and CloudFlare connects to several of them including LINX (the London Internet Exchange), AMS-IX (the Amsterdam Internet Exchange), and DE-CIX (the Frankfurt Internet Exchange), among others. The major networks that make up the Internet (Google, Facebook, Yahoo, etc.) connect to these same exchanges to pass traffic between each other efficiently. When the Spamhaus attacker realized he couldn't go after CloudFlare directly, he began targeting our upstream peers and exchanges.
Headwaters
Once the attackers realized they couldn't knock CloudFlare itself offline even with more than 100Gbps of DDoS traffic, they went after our direct peers. In this case, they attacked the providers from whom CloudFlare buys bandwidth. We primarily contract with what are known as Tier 2 providers for CloudFlare's paid bandwidth. These companies peer with other providers and also buy bandwidth from so-called Tier 1 providers.
There are approximately a dozen Tier 1 providers on the Internet. The nature of these providers is that they don't buy bandwidth from anyone. Instead, they engage in what is known as settlement-free peering with the other Tier 1 providers. Tier 2 providers interconnect with each other and then buy bandwidth from the Tier 1 providers in order to ensure they can connect to every other point on the Internet. At the core of the Internet, if all else fails, it is these Tier 1 providers that ensure that every network is connected to every other network. If one of them fails, it's a big deal.
Anycast means that if the attackers had attacked the last step in the traceroute, their attack would have been spread across CloudFlare's worldwide network, so instead they attacked the second-to-last step, which concentrated the attack on a single point. This wouldn't cause a network-wide outage, but it could potentially cause regional problems.
We carefully select our bandwidth providers to ensure they have the ability to deal with attacks like this. Our direct peers quickly filtered attack traffic at their edge. This pushed the attack upstream to their direct peers, largely Tier 1 networks. Tier 1 networks don't buy bandwidth from anyone, so the majority of the weight of the attack ended up being carried by them. While we don't have direct visibility into the traffic loads they saw, we have been told by one major Tier 1 provider that they saw more than 300Gbps of attack traffic related to this attack. That would make this attack one of the largest ever reported.
The challenge with attacks at this scale is that they risk overwhelming the systems that link together the Internet itself. The largest routers that you can buy have, at most, 100Gbps ports. It is possible to bond more than one of these ports together to create capacity greater than 100Gbps; however, at some point, there are limits to how much these routers can handle. If that limit is exceeded, the network becomes congested and slows down.
Over the last few days, as these attacks have increased, we've seen congestion across several major Tier 1s, primarily in Europe where most of the attacks were concentrated, that would have affected hundreds of millions of people even as they surfed sites unrelated to Spamhaus or CloudFlare. If the Internet felt a bit more sluggish for you over the last few days in Europe, this may be part of the reason why.
Attacks on the IXs
In addition to CloudFlare's direct peers, we also connect with other networks over the so-called Internet Exchanges (IXs). These IXs are, at their most basic level, switches into which multiple networks connect and can then pass bandwidth. In Europe, these IXs are run as non-profit entities and are considered critical infrastructure. They interconnect hundreds of the world's largest networks including CloudFlare, Google, Facebook, and just about every other major Internet company.
Beyond attacking CloudFlare's direct peers, the attackers also attacked the core IX infrastructure on the London Internet Exchange (LINX), the Amsterdam Internet Exchange (AMS-IX), the Frankfurt Internet Exchange (DE-CIX), and the Hong Kong Internet Exchange (HKIX). From our perspective, the attacks had the largest effect on LINX, impacting both the exchange itself and the systems LINX uses to monitor the exchange, as visible through the drop in traffic recorded by their monitoring systems. (Corrected: see below for original phrasing.)
The congestion impacted many of the networks on the IXs, including CloudFlare's. As problems were detected on the IX, we would route traffic around them. However, several London-based CloudFlare users reported intermittent issues over the last several days. This is the root cause of those problems.
The attacks also exposed some vulnerabilities in the architecture of some IXs. We, along with many other network security experts, worked with the team at LINX to better secure themselves. In doing so, we developed a list of best practices for any IX in order to make them less vulnerable to attacks.
Two specific suggestions to limit attacks like this involve making it more difficult to attack the IP addresses that members of the IX use to interchange traffic between each other. We are working with IXs to ensure that: 1) these IP addresses are not announced as routable across the public Internet; and 2) packets destined for these IP addresses are only permitted from other IX IP addresses. We've been very impressed with the team at LINX and how quickly they've worked to implement these changes and add additional security to their IX, and we are hopeful other IXs will quickly follow their lead.
The Full Impact of the Open Recursor Problem
At the bottom of this attack we once again find the problem of open DNS recursors. The attackers were able to generate more than 300Gbps of traffic likely with a network of their own that only had access to 1/100th of that amount of bandwidth itself. We've written before about how these misconfigured DNS recursors are a bomb waiting to go off, one that literally threatens the stability of the Internet itself. We've now seen an attack that begins to illustrate the full extent of the problem.
While lists of open recursors have been passed around on network security lists for the last few years, on Monday the full extent of the problem was, for the first time, made public. The Open Resolver Project made available the full list of the 21.7 million open resolvers online in an effort to shut them down.
We'd debated doing the same thing ourselves for some time but worried about the collateral damage of what would happen if such a list fell into the hands of the bad guys. The last five days have made clear that the bad guys have the list of open resolvers and they are getting increasingly brazen in the attacks they are willing to launch. We are in full support of the Open Resolver Project and believe it is incumbent on all network providers to work with their customers to close any open resolvers running on their networks.
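For network operators who want to audit their own infrastructure, the test for an open recursor is straightforward: send the server a recursive query from an outside address and see whether it answers. Here is a minimal sketch, assuming the third-party dnspython library (pip install dnspython); the server address below is a documentation placeholder, to be swapped for a resolver you actually operate:

import dns.flags
import dns.message
import dns.query

def is_open_recursor(server_ip, probe_name="example.com."):
    # A normal stub query; dnspython sets the RD ("recursion desired")
    # flag by default.
    query = dns.message.make_query(probe_name, "A")
    try:
        response = dns.query.udp(query, server_ip, timeout=3)
    except Exception:
        return False  # no usable reply at all, so not an open recursor
    # RA ("recursion available") plus an actual answer for an outside
    # client means the server will do recursive work for anyone.
    return bool(response.flags & dns.flags.RA) and len(response.answer) > 0

print(is_open_recursor("192.0.2.53"))  # placeholder address (TEST-NET-1)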

Unlike traditional botnets, which could only generate limited traffic because of the modest Internet connections of the home PCs they typically run on, these open resolvers are typically running on big servers with fat pipes. They are like bazookas, and the events of the last week have shown the damage they can cause. What's troubling is that, compared with what is possible, this attack may prove to be relatively modest.
As someone in charge of DDoS mitigation at one of the Internet giants emailed me this weekend: "I've often said we don't have to prepare for the largest-possible attack, we just have to prepare for the largest attack the Internet can send without causing massive collateral damage to others. It looks like you've reached that point, so... congratulations!"
At CloudFlare one of our goals is to make DDoS something you only read about in the history books. We're proud of how our network held up under such a massive attack and are working with our peers and partners to ensure that the Internet overall can stand up to the threats it faces.
Correction: The original sentence about the impact on LINX was "From our perspective, the attacks had the largest effect on LINX which for a little over an hour on March 23 saw the infrastructure serving more than half of the usual 1.5Tbps of peak traffic fail." That was not well phrased, and has been edited, with notation in place.

Spamhaus’ attackers turned DNS into a weapon of mass destruction



A little more than a year ago, details emerged about an effort by some members of the hacktivist group Anonymous to build a new weapon to replace their aging denial-of-service arsenal. The new weapon would use the Internet's Domain Name Service as a force-multiplier to bring the servers of those who offended the group to their metaphorical knees. Around the same time, an alleged plan for an Anonymous operation, "Operation Global Blackout" (later dismissed by some security experts and Anonymous members as a "massive troll"), sought to use the DNS service against the very core of the Internet itself in protest against the Stop Online Piracy Act.
This week, an attack using the technique proposed for use in that attack tool and operation—both of which failed to materialize—was at the heart of an ongoing denial-of-service assault on Spamhaus, the anti-spam clearing house organization. And while it hasn't brought the Internet itself down, it has caused major slowdowns in the Internet's core networks.
DNS Amplification (or DNS Reflection) remains possible after years of security expert warnings. Its power is a testament to how hard it is to get organizations to make simple changes that would prevent even recognized threats. Some network providers have made tweaks that prevent botnets or "volunteer" systems within their networks from staging such attacks. But thanks to public cloud services, "bulletproof" hosting services, and other services that allow attackers to spawn and then reap hundreds of attacking systems, DNS amplification attacks can still be launched at the whim of a deep-pocketed attacker: for example, the cyber-criminals running the spam networks that Spamhaus tries to shut down.

Hello, operator?

The Domain Name Service is the Internet's directory assistance line. It allows computers to get the numerical Internet Protocol (IP) address for a remote server or other network-attached device based on its human-readable host and domain name. DNS is organized in a hierarchy; each top-level domain name (such as .com, .edu, .gov, .net, and so on) has a "root" DNS server keeping a list of each of the "authoritative" DNS servers for each domain registered with them. If you've ever bought a domain through a domain registrar, you've created (either directly or indirectly) an authoritative DNS address for that domain by selecting the primary and secondary DNS servers that go with it.
When you type "arstechnica.com" into your browser's address bar and hit the return key, your browser checks with a DNS resolver (your personal Internet 411 service) to determine where to send the Web request. For some requests, the resolver may be on your PC. (For example, this happens if you've requested a host name that's in a local "hosts" table for servers within your network, or one that's stored in your computer's local cache of DNS addresses you've already looked up.) But if it's the first time you've tried to connect to a computer by its host and domain name, the resolver for the request is probably running on the DNS server configured for your network—within your corporate network, at an Internet provider, or through a public DNS service such as Google's Public DNS.
There are two ways for a resolver to get the authoritative IP address for a domain name that isn't in its cache: an iterative request and a recursive request. In an iterative request, the resolver pings the top-level domain's DNS servers for the authoritative DNS for the destination domain, then it sends a DNS request for the full hostname to that authoritative server. If the computer that the request is seeking is in a subdomain or "zone" within a larger domain—such as www.subdomain.domain.com—it may tell the resolver to go ask that zone's DNS server. The resolver "iterates" the request down through the hierarchy of DNS servers until it gets an answer.
But on some networks, the DNS resolver closest to the requesting application doesn't handle all that work. Instead, it sends a "recursive" request to the next DNS server up and lets that server handle all of the walking through the DNS hierarchy for it. Once all the data is collected from the root, domain, and subdomain DNS servers for the requested address, the resolver then pumps the answer back to its client.
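To make the iterative walk concrete, here is a rough sketch of it using the third-party dnspython library. It clears the "recursion desired" flag and follows referrals by hand, starting from a.root-servers.net; a production resolver would also handle glue-less referrals, caching, and much more, so treat this as an illustration only:

import random
import dns.flags
import dns.message
import dns.query
import dns.rdatatype

def iterate(hostname, server="198.41.0.4", depth=0):
    # 198.41.0.4 is a.root-servers.net. EDNS0 lets a referral carry more
    # glue records than the classic 512-byte UDP limit would allow.
    query = dns.message.make_query(hostname, "A")
    query.use_edns(0, payload=1232)
    query.flags &= ~dns.flags.RD  # iterative: recursion NOT desired
    response = dns.query.udp(query, server, timeout=3)
    if response.answer:  # the authoritative server answered: done
        return response.answer
    # Otherwise this is a referral: pick a glue address from the
    # additional section and ask one level further down the hierarchy.
    glue = [rd.address
            for rrset in response.additional
            if rrset.rdtype == dns.rdatatype.A
            for rd in rrset]
    if not glue or depth > 10:
        return None  # glue-less referral or too deep: give up
    return iterate(hostname, random.choice(glue), depth + 1)

print(iterate("google.com."))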







How DNS queries are supposed to work—when they're not being used as weapons.
To save time, DNS requests don't use the "three-way handshake" of the Transmission Control Protocol (TCP) to make all these queries. Instead, DNS typically uses the User Datagram Protocol (UDP)—a "connectionless" protocol that lets the server fire and forget requests.
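The transport difference is easy to observe: dnspython, for instance, exposes each transport as a single call, and the UDP exchange is one datagram out and one back, while the TCP exchange is preceded by the handshake. Google's public resolver at 8.8.8.8 is used here purely as a reachable example:

import dns.message
import dns.query

q = dns.message.make_query("arstechnica.com.", "A")
r_udp = dns.query.udp(q, "8.8.8.8", timeout=3)  # one datagram each way
r_tcp = dns.query.tcp(q, "8.8.8.8", timeout=3)  # three-way handshake first
print(r_udp.answer)
print(r_tcp.answer)

Because the UDP request is a single unauthenticated datagram, nothing stops a sender from writing someone else's address into the source field, which is exactly what the attacks described below rely on.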

Pump up the volume

That makes the sending of requests and responses quicker—but it also opens up a door to abuse of DNS that DNS amplification uses to wreak havoc on a target. All the attacker has to do is find a DNS server open to requests from any client and send it requests forged as being from the target of the attack. And there are millions of them.
The "amplification" in DNS amplification attacks comes from the size of those responses. While a DNS lookup request itself is fairly small, the resulting response of a recursive DNS lookup can be much larger. A relatively small number of attacking systems sending a trickle of forged UDP packets to open DNS servers can result in a firehose of data being blasted at the attackers' victim.
DNS amplification attacks wouldn't be nearly as amplified if it weren't for the "open" DNS servers they use to fuel the attacks. These servers have been configured (or misconfigured) to answer queries from addresses outside of their network. The volume of traffic that can be generated by such open DNS servers is huge. Ars reported on a paper presented by Randal Vaughn of Baylor University and Israeli security consultant Gadi Evron at the 2006 DefCon security conference. The authors documented a series of DNS amplification attacks in late 2005 and early 2006 that generated massive traffic loads for the routers of their victims. In one case, the traffic was "as high as 10Gbps and used as many as 140,000 exploited name servers," Vaughn and Evron reported. "A DNS query consisting of a 60 byte request can be answered with responses of over 4000 bytes, amplifying the response packet by a factor of 60."
But even if you can't find an open DNS server to blast recursive responses from, you can still depend on the heart of the Internet for a respectable hail of packet projectiles. A "root hint" request, a request for the name servers for the "." domain, results in a response 20 times larger than the packet the request came in. That's in part thanks to DNSSEC, the standard adopted to make it harder to spoof DNS responses, since now the response includes cryptographic signature data from the responding server.
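The amplification factor does not have to be taken on faith; the request and response sizes can simply be measured. A sketch with dnspython that asks a root server for the root NS set with DNSSEC records requested (EDNS0 with the DO bit set):

import dns.flags
import dns.message
import dns.query

# "Root hint" style query: the name servers for the "." domain.
query = dns.message.make_query(".", "NS")
query.use_edns(0, dns.flags.DO, 4096)  # request DNSSEC data, 4096-byte buffer
response = dns.query.udp(query, "198.41.0.4", timeout=3)  # a.root-servers.net

req_size = len(query.to_wire())
resp_size = len(response.to_wire())
print(f"request: {req_size} bytes, response: {resp_size} bytes, "
      f"amplification: {resp_size / req_size:.1f}x")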







A comparison of a "root hint" query and the response delivered by the DNS server. Not all data shown.
In the case of the attack on Spamhaus, the organization was able to turn to the content delivery network CloudFlare for help. CloudFlare hid Spamhaus behind its CDN, which uses the Anycast feature of the Border Gateway Protocol to cause packets destined for the antispam provider's site to be routed to the closest CloudFlare point of presence. This spread out the volume of the attack. And CloudFlare was able to then shut off amplified attacks aimed at Spamhaus with routing filters that blocked aggregated DNS responses matching the pattern of the attack.
But that traffic still had to get to CloudFlare before it could be blocked. And that resulted in a traffic jam in the core of the Internet, slowing connections for the Internet as a whole.

No fix on the horizon

The simplest way to prevent DNS amplification and reflection attacks would be to prevent forged DNS requests from being sent along in the first place. But that "simple" fix isn't exactly easy—or at least easy to get everyone who needs to participate to do.
There's been a proposal on the books to fix the problem for nearly 13 years—the Internet Engineering Task Force's BCP 38, an approach to "ingress filtering" of packets. First pitched in 1998 as part of RFC 2267 and standardized in 2000 as RFC 2827, the proposal has gone nowhere. And while the problem would be greatly reduced if zone and domain DNS servers were simply configured not to answer recursive or even "root hint" queries received from outside their own networks, that would require action by the owners of the networks. It's an action that doesn't have a direct monetary or security benefit associated with it.
ISPs generally do "egress filtering"—they check outbound traffic to make sure it's coming from IP addresses within their network.  This prevents them from filling up their peering connections with bad traffic.  But "ingress" filtering would check to make sure that requests coming in through a router were coming from the proper direction based on their advertised IP source.
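In software terms, the check BCP 38 asks edge routers to make is tiny: does a packet's claimed source address fall inside the prefixes assigned to the interface it arrived on? A toy illustration using Python's standard ipaddress module, with documentation prefixes standing in for a real customer assignment:

import ipaddress

# Prefixes assigned to the customer behind this interface (placeholders).
CUSTOMER_PREFIXES = [ipaddress.ip_network("203.0.113.0/24"),
                     ipaddress.ip_network("198.51.100.0/24")]

def ingress_permitted(source_ip):
    # Forward the packet only if its claimed source really belongs to
    # the network on this side of the interface.
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in CUSTOMER_PREFIXES)

print(ingress_permitted("203.0.113.77"))  # True: legitimate source
print(ingress_permitted("8.8.8.8"))       # False: spoofed, drop it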
Another possible solution that would eliminate the problem entirely is to make DNS use TCP for everything—reducing the risk of forged packets.  DNS already uses TCP for tasks like zone transfers. But that would require a change to DNS itself, so it's unlikely that would ever happen, considering that you can't even convince people to properly configure their DNS servers to begin with.
Maybe the attack on Spamhaus will change that, and core network providers will move to do more to filter DNS traffic that doesn't seem to match up with known DNS servers. Maybe, just maybe, BCP 38 will get some traction. And maybe pigs will fly.

DDoS attack against Spamhaus later targeted Tier 1 providers

IDG News Service - A distributed denial-of-service (DDoS) attack of unprecedented scale that targeted an international spam-fighting organization last week ended up causing problems for Internet users around the world, experts say.
The DDoS attack started more than a week ago and targeted the Spamhaus Project, an organization based in Geneva, Switzerland, and London that maintains databases of IP (Internet Protocol) addresses, domain names and other Internet resources involved in spam, malware and other abusive online activities.
Spamhaus publishes the data in the form of block lists that are used by Internet and email service providers, corporations, universities and governments around the world to filter Internet traffic on their networks and servers.
In order to keep its services and website online, Spamhaus enlisted the help of a San Francisco-based company called CloudFlare that runs a global content delivery network aimed at improving website performance.
CloudFlare said in a blog post last week that it had mitigated an attack against Spamhaus that peaked at 75Gbps. However, the attack has significantly increased in scale since then, said Matthew Prince, CloudFlare's CEO, Wednesday via email.
Seeing that CloudFlare's network infrastructure allowed the company to mitigate the original attack, the attackers decided to move upstream and directly target CloudFlare's Internet service providers and then the upstream providers of those providers, Prince said Wednesday in a blog post.
The attackers ultimately targeted Tier 1 providers, which operate the networks at the core of the Internet, and Internet Exchanges (IX), critical nodes located around the world that connect large networks like those of Google, Facebook, Yahoo and pretty much every major Internet company.
"While we don't have direct visibility into the traffic loads they saw, we have been told by one major Tier 1 provider that they saw more than 300Gbps of attack traffic related to this attack," Prince said.
"We've seen congestion across several major Tier 1s, primarily in Europe where most of the attacks were concentrated, that would have affected hundreds of millions of people even as they surfed sites unrelated to Spamhaus or CloudFlare," Prince said. "If the Internet felt a bit more sluggish for you over the last few days in Europe, this may be part of the reason why."
"Given the 300Gbps number being reported, this would be the largest publicly acknowledged attack on record," said Patrick Gilmore, chief architect at Akamai Technologies, Wednesday via email. Akamai operates one of the world's largest content delivery networks.
In general, when an attack is very large, it can fill the Internet pipes and hurt infrastructure between the source of the attack and the intended victim, Gilmore said.
"We agree that the size of the attack was around 300Gbps," said Dan Holden, director of the security and engineering response team at Arbor Networks, a DDoS mitigation provider. "The largest attack we have previously seen was of around 100Gbps back in 2010."







The method of attack used in this case is known as DNS reflection and involves sending spoofed requests to so-called open DNS (Domain Name System) resolvers -- DNS servers that can be queried by anyone on the Internet -- that appear to originate from the intended victim's IP address. The attackers usually craft their requests so that the responses returned to the victim by the queried servers would be very large.
DNS reflection attacks are not new and there are millions of open DNS resolvers on the Internet that can be abused in this way.
This type of attack can be mitigated by the victim or the provider that is defending against the attack, but in this particular case, because of its size, the attack also stressed the rest of the Internet along the way, Holden said. "It was essentially stressful to the fabric of the Internet."
Holden hopes that the size of the attack and the attention it received will help speed up efforts to rid the Internet of open DNS resolvers. However, he agreed that in the short term it might actually encourage other attackers to use the same attack method because of its success.
A group called the Stophaus Movement has taken responsibility for the unprecedented attack. The group claims that Spamhaus is abusing its position of power to force hosting companies to end their business relationships with certain customers that are flagged as spammers without any court order or legal oversight.
The members of the Stophaus Movement are hosting companies and other parties that have been flagged by Spamhaus as spammers themselves because they refused to comply with Spamhaus' requests, said Sven Kamphuis, who claims to be a spokesman for the group, on Wednesday.
Kamphuis runs a network provider called CB3ROB that has been blacklisted by Spamhaus for hosting spam botnets and extortion scams. CB3ROB is a provider for a Dutch hosting company called CyberBunker.com that allows its customers to "host any content they like, except child porn and anything related to terrorism."
"I'm not a spammer and none of the Stophaus members are," Kamphuis said. If a company gets blacklisted by Spamhaus its bandwidth providers get blacklisted too, he said. This means that if CB3ROB gets blacklisted and this company has KPN as a bandwidth supplier, KPN's mail servers get blacklisted too, he said. Those suppliers then often decide to terminate the contract to keep themselves off the blacklist, he added.
Because of this and because so many providers use Spamhaus' blacklist, the organization "acts like they are the de facto Internet police," Kamphuis said. "Everyone in the business has had more than enough of Spamhaus."

Spamhaus attacks expose huge open DNS server dangers

Massive distributed denial-of-service attacks on Spamhaus this week focused widespread attention on the huge security threats posed by millions of poorly configured Internet Domain Name System (DNS) servers.
The attacks on Spamhaus that began March 19 were apparently launched by a group opposed to the Geneva, Switzerland-based volunteer organization's antispam work.
Several security firms described the attacks on the organization as by far the largest publicly known DDoS attacks to date.
In DDoS attacks, hackers typically try to take down a network by directing huge volumes of useless traffic to it. The traffic is usually generated using large botnets of compromised computers.

Large DDoS attacks have typically tended to involve between 4Gbps and 10Gbps of traffic.
The Spamhaus attacks involved traffic volumes that reached a staggering 300Gbps -- said to be three times larger than the largest DDoS traffic seen to date and magnitudes greater than the traffic involved in a majority of past denial-of-service attacks.
The perpetrators behind the attack employed a well-known but infrequently used method called DNS reflection to generate the huge stream of DDoS traffic directed against Spamhaus.
DNS servers are used primarily to look up and resolve domain names such as www.computerworld.com and www.idg.com to their corresponding IP addresses. If a DNS server does not have the domain information in its database or cache, it queries other nearby DNS servers for the information.
Ideally, DNS servers should be configured only to handle look-up requests coming from within a specific domain or IP address range. So a DNS server belonging to an ISP should handle only requests coming from within its IP address range.
In reality, however, millions of DNS servers are configured by default to be open DNS resolvers that accept and respond to queries from outside their own domain, making them vulnerable to exploitation by attackers because virtually anyone on the Internet can use an open DNS server to handle genuine or malicious queries.
For instance, to generate DDoS traffic, the attackers behind the Spamhaus attack sent queries with a spoofed source address to tens of thousands of open DNS resolvers, said Matthew Prince, CEO of CloudFlare, which has been helping Spamhaus deal with the recent attacks.
The lookup requests were made to appear as if they came from Spamhaus. So the responses to the requests from the tens of thousands of open DNS resolvers were sent to Spamhaus, generating a huge volume of traffic.
To magnify the volume of traffic, the attackers crafted the look-up queries in such a manner as to get each open DNS server to respond with much larger volumes of data than normal, Prince said.








Denial-of-service attacks that take advantage of open DNS resolvers are not new.
As far back as in 2006, more than 1,500 organizations around the world were hit by a series of similar attacks, prompting wide concern from security experts.
Then, as now, many security experts warned that ISPs and others operating DNS servers must ensure that their systems are properly configured to prevent attacks such as the one launched against Spamhaus. The problem remains as pervasive as ever despite the warnings, experts note today.
The Open DNS Resolver Project, an effort by a group of security experts to draw attention to the issue, estimates that there are currently about 27 million DNS servers that are open resolvers. About 25 million of those pose a significant threat, according to the project's website.


According to Prince, barely 100,000 of the open resolvers were used to direct 300Gbps of traffic against the organization. "What's spooky here is that only a tiny fraction of the open resolvers were used," he said. The attackers could easily have co-opted more DNS servers, Prince noted.
"This is a situation where some configuration changes on the DNS server side can help prevent the attacks," said Alex Cox, a principal security researcher with RSA Security's FirstWatch team.
But the required changes are difficult to get without a broad collaboration among ISPs. "The problem with a DNS attack is you can't really turn your DNS servers off" without causing widespread disruption, Cox said. "Once this thing blows over it will be interesting to see how some of the folks whose infrastructure was used will respond."
The perpetrators of this week's attacks knew that Spamhaus had a good infrastructure in place to deal with denial-of-service attacks and therefore had to do something really big, said Dan Holden, director of the security and engineering response team at Arbor Networks.
Such attacks are not fully defendable but can be mitigated by ensuring that DNS servers are configured properly, he said. "The good news is that these open DNS resolvers will get a lot more visibility" following the attacks, he said. "So hopefully the issue will get fixed."
Several standards are readily available to help ISPs and others operating DNS servers to configure their systems to ensure they respond only to requests from their own users, said Mike Smith, director of the customer security incident response team at Akamai.
DNS server operators also need to have egress filtering controls in place to ensure that the DNS traffic leaving their networks originated from inside their network, he said.
The Open DNS Resolver Project also calls on DNS server operators to consider implementing rate-limiting software to prevent the sort of traffic amplification that was used in the Spamhaus attacks.
"There are things that need to get cleaned up. That is why we need some awareness of the problem," Smith said.

Tuesday, March 26, 2013

The OSI Reference Model in Brief

One of the greatest functions of the OSI specifications is to assist in data transfer between disparate hosts, meaning, for example, that they enable us to transfer data between a Unix host and a PC or a Mac.
The OSI isn't a physical model, though. Rather, it's a set of guidelines that application developers can use to create and implement applications that run on a network. It also provides a framework for creating and implementing networking standards, devices, and internetworking schemes.
The OSI has seven different layers, divided into two groups. The top three layers define how the applications within the end stations will communicate with each other and with users. The bottom four layers define how data is transmitted end to end. Figure 1.6 shows the three upper layers and their functions, and Figure 1.7 shows the four lower layers and their functions.
FIGURE 1.6 The upper layers
Application: provides a user interface
Presentation: presents data; handles processing such as encryption
Session: keeps different applications' data separate
(The figure shows these three layers at the top of the full seven-layer stack: Application, Presentation, Session, Transport, Network, Data Link, Physical.)











When you study Figure 1.6, understand that the user interfaces with the computer at the Application layer and also that the upper layers are responsible for applications communicating between hosts. Remember that none of the upper layers knows anything about networking or network addresses. That's the responsibility of the four bottom layers.
In Figure 1.7, you can see that it's the four bottom layers that define how data is transferred through a physical wire or through switches and routers. These bottom layers also determine how to rebuild a data stream from a transmitting host to a destination host's application.
FIGURE 1.7 The lower layers
The following network devices operate at all seven layers of the OSI model:
Network management stations (NMSs)
Web and application servers
Gateways (not default gateways)
Network hosts
Basically, the ISO is pretty much the Emily Post of the network protocol world. Just as Ms. Post wrote the book setting the standards, or protocols, for human social interaction, the ISO developed the OSI reference model as the precedent and guide for an open network protocol set. Defining the etiquette of communication models, it remains today the most popular means of comparison for protocol suites.
The OSI reference model has seven layers:
Application layer (layer 7)
Presentation layer (layer 6)
Session layer (layer 5)
Transport layer (layer 4)
Network layer (layer 3)
Data Link layer (layer 2)
Physical layer (layer 1)











The Application Layer
The Application layer of the OSI model marks the spot where users actually communicate with the computer. This layer only comes into play when it's apparent that access to the network is going to be needed soon. Take the case of Internet Explorer (IE). You could uninstall every trace of networking components from a system, such as TCP/IP, the NIC, and so on, and you could still use IE to view a local HTML document, no problem. But things would definitely get messy if you tried to do something like view an HTML document that must be retrieved using HTTP or nab a file with FTP or TFTP. That's because IE will respond to requests such as those by attempting to access the Application layer. What's happening is that the Application layer is acting as an interface between the actual application program, which isn't at all a part of the layered structure, and the next layer down, by providing ways for the application to send information down through the protocol stack. In other words, IE doesn't truly reside within the Application layer; it interfaces with Application layer protocols when it needs to deal with remote resources.
The Application layer is also responsible for identifying and establishing the availability of the intended communication partner and determining whether sufficient resources for the intended communication exist.
These tasks are important because computer applications sometimes require more than only desktop resources. Often, they'll unite communicating components from more than one network application. Prime examples are file transfers and email, as well as enabling remote access, network management activities, client/server processes, and information location. Many network applications provide services for communication over enterprise networks, but for present and future internetworking, the need is fast developing to reach beyond the limits of current physical networking.












The Presentation Layer
The Presentation layer gets its name from its purpose: It presents data to the Application layer and is responsible for data translation and code formatting.
This layer is essentially a translator and provides coding and conversion functions. A successful data-transfer technique is to adapt the data into a standard format before transmission. Computers are configured to receive this generically formatted data and then convert the data back into its native format for actual reading (for example, EBCDIC to ASCII). By providing translation services, the Presentation layer ensures that data transferred from the Application layer of one system can be read by the Application layer of another one.
The OSI has protocol standards that define how standard data should be formatted. Tasks like data compression, decompression, encryption, and decryption are associated with this layer. Some Presentation layer standards are involved in multimedia operations too.
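Python's codec machinery gives a quick feel for the kind of translation meant here; cp500 is one of the EBCDIC code pages, so a round trip through it mirrors the EBCDIC-to-ASCII example above:

# Encode a native string into EBCDIC bytes, then translate it back.
text = "OSI model"
ebcdic_bytes = text.encode("cp500")       # EBCDIC wire format
roundtrip = ebcdic_bytes.decode("cp500")  # back to the native form
print(ebcdic_bytes.hex(), "->", roundtrip)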
The Session Layer
The Session layer is responsible for setting up, managing, and then tearing down sessions between Presentation layer entities. This layer also provides dialog control between devices, or nodes. It coordinates communication between systems and serves to organize their communication by offering three different modes: simplex, half duplex, and full duplex. To sum up, the Session layer basically keeps different applications' data separate from other applications' data.
The Transport Layer
The Transport layer segments and reassembles data into a data stream. Services located in the Transport layer segment and reassemble data from upper-layer applications and unite it into the same data stream. They provide end-to-end data transport services and can establish a logical connection between the sending host and destination host on an internetwork.
Some of you are probably familiar with TCP and UDP already. (But if you're not, no worries; I'll tell you all about them in Chapter 2.) If so, you know that both work at the Transport layer and that TCP is a reliable service and UDP is not. This means that application developers have more options because they have a choice between the two protocols when working with TCP/IP.
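The difference between the two is easy to feel with a few lines of Python's standard socket module; the address below is a placeholder for a host of your own running a listener:

import socket

# TCP: the three-way handshake happens inside connect(); delivery is
# ordered and acknowledged.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("192.0.2.10", 9000))
tcp.sendall(b"reliable stream")
tcp.close()

# UDP: no handshake and no delivery guarantee; the datagram is just sent.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"best-effort datagram", ("192.0.2.10", 9000))
udp.close()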




The Network Layer
The Network layer (also called layer 3) manages device addressing, tracks the location of devices on the network, and determines the best way to move data, which means that the Network layer must transport traffic between devices that aren't locally attached. Routers (layer 3 devices) are specified at the Network layer and provide the routing services within an internetwork.
It happens like this: First, when a packet is received on a router interface, the destination IP address is checked. If the packet isn't destined for that particular router, the router looks up the destination network address in the routing table. Once the router chooses an exit interface, the packet is sent to that interface to be framed and sent out on the local network. If the router can't find an entry for the packet's destination network in the routing table, the router drops the packet.
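In essence, that lookup is a longest-prefix match. A miniature routing table built on Python's standard ipaddress module makes the forward-or-drop logic explicit; every prefix and interface name here is invented for illustration:

import ipaddress

ROUTES = [  # (destination network, exit interface)
    (ipaddress.ip_network("10.0.0.0/8"), "Serial0"),
    (ipaddress.ip_network("10.1.1.0/24"), "FastEthernet0/0"),
    (ipaddress.ip_network("192.168.0.0/16"), "Serial1"),
]

def lookup(dest_ip):
    addr = ipaddress.ip_address(dest_ip)
    matches = [(net, iface) for net, iface in ROUTES if addr in net]
    if not matches:
        return None  # no entry for the destination network: drop the packet
    # The most specific (longest) matching prefix wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.1.1.5"))    # FastEthernet0/0 (the /24 beats the /8)
print(lookup("172.16.0.1"))  # None: the packet is dropped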
Two types of packets are used at the Network layer: data and route updates.
Data packets: Used to transport user data through the internetwork. Protocols used to support data traffic are called routed protocols; examples of routed protocols are IP and IPv6. You'll learn about IP addressing in Chapters 2 and 3 and IPv6 in Chapter 13.
Route update packets: Used to update neighboring routers about the networks connected to all routers within the internetwork. Protocols that send route update packets are called routing protocols; examples of some common ones are RIP, RIPv2, EIGRP, and OSPF. Route update packets are used to help build and maintain routing tables on each router.
In Figure 1.13, I've given you an example of a routing table. The routing table used in a router includes the following information:
Network addresses: Protocol-specific network addresses. A router must maintain a routing table for each routing protocol because each routing protocol keeps track of a network with a different addressing scheme (IP, IPv6, and IPX, for example). Think of it as a street sign in each of the different languages spoken by the residents who live on a particular street. So, if there were American, Spanish, and French folks on a street named Cat, the sign would read Cat/Gato/Chat.





The Data Link Layer
The Data Link layer provides the physical transmission of the data and handles error notification, network topology, and flow control. This means that the Data Link layer will ensure that messages are delivered to the proper device on a LAN using hardware addresses and will translate messages from the Network layer into bits for the Physical layer to transmit.
The Data Link layer formats the message into pieces, each called a data frame, and adds a customized header containing the hardware destination and source address. This added information forms a sort of capsule that surrounds the original message, in much the same way that engines, navigational devices, and other tools were attached to the lunar modules of the Apollo project. These various pieces of equipment were useful only during certain stages of space flight and were stripped off the module and discarded when their designated stage was complete. Data traveling through networks is similar.
Figure 1.15 shows the Data Link layer with the Ethernet and IEEE specifications. When you check it out, notice that the IEEE 802.2 standard is used in conjunction with and adds functionality to the other IEEE standards.
FIGURE 1.15 The Data Link layer
It's important for you to understand that routers, which work at the Network layer, don't care at all about where a particular host is located. They're only concerned about where networks are located and the best way to reach them, including remote ones. Routers are totally obsessive when it comes to networks. And for once, this is a good thing! It's the Data Link layer that's responsible for the actual unique identification of each device that resides on a local network.
For a host to send packets to individual hosts on a local network as well as transmit packets between routers, the Data Link layer uses hardware addressing. Each time a packet is sent between routers, it's framed with control information at the Data Link layer, but that information is stripped off at the receiving router and only the original packet is left completely intact. This framing of the packet continues for each hop until the packet is finally delivered to the correct receiving host. It's really important to understand that the packet itself is never altered along the route; it's only encapsulated with the type of control information required for it to be properly passed on to the different media types.
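A toy sketch makes that encapsulate-and-strip cycle explicit; the MAC addresses are made up, and a plain string stands in for the untouched packet:

def frame(packet, src_mac, dst_mac):
    # Encapsulate: wrap a fresh hardware header around the packet for
    # this one link only.
    return {"src": src_mac, "dst": dst_mac, "payload": packet}

def deframe(data_frame):
    # Decapsulate at the next router: strip the header, keep the packet.
    return data_frame["payload"]

packet = "IP packet: 10.1.1.5 -> 192.168.3.9"  # never altered en route
hop1 = frame(packet, "AA:AA:AA:AA:AA:AA", "BB:BB:BB:BB:BB:BB")
packet = deframe(hop1)  # router 1 strips the first frame
hop2 = frame(packet, "BB:BB:BB:BB:BB:BB", "CC:CC:CC:CC:CC:CC")
assert deframe(hop2) == "IP packet: 10.1.1.5 -> 192.168.3.9"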






The Physical Layer
Finally arriving at the bottom, we find that the Physical layer does two things: It sends bits and receives bits. Bits come only in values of 1 or 0, a Morse code with numerical values. The Physical layer communicates directly with the various types of actual communication media. Different kinds of media represent these bit values in different ways. Some use audio tones, while others employ state transitions: changes in voltage from high to low and low to high. Specific protocols are needed for each type of media to describe the proper bit patterns to be used, how data is encoded into media signals, and the various qualities of the physical media's attachment interface.
The Physical layer specifies the electrical, mechanical, procedural, and functional requirements for activating, maintaining, and deactivating a physical link between end systems. This layer is also where you identify the interface between the data terminal equipment (DTE) and the data communication equipment (DCE). (Some old phone-company employees still call DCE data circuit-terminating equipment.) The DCE is usually located at the service provider, while the DTE is the attached device. The services available to the DTE are most often accessed via a modem or channel service unit/data service unit (CSU/DSU).
The Physical layer's connectors and different physical topologies are defined by the OSI as standards, allowing disparate systems to communicate. The CCNA objectives are only interested in the IEEE Ethernet standards.
Hubs at the Physical Layer
A hub is really a multiple-port repeater. A repeater receives a digital signal, reamplifies or regenerates that signal, and then forwards the digital signal out all active ports without looking at any data. An active hub does the same thing. Any digital signal received from a segment on a hub port is regenerated or reamplified and transmitted out all ports on the hub. This means all devices plugged into a hub are in the same collision domain as well as in the same broadcast domain.












Ethernet Networking
Ethernet is a contention media access method that allows all hosts on a network to share the same bandwidth of a link. Ethernet is popular because it's readily scalable, meaning that it's comparatively easy to integrate new technologies, such as Fast Ethernet and Gigabit Ethernet, into an existing network infrastructure. It's also relatively simple to implement in the first place, and with it, troubleshooting is reasonably straightforward. Ethernet uses both Data Link and Physical layer specifications, and this section of the chapter will give you both the Data Link layer and Physical layer information you need to effectively implement, troubleshoot, and maintain an Ethernet network.
Ethernet networking uses Carrier Sense Multiple Access with Collision Detection (CSMA/CD), a protocol that helps devices share the bandwidth evenly without having two devices transmit at the same time on the network medium. CSMA/CD was created to overcome the problem of the collisions that occur when packets are transmitted simultaneously from different nodes. And trust me, good collision management is crucial, because when a node transmits in a CSMA/CD network, all the other nodes on the network receive and examine that transmission. Only bridges and routers can effectively prevent a transmission from propagating throughout the entire network!
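When a collision does happen, stations recover with truncated binary exponential backoff: each one waits a random number of slot times drawn from a window that doubles with every successive collision. A compact Python sketch of that schedule:

import random

SLOT_TIME_US = 51.2  # slot time for 10Mbps Ethernet, in microseconds

def backoff_delay(collision_count):
    # The window doubles per collision, capped at 2**10 slots; after 16
    # collisions the station gives up and discards the frame.
    if collision_count > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    window = 2 ** min(collision_count, 10)
    return random.randrange(window) * SLOT_TIME_US

for attempt in range(1, 5):
    print(f"collision {attempt}: wait {backoff_delay(attempt):.1f} us")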







Half- and Full-Duplex Ethernet
Half-duplex Ethernet is defined in the original 802.3 Ethernet; Cisco says it uses only one wire pair with a digital signal running in both directions on the wire. Certainly, the IEEE specifications discuss the process of half duplex somewhat differently, but what Cisco is talking about is a general sense of what is happening here with Ethernet.
It also uses the CSMA/CD protocol to help prevent collisions and to permit retransmitting if a collision does occur. If a hub is attached to a switch, it must operate in half-duplex mode because the end stations must be able to detect collisions. Half-duplex Ethernet, typically 10BaseT, is only about 30 to 40 percent efficient as Cisco sees it, because a large 10BaseT network will usually only give you 3 to 4Mbps, at most.
But full-duplex Ethernet uses two pairs of wires instead of one wire pair like half duplex. And full duplex uses a point-to-point connection between the transmitter of the transmitting device and the receiver of the receiving device. This means that with full-duplex data transfer, you get a faster data transfer compared to half duplex. And because the transmitted data is sent on a different set of wires than the received data, no collisions will occur.
The reason you don't need to worry about collisions is because now it's like a freeway with multiple lanes instead of the single-lane road provided by half duplex. Full-duplex Ethernet is supposed to offer 100 percent efficiency in both directions; for example, you can get 20Mbps with a 10Mbps Ethernet running full duplex, or 200Mbps for Fast Ethernet. But this rate is something known as an aggregate rate, which translates as "you're supposed to get" 100 percent efficiency. No guarantees, in networking as in life.
Full-duplex Ethernet can be used in three situations:
With a connection from a switch to a host
With a connection from a switch to a switch
With a connection from a host to a host using a crossover cable

Sunday, February 24, 2013

Police Fighters

Game Description:

Sun City is the city of the future, but a rising crime rate has made it a dangerous place to live. Extremists block roads, criminals crash cars and hackers override the city's automatic traffic control. The Special Supercar Police Unit was organized to blow the whistle on these practices. Download this free full version game today and help the police battle crime.
Free Game Features:
- Real action game, vehicular combat;
- Modern 3D graphics;
- Extraordinarily addictive gameplay;
- Unique power-ups;
- Gorgeous effects, huge explosions!
- Original soundtrack and powerful sound effects;
- Free full version game without any limitations.

System Requirements:

  • - Windows 95/98/XP/ME/Vista/7;
  • - Processor 800 MHz or better;
  • - RAM: minimum 1024MB;
  • - DirectX 9.0 or higher;
  • - DirectX compatible sound board;
  • - Easy game removal through the Windows Control Panel.

No download? In this case CLICK HERE
File Size: 64 Mb

Sudden Strike

Game Description:

During World War II, Crimea was the scene of some of the bloodiest battles. Finding yourself in a battle-scarred world, you will assume the role of a military commander in charge of an occupied town seeking to survive and fight. Download the free full version game today and experience the intensity and adrenaline rush of real combat.
Free Game Features:
- Award-winning strategy game;
- More than 50 missions;
- Increased degree of realism: completely overhauled unit values;
- More than 50 new units;
- Fight battles in three difficulty levels;
- Controllable boats allow you to cross rivers;
- Supplies and additional fire power provided by controllable railways;
- Each motorized unit possesses a separately controllable crew and can be captured;
- Weather system: March troops through rain and snow;
- Campaigns set in summer & winter landscapes;
- Free full version game without any limitations.

System Requirements:

  • - Windows 95/98/XP/ME/Vista/7;
  • - Processor 800 MHz or better;
  • - RAM: minimum 1024MB;
  • - DirectX 9.0 or higher;
  • - DirectX compatible sound board;
  • - Easy game removal through the Windows Control Panel.
No download? In this case CLICK HERE
File Size: 182 Mb

Deadly Race

Game Description:

At the beginning of the 21st century, a global energy crisis sparked World War III. Many cities were destroyed, and people abandoned them in search of a better life. Now these abandoned cities are used as race tracks for a new kind of popular entertainment: Apocalypse Motor Racing. Daredevils from different corners of the world drive to victory through the wrecked cars of their competitors in this deadly racing. Download the free full version game today and try to win Apocalypse Motor Racing, or just try to survive!

System Requirements:

  • - Windows 95/98/XP/ME/Vista/7;
  • - Processor 800 MHz or better;
  • - RAM: minimum 1024MB;
  • - DirectX 9.0 or higher;
  • - DirectX compatible sound board;
  • - Easy game removal through the Windows Control Panel.

No download? In this case CLICK HERE
File Size: 44 Mb 

Star Defender 4

Game Description:

Star Defender 4 is a breathtaking space shooter that still has all the best features of the Star Defender series. You'll be dazzled by splendidly crafted graphics and music. Face tons of new enemies, each with a unique style of behavior and new ways of attacking, in Star Defender 4. Use the new Star Defender 4 weapons: machine-gun, saw, flame thrower, acid bomb and cutter! And also the best weapons from the previous game: parasitron, lasers, infector, ball lightning, missiles, homing laser and barriers! Blast through more than 100 levels and 8 original missions, and of course face huge, impressive Bosses at the end of every mission! You'll get real pleasure playing Star Defender 4!
If you need a breathtaking game that won't let you put the mouse down even to grab the cup of coffee that has already gone cold... If you want to feel like Intergalactic Disinfector Number One, craving vengeance on those disgusting, annoying insects... If you long for real war, then Star Defender 4 is for you! Download the free full version game today and hold your breath...

System Requirements:

  • - Windows 95/98/XP/ME/Vista/7;
  • - Processor 800 MHz or better;
  • - RAM: minimum 1024MB;
  • - DirectX 9.0 or higher;
  • - DirectX compatible sound board;
  • - Easy game removal through the Windows Control Panel.
No download? In this case CLICK HERE
File Size: 13 Mb