I'm talking about him claiming something that happens every day never happens. Go back and read his posts. Your comprehension needs work.
What has he claimed doesn't happen, yet you see every day?
@Mearen1911 - I'm sure I'm not the only one still wondering what exactly you're talking about.
That's twice you mentioned a mysterious "something" that happens every day, that Mischkag claimed was impossible.
You want us all to go read all his posts and try to puzzle out exactly which claim you're referring to?
Heck, I'm no fanboy of DICE, so I might even agree with you... if I knew what the heck you're talking about.
Enlighten us, please.
Either find some common ground and reply to each other respectfully, or don't reply to each other at all.
Nobody is saying you have to agree with each other. We appreciate and encourage debate (it's the backbone of any forum), but when it comes down to just insulting each other, it makes you look bad.
We're all adults here; let's behave like it, yes?
I'm really trying. I don't think I was insulting my fellow forum user at all. I'd rather we avoid the tit for tat undercutting of opinions that has been prevalent in this thread.
I really just disagree with slandering the one netcode dev that has been honest with us.
A hitreg bug would be something specific to a given instance:
Switching guns while reloading, and then having no hitreg.
Entering a vehicle as one class and then exiting, and having no hitreg.
Or something like the previous hitreg bug that made some players unable to hit anyone without restarting the game, for example!
A general hitreg issue would be that you sometimes don't get a hit on someone.
Maybe one hit out of five isn't counted.
Drop that "crazy hostility" you've got going.
I'm not going to play semantics with you, and I'm not going to allow semantics to dismiss the feedback of myself and others, you're literally making nonsense claims. Bugs, issues, problems.... WHO CARES? These terms are not the point.
I'm also not the one declaring that the dev has no reason to further interact.
It seems you have dismissed the idea of legit hit-reg bugs and are now blaming suppression and spread. I'm not going to follow a narrative simply out of ease.
If you don't think the netcode or legit bugs are to blame, that's totally fine, not a problem. What we don't need is you dismissing feedback.
I'm not dismissing feedback!
I'm not saying there are no hitreg issues!
I do doubt that mischkag can do much more, under the circumstances!
A packet loss limit and a ping limit would be an easy way to see if issues remain.
But everything he has done to make it fairer for low-pingers has resulted in hate from everyone with a bad connection. What more is there that he can do!?
He hasn't "done" anything. He doesn't even understand the problem. He claimed something that happens every single day here is impossible. He needs to be replaced. THAT is why this problem isn't getting fixed. THAT is why netcode issues have existed for at least three Battlefield games. You're giving software developers way more credit than they deserve here. They're not infallible, but they do need to admit to their limitations. DICE also needs to recognize this.
Lol, ignorance breeds ignorant opinions.
Don't hate on the dude who is fighting for low ping.
It's up to the leads, not the designer. That's how it works in every facet of game dev.
Your insults prove you're not here to help, so don't project. What you're claiming isn't mutually exclusive. You do know that, right? Also, I'm not hating on anyone, I'm pointing out DICE's failures. If you comprehended what you read, you would know that.
What did I state that is not mutually exclusive?
Being ignorant isn't preferable, but to acknowledge ignorance is not necessarily an insult. I didn't mean to insult you, but you blaming the designer is not beneficial (there is a netcode team, and they answer to production leads, who answer to EA). Just trying to shed some light, as knowledge is power.
You claim the dev that interacted here doesn't understand, that's not true.
DICE recognizes that a large portion of the audience has poor connections. Due to the backlash over the initial 100ms cap, there isn't much chance of DICE clamping down on poor connections again. This is unfortunate, as there are tangible issues regarding player desync still in the live client, but that doesn't mean things are being ignored.
You blame all of the netcode problems of the last three games on one dev. I'd love to see the factual evidence you have to back up that opinion. From my perspective, it seems like a purely negative assumption.
There you go with more insults. Grow up and don't bother replying to me again.
Those are not insults.
Sorry you're sensitive to facts.
Denying that you tried to insult me is pathetic. You simply refuse to grow up. That's even worse.
So, with the In the Name of the Tsar patch, hit detection for me is a lot more consistent.
Something like 1 to 2 WTF moments a game, down from 10, and I'm able to shoot knee-sliders 9/10 times, so big improvement.
I do feel though that sometimes I'm getting hits when I miss, but nowhere near as much, and those hits might just be a lagged enemy moving into the bullet on the server, as I see them fall in the direction of the miss?
It also seems like sometimes the death notification is really delayed and I find myself wanting to empty a clip to make sure.
Occasionally I'll put 3 bullets into someone with a medic rifle and the third won't register, so again I'm emptying clips instead of relying on 3 shots per kill in most situations... but I have been using the 1906 a bit.
The hit/miss performance of the 1895 Trench feels better (which is what I'm basing my perception of increased consistency on).
Anyway, whatever you did has made it better, thanks!
There are acknowledged server issues that could be part of it?
I think "theory" is more apt.
If you break down the protocols, it makes no sense to delay or kick TCP, because it will resend.
With ICMP this will show up if you "hinder it". They have other ways around this, though, by simply avoiding measuring it on hops en route (hence the blank nodes). On a side note, most of these show up with UDP ping, and the majority show issues. I don't buy the ISPs' explanations, because when I'm playing and I see issues, I also have problems with games; when there are no issues, games play well.
Back to ICMP: if people can see problems, they will complain, so why hinder it?
That leaves good old UDP.
If a network is getting close to the level where it has to do something, what do they do? Just leave it and hope it doesn't happen?
No, they re-route. But are they re-routing all traffic?
No, IMO.
I'm seeing differences in graphs between ICMP and UDP on certain parts of routes.
The net result is a different latency for ICMP compared to UDP.
This isn't conspiracy.
You seem to be somewhat versed in TCP, but I recommend a refresher.
TCP vs UDP
TCP is a connection-oriented stream over an IP network. It guarantees that all sent packets reach the destination in the correct order. This implies the use of acknowledgement packets (ACKs) sent back to the sender, and automatic retransmission, causing additional delay and generally less efficient transmission than UDP.
UDP is a connectionless protocol. Communication is datagram-oriented. Integrity is guaranteed only for the single datagram. Datagrams may reach the destination out of order, or not arrive at all. It is more efficient than TCP because it uses no ACKs. It's generally used for real-time communication, where a small packet loss rate is preferable to the overhead of a TCP connection.
TCP requires a three-packet handshake with ACKs to establish a connection before any application data is sent. When a packet is lost, the connection bottlenecks until the lost packet's retransmission is received and acknowledged. This adds tremendous amounts of lag: during the retransmission, delivery of subsequent data to the application stalls, and any actions you take in the meantime are backlogged for later delivery. That backlog buffer can only be so large.
If the packet loss spike is severe, the connection will be severed, resulting in a reconnection process of another handshake with acknowledgements. AGAIN, more LAG. If the packet loss exceeds a specific amount (system dependent), client-server desync will happen. This will result in a complete disconnection from the game.
Thus the reasoning for using UDP over TCP. With UDP, security and reliability are added at the application layer.
TCP protocol use: HTTP, HTTPS, FTP, SMTP, Telnet
UDP protocol use: DNS, DHCP, TFTP, SNMP, RIP, VoIP
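The difference is visible directly in Python's standard `socket` module. The sketch below is self-contained over loopback (payloads and ports are made up for illustration): the UDP socket can send immediately with no handshake and no ACKs, while the TCP client must `connect()` (the three-way handshake) before any data moves.

```python
import socket

# UDP: no handshake. A datagram can be sent immediately; here one
# socket sends to itself over loopback to keep the example runnable.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.bind(("127.0.0.1", 0))           # OS picks a free port
udp.sendto(b"player input", udp.getsockname())   # fire-and-forget, no ACK
data, _ = udp.recvfrom(1024)         # arrives (or not) as one whole datagram
print(data)                          # b'player input'

# TCP: connect() performs the handshake (SYN, SYN-ACK, ACK) before any
# application data is sent; every segment is ACKed and retransmitted on
# loss, trading latency for guaranteed in-order delivery.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())       # handshake happens here
conn, _ = srv.accept()
cli.sendall(b"player input")
tcp_data = conn.recv(1024)
print(tcp_data)                      # b'player input'
cli.close(); conn.close(); srv.close(); udp.close()
```

Real packet loss and retransmission delays obviously don't show up on loopback; the point is only the shape of the two APIs.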
ICMP requests are routed to a specific box at the datacenter in which the game server resides. All pings (ICMP) for all clients, regardless of the application, are done this way, at every datacenter.
For clarity: any request using the ICMP protocol, upon reaching the datacenter, is routed to a pingsite server. The pingsite handles ALL ICMP requests, regardless of whether it's for a game server, web server, database, etc. This is done to reduce the connection load and resource usage of the application server. Once data reaches the datacenter, it's mere MICROseconds from DNS to box.
The TERM latency is generally synonymous with ping, but that is not the case when it comes to games.
Latency (UDP) is not equal to PING (ICMP). It is the round trip of a datagram plus server-side processing time, during which the client's data is processed. Latency is affected by server load.
So of course there are going to be differences when comparing two separate measurements. Ping is ICMP; latency is UDP (round trip plus processing of a datagram).
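What an application-level UDP latency measurement looks like can be sketched in a few lines of Python. This is a hypothetical illustration, not the game's actual code: unlike an ICMP ping (answered by the OS kernel or a dedicated ping box), this times the actual datagram round trip, including whatever the far end does before replying. Here the socket echoes to itself over loopback so the example runs standalone.

```python
import socket
import time

def udp_rtt_ms(sock, addr, payload=b"ping", timeout=1.0):
    """Round-trip time of one UDP datagram, in milliseconds.

    Against a real game server this would include server-side
    processing time, which a plain ICMP ping never sees.
    """
    sock.settimeout(timeout)
    t0 = time.perf_counter()
    sock.sendto(payload, addr)
    sock.recvfrom(2048)              # blocks until the echo comes back
    return (time.perf_counter() - t0) * 1000.0

# Self-contained demo: send to our own loopback address.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("127.0.0.1", 0))
rtt = udp_rtt_ms(s, s.getsockname())
print(f"UDP round trip: {rtt:.3f} ms")
s.close()
```

Pointed at a server that actually simulates a game tick before echoing, this number would drift with server load, which is exactly the ping-vs-latency gap described above.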
Of course I know about the two protocols.
I was merely pointing out what ISPs are very likely doing to get around the problem net neutrality has brought them.
You can't simply pick on traffic types any more.
But what about protocols? And if they can hide it, hey, even better... obviously they haven't hidden it well enough.
Also, an update:
When I have had dodgy gameplay with my new ISP, I've checked the ICMP routes and the UDP routes.
Not surprisingly, they aren't the same...
Now, if they are measuring latency with ICMP, can anyone tell me what the obvious flaw is with the observation in the above paragraph?
THEY ARE NOT MEASURING LATENCY WITH ICMP!
The network graph is the only in-game UI that shows latency. Latency is based on the protocol the game uses, in this case UDP.
Question re: ping cap.
If it's below 200ms for the European servers, then how does a 700ms-ping player get on and stay on?
BTW, this game had no ill effects.
In fact, it doesn't matter how many 100ms+ players are on; it doesn't make a difference.
The only thing that makes a difference is if I'm getting raised ping on certain nodes en route, even if the end result is good.
IMO, EA aren't measuring UDP latency.
They are using ICMP.
This needs to change.
Some people think they have good connections when they don't.
The 200ms threshold is not a ping kicker. The 200ms threshold stipulates when a player's shots are handled strictly by the server.
Below threshold : (Client Reg -> Server Auth) Client determines a hit and sends a hit claim to the server. The server in turn arbitrates the hit claim. If validated the server awards the hit. Only shots the client deems a hit are sent to the server.
Above threshold : (Server reg & Auth) All shots fired by a client are sent to the server for arbitration. Every shot is scrutinized.
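The two arbitration paths described above can be sketched roughly like this. Everything here is hypothetical illustration: the names, fields, and boolean stand-ins for the server's rewind check are invented, not taken from the game's code.

```python
from dataclasses import dataclass

PING_THRESHOLD_MS = 200  # above this, the server alone handles shots

@dataclass
class Shot:
    client_claims_hit: bool   # did the client think it connected?
    server_says_hit: bool     # toy stand-in for the server's rewind check

@dataclass
class Shooter:
    ping_ms: int

def arbitrate(shooter: Shooter, shot: Shot) -> bool:
    """Return True if the hit is awarded."""
    if shooter.ping_ms < PING_THRESHOLD_MS:
        # Client reg -> server auth: only shots the client already
        # deems hits are sent up; the server then validates them.
        return shot.client_claims_hit and shot.server_says_hit
    # Server reg & auth: every shot is sent; client claims are ignored
    # and the server's arbitration alone decides.
    return shot.server_says_hit

print(arbitrate(Shooter(50),  Shot(True,  True)))   # True
print(arbitrate(Shooter(50),  Shot(False, True)))   # False (shot never sent)
print(arbitrate(Shooter(300), Shot(False, True)))   # True (server decides)
```

The practical consequence of the split: a low-ping player's misses never reach the server at all, while a high-ping player's every shot is re-checked server-side.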
Have you read any of this thread or the actual patch notes?
Then something is up with their measurements.
When I do get iffy gameplay, I check PingPlotter and see a dramatic change in routes on UDP.
I check my "LATENCY" on the net graph and I see very little variance.
Yet on FIFA, if it's iffy, I see 3 bars instead of 4, and it might jump between red, orange, or green pre-match.
I think the fact they don't bother showing a true packet loss figure is very suspect.
I think EA need to be transparent.
I don't believe the net graph latency figure. It simply cannot be correct.
Not all, because I'm testing 2 games... plus other things.
I'm more interested in why the "LATENCY" bears no resemblance to PingPlotter or to how the game plays.
If it is 50ms in all games, I'd expect it to be the same every time, even more so since, as you kindly pointed out, it's measured with UDP, lol.
I'm seeing no major differences on the net graphs but variable gameplay. How can this be?
I also see a few players with 100ms ping; most are between 20 and 100.
My guess is packet loss is a major factor and EA are hiding it.
The packet that is lost is resent by the client. When that information is eventually received by the server, it is applied late to whoever the player with packet loss is interacting with. Being held by damage, and late, stacked damage, are the effects of this. It causes large amounts of desync in game, and is basically what we have been complaining about for 257 pages.
Then decompile your client and look at the code. If they're adjusting latency values, you'll see the modification just before it is sent to the UI.
If someone is getting 1% UDP packet loss and another is getting 5%, what are EA doing about this?
Do you know anything about networking? If the routes, distances, network traffic etc are different obviously the outcomes will be different.
A player in NY will not have the same latency/loss as a player in VA for a server in DC. The same even applies to players living right next door to each other playing on the same server.
Not sure how they could compensate when they wouldn't know what instruction was in that packet, as the ones in front and behind arrived.
The client holds a copy of each sent packet in memory for roughly 1 second. This is a direct copy of the ack redundancy system I noted in a previous post. If the client doesn't receive an ack from the server within X ms (offset + UTT) for that packet it will resend.
High-latency players (above threshold) do not resend data of any sort. The server extrapolates for loss. The extrapolated position is bound to the player's game history as though the data was sent by them. Rightly, hit arbitration will be applied to the extrapolated data. HP shots that arrive late or are lost are outright discarded and not resent (an automatic MISS). Also, the FHT is clamped, meaning it will only rewind history by a set number of ms. That time frame does not take your latency into consideration.
Refer to pg 62.
Side note: the Network Graph only shows downstream packet loss. Refer to pg 69.
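The keep-a-copy-and-resend scheme described above (hold each sent packet for roughly a second, resend if no ACK arrives within the deadline) could be sketched like this. All names, the retention window, and the resend deadline are invented for illustration; only the shape of the mechanism comes from the post.

```python
RETENTION_S = 1.0   # roughly how long sent packets are kept (per the post)

class AckTracker:
    """Toy sketch of client-side application-layer reliability over UDP."""

    def __init__(self, resend_after_s=0.1):
        self.resend_after = resend_after_s   # stand-in for offset + UTT
        self.pending = {}                    # seq -> (payload, time sent)

    def sent(self, seq, payload, now):
        """Record a copy of every packet we send."""
        self.pending[seq] = (payload, now)

    def acked(self, seq):
        """Server acknowledged this packet: drop our copy."""
        self.pending.pop(seq, None)

    def due_for_resend(self, now):
        """Packets whose ACK deadline passed and are still retained."""
        out = []
        for seq, (payload, t) in list(self.pending.items()):
            if now - t > RETENTION_S:
                del self.pending[seq]        # too old: give up entirely
            elif now - t >= self.resend_after:
                out.append((seq, payload))
        return out

tr = AckTracker(resend_after_s=0.1)
tr.sent(1, b"move", now=0.0)
tr.sent(2, b"fire", now=0.0)
tr.acked(1)                           # server acknowledged packet 1
print(tr.due_for_resend(now=0.2))     # [(2, b'fire')] -- packet 2 resent
```

This is exactly the "security and reliability at the application layer" trade mentioned earlier: UDP itself never resends, so the game has to do its own bookkeeping for the packets it cares about.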
There are acknowledged server issues that could be part of it?
No fix as of yet.
http://xboxdvr.com/gamer/oJU5T1No/video/36626767#t=5
Of course I know about the two protocols.
I was merely pointing out what ISPs are very likely to be doing to get round the problem net neutrality has brought them.
You can't simply pick on traffic types any more.
But what about protocols? And if they can hide it, wey hey, even better... obviously they haven't hidden it well enough.
Also an update:
When I have had dodgy gameplay with my new ISP, I've checked the ICMP routes and the UDP routes.
Not surprisingly, they aren't the same...
Now, if they are measuring latency with ICMP... can anyone tell me what the obvious flaw is, given the observation in the above paragraph?
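The flaw being hinted at: routers can carry ICMP and UDP along different routes and queue them differently, so an ICMP ping can report a latency the game's UDP traffic never sees. A minimal sketch of probing round-trip time over UDP itself, using a hypothetical local echo server as a stand-in for the game server (all names here are illustrative, not anything from the game):

```python
# Sketch: measure RTT over UDP rather than ICMP, since the two
# protocols may take different routes through the network.
import socket
import threading
import time

def udp_echo_server(sock):
    data, addr = sock.recvfrom(64)
    sock.sendto(data, addr)  # echo the probe straight back

def udp_rtt_ms(server_addr):
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.settimeout(2.0)
    start = time.perf_counter()
    client.sendto(b"probe", server_addr)
    client.recvfrom(64)  # wait for the echo
    client.close()
    return (time.perf_counter() - start) * 1000.0

# Local echo server standing in for the real game server.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))  # any free port
threading.Thread(target=udp_echo_server, args=(server,), daemon=True).start()

rtt = udp_rtt_ms(server.getsockname())
print(f"UDP RTT: {rtt:.2f} ms")
```

An ICMP probe (as `ping` uses) would require raw sockets and could traverse a different path entirely, which is exactly the discrepancy PingPlotter shows.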
If it's below 200ms for the European servers, then how does a 700ms ping player get on and stay on?
BTW this game had no ill effects.
In fact it doesn't matter how many 100ms+ players are on, it doesn't make a difference.
The only thing that makes a difference is if I'm getting raised ping on certain nodes en route... even if the end result is good.
IMO EA aren't measuring UDP latency.
They are using ICMP.
This needs to change.
Some people think they have good connections when they don't.
THEY ARE NOT MEASURING LATENCY WITH ICMP!!!!!!!!!!!!!!!!!
The network graph is the only UI in game that shows latency. Latency is based on the protocol the game uses, which in this case is UDP.
The 200ms threshold is not a ping kicker; it stipulates when a player's shots are handled strictly by the server.
Below threshold (Client Reg -> Server Auth): the client determines a hit and sends a hit claim to the server. The server in turn arbitrates the hit claim. If validated, the server awards the hit. Only shots the client deems a hit are sent to the server.
Above threshold (Server Reg & Auth): all shots fired by a client are sent to the server for arbitration. Every shot is scrutinized.
Have you read any of this thread or the actual patch notes?
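The two paths described above can be sketched as a single routing decision. This is my own illustrative sketch of the described behavior, not DICE's code; the function name and 200 ms constant follow the post:

```python
# Sketch of the threshold-based hit registration described above.
LATENCY_THRESHOLD_MS = 200

def server_handles_shot(client_latency_ms, client_claims_hit):
    """Return True if this shot must be sent to the server for arbitration."""
    if client_latency_ms > LATENCY_THRESHOLD_MS:
        # Above threshold: every shot goes to the server, hit or miss.
        return True
    # Below threshold: the client registers hits locally and only
    # claimed hits are sent up for server-side validation.
    return client_claims_hit

# Below threshold, a miss never even reaches the server:
assert server_handles_shot(50, client_claims_hit=False) is False
assert server_handles_shot(50, client_claims_hit=True) is True
# Above threshold, everything is scrutinized server-side:
assert server_handles_shot(250, client_claims_hit=False) is True
```

Note that in both paths the server has final authority; the threshold only changes which shots it ever hears about.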
Then something is up with their measurements.
When I do get iffy gameplay, I check PingPlotter and see a dramatic change in routes on UDP.
I check my "LATENCY" on the net graph and I see very little variance.
Yet on FIFA, if it's iffy I see 3 bars instead of 4, and it might jump between red, orange or green pre-match.
I think the fact they don't bother showing a true packet loss figure is very suspect.
I think EA need to be transparent.
I don't believe the net graph latency figure... it simply cannot be correct.
Not all because I'm testing 2 games... plus other things.
I'm more interested in why the "LATENCY" bears no resemblance to PingPlotter or to how the game plays.
If it is 50ms all games, I'd expect it to be the same every time... even more so as you kindly pointed out that it's tested with UDP lol.
I'm looking at no major differences on the net graphs but variable gameplay... how can this be?
I also see a few players with 100ms ping... most are between 20 and 100.
My guess is packet loss is a major factor and EA are hiding it.
If someone is getting 1% UDP packet loss and another is getting 5%, what are EA doing about this?
Compensating for the lost packets, purposefully ruining the game for everyone else.
Not sure how they could compensate when they wouldn't know what instruction was in that packet, given the ones in front of and behind it arrived.
Then decompile your client and look at the code. If they're adjusting latency values you'll see the modification just before it is sent to the UI.
Do you know anything about networking? If the routes, distances, network traffic, etc. are different, obviously the outcomes will be different.
A player in NY will not have the same latency/loss as a player in VA for a server in DC. The same even applies to players living right next door to each other, playing on the same server.
https://forums.battlefield.com/en-us/discussion/105245/ping-latency-jitter/p1
Explain what you think the compensation is and how it works. There may be a misunderstanding.
The client holds a copy of each sent packet in memory for roughly 1 second. This is a direct copy of the ack redundancy system I noted in a previous post. If the client doesn't receive an ack from the server within X ms (offset + UTT) for that packet, it will resend it.
High latency players (above threshold) do not resend data of any sort. The server extrapolates for loss. The extrapolated position is bound to the player's game history as though the data was sent by him/her. Rightly, hit arbitration will be applied to the extrapolated data. High-ping shots that arrive late or are lost are outright discarded and not resent (aka an automatic MISS). Also, the FHT is clamped, meaning it will only rewind history by a set number of ms. This time frame does not take your latency into consideration.
Refer to pg 62
Side note ... Network Graph only shows downstream packet loss. refer to pg 69.
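The resend buffer and the clamped rewind described above can be sketched roughly as follows. This is an illustrative model only, assuming the ~1 s buffer and the threshold from the post; the class name, the 100 ms clamp value, and the timing API are my own stand-ins, not the game's actual internals:

```python
# Sketch of the ack-redundancy resend buffer and clamped history rewind
# described above. Constants other than BUFFER_SECONDS are hypothetical.
import time

BUFFER_SECONDS = 1.0          # client keeps sent packets ~1 s for resend
LATENCY_THRESHOLD_MS = 200    # above this, no resends; server extrapolates
FHT_CLAMP_MS = 100            # hypothetical cap on how far history rewinds

class SendBuffer:
    def __init__(self):
        self.pending = {}  # seq -> (payload, sent_at)

    def record(self, seq, payload):
        self.pending[seq] = (payload, time.monotonic())

    def ack(self, seq):
        self.pending.pop(seq, None)  # server confirmed receipt; drop copy

    def due_for_resend(self, timeout_s, now=None):
        now = time.monotonic() if now is None else now
        # Copies older than the ~1 s window are discarded, not resent.
        expired = [s for s, (_, t) in self.pending.items()
                   if now - t > BUFFER_SECONDS]
        for s in expired:
            del self.pending[s]
        # Anything still unacked past the timeout gets resent.
        return [s for s, (_, t) in self.pending.items() if now - t > timeout_s]

def rewind_ms(player_latency_ms):
    # The rewind is clamped: it does not scale with your latency.
    return min(player_latency_ms, FHT_CLAMP_MS)
```

The clamp is why very high-ping players lose out: a 300 ms player's history is only rewound to the cap, so from their perspective some otherwise-valid shots will miss.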