Ruby setters, whether created by (c)attr_accessor or manually, seem to be the only methods that need self. qualification when accessed within the class itself. This seems to put Ruby alone in the world of languages:
- All methods need self/this (like Perl, and I think Javascript)
- No methods require self/this (C#, Java)
- Only setters need self/this (Ruby?)
The best comparison is C# vs Ruby, because both languages support accessor methods which work syntactically just like class instance variables: foo.x = y, y = foo.x. C# calls them properties.
Here's a simple example; the same program in Ruby then C#:
class A
  def qwerty; @q; end                      # manual getter
  def qwerty=(value); @q = value; end      # manual setter, but attr_accessor is same
  def asdf; self.qwerty = 4; end           # "self." is necessary in ruby?
  def xxx; asdf; end                       # we can invoke nonsetters w/o "self."
  def dump; puts "qwerty = #{qwerty}"; end
end

a = A.new
a.xxx
a.dump
Take away the self. from self.qwerty = 4 and it fails (Ruby 1.8.6 on Linux & OS X); a sketch of what goes wrong follows the C# version below. Now C#:
using System;

public class A {
  public A() {}
  int q;
  public int qwerty {
    get { return q; }
    set { q = value; }
  }
  public void asdf() { qwerty = 4; }  // C# setters work w/o "this."
  public void xxx()  { asdf(); }      // are just like other methods
  public void dump() { Console.WriteLine("qwerty = {0}", qwerty); }
}

public class Test {
  public static void Main() {
    A a = new A();
    a.xxx();
    a.dump();
  }
}
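To make "it fails" concrete, here is my reading of the Ruby behavior (no error is raised; the setter simply never runs, so @q stays nil):

class A
  def qwerty; @q; end
  def qwerty=(value); @q = value; end
  def asdf; qwerty = 4; end                # no "self.": creates a local named qwerty instead
  def xxx; asdf; end
  def dump; puts "qwerty = #{qwerty}"; end
end

a = A.new
a.xxx
a.dump    # prints "qwerty = " because @q was never assigned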
Question: Is this true? Are there other occasions besides setters where self is necessary? I.e., are there other occasions where a Ruby method cannot be invoked without self?
There are certainly lots of cases where self becomes necessary. This is not unique to Ruby, just to be clear:
using System;

public class A {
  public A() {}
  public int test { get { return 4; } }
  public int useVariable() {
    int test = 5;
    return test;
  }
  public int useMethod() {
    int test = 5;
    return this.test;
  }
}

public class Test {
  public static void Main() {
    A a = new A();
    Console.WriteLine("{0}", a.useVariable()); // prints 5
    Console.WriteLine("{0}", a.useMethod());   // prints 4
  }
}
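For what it's worth, here is what I believe the Ruby analogue of that shadowing example looks like (a sketch, with snake_case names mirroring useVariable/useMethod above; the local wins unless you name the receiver explicitly):

class A
  def test; 4; end       # getter-style method
  def use_variable
    test = 5
    test                 # the local variable shadows the method: 5
  end
  def use_method
    test = 5
    self.test            # the explicit receiver forces the method call: 4
  end
end

a = A.new
puts a.use_variable   # prints 5
puts a.use_method     # prints 4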
The same ambiguity is resolved in the same way. But, subtle as it is, I'm asking about the case where
- a method has been defined, and
- no local variable has been defined, and
we encounter

qwerty = 4

which is ambiguous: is this a method invocation or a new local variable assignment?
@Mike Stone
Hi! I understand and appreciate the points you've made and your example was great. Believe me when I say, if I had enough reputation, I'd vote up your response. Yet we still disagree:
- on a matter of semantics, and
- on a central point of fact
First I claim, not without irony, we're having a semantic debate about the meaning of 'ambiguity'.
When it comes to parsing and programming language semantics (the subject of this question), surely you would admit that the notion of 'ambiguity' spans a broad spectrum. Let's just adopt some random notation:
1. ambiguous: lexical ambiguity (lex must 'look ahead')
2. Ambiguous: grammatical ambiguity (yacc must defer to parse-tree analysis)
3. AMBIGUOUS: ambiguity resolvable only by knowing everything at the moment of execution
(and there's junk between 2 and 3 too). All these categories are resolved by gathering more contextual info, looking more and more globally. So when you say,
"qwerty = 4" is UNAMBIGUOUS in C# when there is no variable defined...
I couldn't agree more. But by the same token, I'm saying
"qwerty = 4" is un-Ambiguous in ruby (as it now exists)
"qwerty = 4" is Ambiguous in C#
And we're not yet contradicting each other. Finally, here's where we really disagree: Either ruby could or could not be implemented without any further language constructs such that,
For "qwerty = 4," ruby UNAMBIGUOUSLY invokes an existing setter if there
is no local variable defined
You say no. I say yes: another ruby could exist which behaves exactly like the current one in every respect, except that "qwerty = 4" defines a new variable when no setter and no local exists, invokes the setter if one exists, and assigns to the local if one exists. I fully accept that I could be wrong; in fact, a reason why I might be wrong would be interesting.
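Spelled out as a toy sketch (my own illustration of the proposed resolution order, simulated by hand with respond_to? and send; not a claim about any real implementation):

# hypothetical rule for a bare "name = value" inside an instance method
def hypothetical_assign(obj, name, value, locals)
  if locals.key?(name)                  # 1. a local already exists: assign to it
    locals[name] = value
  elsif obj.respond_to?("#{name}=")     # 2. else a setter exists: invoke it
    obj.send("#{name}=", value)
  else                                  # 3. else: create a new local
    locals[name] = value
  end
end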
Let me explain.
Imagine you are writing a new OO language with accessor methods that look like instance vars (like ruby & C#). You'd probably start with conceptual grammars something like:
var = expr // assignment
method = expr // setter method invocation
But the parser-compiler (not even the runtime) will puke, because even after all the input is grokked there's no way to know which grammar is pertinent. You're faced with a classic choice. I can't be sure of the details, but basically ruby does this:
var = expr // assignment (new or existing)
// method = expr, disallow setter method invocation without .
That is why it's un-Ambiguous, while C# does this:
symbol = expr // push 'symbol=' onto parse tree and decide later
// if local variable is def'd somewhere in scope: assignment
// else if a setter is def'd in scope: invocation
For C#, 'later' is still at compile time.
I'm sure ruby could do the same, but 'later' would have to be at runtime, because as ben points out you don't know until the statement is executed which case applies.
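To illustrate why 'later' would have to be runtime (a hypothetical example of mine, not from ben's answer): whether a setter exists can change while the program runs, so the very same source line could call for different treatment on different executions.

class A
  def asdf
    qwerty = 4   # today's Ruby: always a local assignment
    qwerty
  end
end

a = A.new
a.asdf                                   # no setter qwerty= exists at this point

A.class_eval { attr_accessor :qwerty }   # the setter comes into existence only now, at runtime

a.asdf                                   # a "decide at runtime" Ruby would have to treat
                                         # this same line as a setter call from here on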
My question was never intended to mean "do I really need the 'self.'?" or "what potential ambiguity is being avoided?" Rather, I wanted to know why this particular choice was made. Maybe it's not performance. Maybe it just got the job done, or it was considered best to always allow a one-liner local to override a method (a pretty rare requirement)...
But I'm sort of suggesting that the most dynamic language might be the one which postpones this decision the longest, and chooses semantics based on the most contextual info: so if you have no local and you have defined a setter, it would use the setter. Isn't this why we like ruby, smalltalk, and objc: method invocation is decided at runtime, offering maximum expressiveness?
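(For completeness, and not something from the original discussion: today's Ruby does let you force setter dispatch without self. by using send, since send always performs a method call regardless of any local variables.)

class A
  attr_accessor :qwerty
  def asdf
    send(:qwerty=, 4)   # always a method call, never a local variable assignment
  end
end

a = A.new
a.asdf
puts a.qwerty   # prints 4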
PHP requires $this-> when accessing instance variables. That trips me up all the time. – Jean