Sunday, August 4, 2019

A note on Verification Planning and Management - 1

There is already a lot of knowledge and literature on verification technologies available on the web today. Today I want to write about the planning and management aspects of verification.

In any kind of project, planning plays an important role. All military missions are backed by solid planning. Likewise, the foundation of sound verification is a sound verification plan.

What are the ingredients of a sound verification plan :-

1) Schedule - A tentative estimate of the timelines and schedule for the complete project helps you understand how much effective time you have to perform comprehensive verification of your module.

2) What to Verify : Testplan - An understanding of "what to verify" (the verification goals), or the feature sets and scoping, is the primary requirement in verification. Depending upon whether you are doing IP-level, subsystem-level or SoC-level verification, your scoping and feature set differ. The same is applicable to your coverage criteria.

3) How to Verify : Architecture - Your verification architecture determines the "how" part of verification. A simple, user-friendly and yet powerful testbench architecture leads to greater efficiency and efficacy of verification. Needless to say, your verification architecture is the foundation of your success in verification. This includes the flows and methodologies as well.

4) Tools, Flows and Methodologies - Depending upon the above two criteria, you decide what tools and technologies you want to employ in order to achieve your verification goals.

It has become a cliche nowadays to have very sophisticated verification flows which are very complex and not at all user friendly. The principles of great flow design are :-
            1) Simplicity
            2) User-friendliness
            3) A systematic and methodical approach

5) Linux resources - You also need to estimate the disk space needed for your team and also for performing regressions.

6) Team Demographics - Now that you know the what and how of verification, you need to decide your team structure. This is a critical aspect of verification.

To give an example, I would relate it to the recent surgical strikes by India on terrorists.
Each special-forces team comprised specialists in different areas: snipers, medics, close-combat specialists, communication experts and so on.

Similarly, a strong verification team should consist of experts in different areas who work together to execute the project with absolute perfection. Sometimes you may not have experts; in that case you need to develop expertise through learning sessions for the entire team.

7) Knowledge Management - That brings us to the next topic: how to develop and build the expertise to execute the project, and also how to have a mechanism to capture this knowledge in tangible form so that it is not lost and can be passed on to posterity.

A systematic ramp-up plan is essential for sound project execution. Here I would refer to the Japanese word "nemawashi", which means laying the groundwork.

8) Metrics Generation - Every project's progress is essentially tracked by senior management. You need to decide how you are going to generate the metrics for your progress tracking, and you need to build in those hooks early in your verification plan and methodology so that the metrics tracking data can be generated seamlessly. Coverage data is one such metric; see the sketch after this list.

10) "The Parkinsons Law" - This famous law from economics states that " Every work expands itself to fit in the available time". We need to be very aware of this phenomenon. The antidote to this phenomenon is to divide all the macro tasks to micro-level tasks and allocate time for them in an atomic manner. So that would effectively mitigate the Parkinson's Law.

These are some of the salient points that one can take into account for an effective verification plan. In my next note, I will discuss verification management.

Wednesday, May 7, 2014

SV - Quiz 3

The following code is compliant with the LRM. What is the output in your simulator?



Monday, April 14, 2014

SV - Quiz 1

What would be the output of the following code:-


Thursday, April 10, 2014

Deciphering UVM - 1

The Methodology
There is enough written about OVM/UVM and the methodology: the need for it, the use of it, and so on. UVM is a collaborative effort by the EDA fraternity. It has standardized, or at least attempted to standardize, the building of testbenches across the semiconductor industry. The pros and cons can be debated, but that's not the point. I was not quite happy just using the methodology. So the question turned from "how" into "why", and that is what I am exploring. In doing so I will go by the first-principles way...

Let's look at the very basic building blocks of UVM.

UVM base classes

uvm_void
Present in uvm/src/base/uvm_misc.svh
This is the basic base class from which everything gets created; we can say it is the basic substratum of UVM (a little exaggeration!). It is an abstract class with no data members or methods.
The code looks like this

virtual class uvm_void;
endclass

uvm_object
Present in uvm/src/base/uvm_object.svh
The code looks like this

virtual class uvm_object extends uvm_void;
'
'(properties / methods)
'
endclass

The uvm_object class is the base class for all further classes.
This class has approx. 36 methods which can be used for different purposes.
The UVM reference doc has details of how to use these methods. Of course the code also has comments. Some of the important methods are set_name, get_name, get_full_name, get_inst_id, get_inst_count, get_object_type, get_type_name, print, sprint, do_print, convert2string, record, copy, do_copy, compare, pack_bytes, unpack, set_int_local, use_uvm_seeding, reseed etc. There are some pure virtual methods like get_type_name and create.
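As a minimal sketch of how these methods get used (the class and field names below are illustrative, not from the UVM source), a user-defined uvm_object with field automation gets print, copy and compare almost for free:

import uvm_pkg::*;
`include "uvm_macros.svh"

class my_cfg extends uvm_object;
  rand int unsigned num_pkts;

  // Field automation implements do_print/do_copy/do_compare for us
  `uvm_object_utils_begin(my_cfg)
    `uvm_field_int(num_pkts, UVM_ALL_ON)
  `uvm_object_utils_end

  function new(string name = "my_cfg");
    super.new(name);
  endfunction
endclass

module uvm_object_demo;
  initial begin
    my_cfg c1 = new("c1");
    my_cfg c2;
    c1.num_pkts = 10;
    c1.print();                 // table-format printout via do_print
    $cast(c2, c1.clone());      // clone = create + copy
    if (c1.compare(c2))
      `uvm_info("DEMO", "c1 and c2 match", UVM_LOW)
  end
endmodule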

uvm_report_object
Present in uvm/src/base/uvm_report_object.svh
The code looks like this 

class uvm_report_object extends uvm_object;
'
'(properties / methods)
'
endclass

This class provides an interface to the UVM reporting mechanism. Through this interface, UVM components generate different kinds of messages.
Some of the methods in this class are uvm_report, uvm_report_info, uvm_report_warning, uvm_report_error, uvm_report_fatal etc. The uvm_report_object takes help from the uvm_report_handler for most of its functions.
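As a minimal sketch (the id strings are illustrative placeholders), these methods can be exercised directly on a uvm_report_object instance:

import uvm_pkg::*;
`include "uvm_macros.svh"

module report_demo;
  initial begin
    uvm_report_object ro = new("ro");
    ro.set_report_verbosity_level(UVM_HIGH);  // filtering is done per object
    ro.uvm_report_info("CFG", "verbosity raised to UVM_HIGH", UVM_MEDIUM);
    ro.uvm_report_warning("CHK", "an example warning message");
  end
endmodule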

uvm_report_handler
Present in uvm/src/base/uvm_report_handler.svh
The code looks like this

class uvm_report_handler;
'
'(properties / methods)
'
endclass

uvm_report_handler handles most of the functions of the uvm_report_object. There is a one-to-one relationship between uvm_report_object and uvm_report_handler, but it can be many-to-one if several uvm_report_objects are configured to use the same uvm_report_handler object. The uvm_report_handler delegates handling of its reports to the uvm_report_server. The relationship between uvm_report_handler and uvm_report_server is many-to-one.

uvm_report_server
Present in uvm/src/base/uvm_report_server.svh
The code looks like this

class uvm_report_server extends uvm_object;
'
'(properties / methods)
' 
endclass

uvm_report_server is a global server that processes all of the reports generated by a uvm_report_handler. The testbench is not supposed to use the methods of the uvm_report_server directly.

uvm_report_catcher
Present in uvm/src/base/uvm_report_catcher.svh
The code looks like this

virtual class uvm_report_catcher extends uvm_callback;
'
'(properties / methods)
'
endclass

The uvm_report_catcher is used to catch messages issued by the uvm_report_server.
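As a minimal sketch of its typical use (the message id "DUT_ERR" is an illustrative placeholder), a catcher can, for instance, demote a known benign error before the server processes it:

import uvm_pkg::*;
`include "uvm_macros.svh"

class demote_catcher extends uvm_report_catcher;
  function new(string name = "demote_catcher");
    super.new(name);
  endfunction

  // Called for every message; we may modify it before it is processed
  function action_e catch();
    if (get_severity() == UVM_ERROR && get_id() == "DUT_ERR")
      set_severity(UVM_WARNING);  // demote this specific error
    return THROW;                 // let the (modified) message through
  endfunction
endclass

// Registration, typically done in the test:
//   demote_catcher dc = new();
//   uvm_report_cb::add(null, dc);  // null => applies to all report objects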

We discussed the uvm_report* classes just to get a hang of their utility and how things work under the hood.

Going back to our basic UVM base classes...

uvm_component
Present in uvm/src/base/uvm_component.svh
The code looks like this

virtual class uvm_component extends uvm_report_object;
'
'(properties / methods)
'
endclass

The uvm_component class is the root base class for all UVM components. It contains all the features of uvm_object and uvm_report_object. On top of that it provides interfaces for hierarchy, phasing, configuration, reporting, transaction recording and the factory. A uvm_component is automatically seeded during construction using UVM seeding, if enabled; all other objects must be manually reseeded. Some of the methods in this class, grouped by interface, are:
Hierarchy: get_parent, get_full_name, get_children, get_child, set_name, lookup, get_depth.
Phasing and process control: build, build_phase, connect, connect_phase, end_of_elaboration, end_of_elaboration_phase, start_of_simulation_phase, run, run_phase, pre_reset_phase, reset_phase, post_reset_phase, pre_configure_phase, configure_phase, post_configure_phase, pre_main_phase, main_phase, post_main_phase, shutdown_phase, extract_phase, check_phase, report_phase, final_phase, phase_started, phase_ready_to_end, phase_ended, set_domain, get_domain, suspend, resume, status, kill, do_kill_all, stop_phase.
Configuration: set_config_int, set_config_object, set_config_string, get_config_int, get_config_object, get_config_string, check_config_usage, apply_config_settings, print_config_settings.
Objections: raised, dropped, all_dropped.
Factory: create_object, create_component, set_type_override_by_type, set_inst_override_by_type, set_type_override, set_inst_override.
Reporting: set_report_verbosity_level_hier.
Transaction recording: begin_tr, do_begin_tr, end_tr, do_end_tr etc.

From the list of methods in the uvm_component class, we can easily infer that this is a heavyweight class hosting methods that are at the heart of the UVM methodology. Hence it is important to exercise discretion about when to use a uvm_component and when to use a uvm_object, which is much lighter in weight.
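As a minimal sketch of a user component exercising the hierarchy and phasing interfaces (the class name is an illustrative placeholder):

import uvm_pkg::*;
`include "uvm_macros.svh"

class my_driver extends uvm_component;
  `uvm_component_utils(my_driver)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    `uvm_info("BUILD", {"built: ", get_full_name()}, UVM_LOW)
  endfunction

  task run_phase(uvm_phase phase);
    phase.raise_objection(this);  // keep the run phase from ending
    #10;                          // stand-in for real driving activity
    phase.drop_objection(this);
  endtask
endclass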

uvm_transaction
Present in uvm/src/base/uvm_transaction.svh
The code looks like this

virtual class uvm_transaction extends uvm_object;
'
'(properties / methods)
'
endclass

The uvm_transaction class is the root base class for UVM transactions. It inherits all the methods and properties of uvm_object, and on top of that it adds a timing and recording interface: timestamp properties, notification events and transaction recording support. It is important to note that use of this class as a base for user-defined transactions is deprecated. Its subtype uvm_sequence_item shall be used as the base class for all user-defined transaction types.
Some of the methods in this class are: accept_tr, do_accept_tr, begin_tr, end_tr, get_tr_handle, disable_recording, enable_recording, is_active, get_event_pool, set_initiator, get_initiator, get_begin_time, get_end_time, set_transaction_id, get_transaction_id etc.

uvm_sequence_item
Present in uvm/src/seq/uvm_sequence_item.svh
The code looks like this

class uvm_sequence_item extends uvm_transaction;
'
'(properties / methods)
'
endclass

This is the base class for user-defined sequence items as well as for the uvm_sequence class. This class provides the basic functionality for objects, both sequence items and sequences, to operate in the sequence mechanism. 
Some of the methods in this class are: get_type_name, set_sequence_id, get_sequence_id, set_item_context, set_use_sequence_info, get_use_sequence_info, set_id_info, set_sequencer, get_sequencer, set_parent_sequence, get_parent_sequence, get_depth, is_item, get_full_name, get_root_sequence_name, get_sequence_path, uvm_report, uvm_report_info etc. 
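As a minimal sketch of the recommended approach (field names are illustrative placeholders), a user-defined transaction extends uvm_sequence_item rather than uvm_transaction:

import uvm_pkg::*;
`include "uvm_macros.svh"

class bus_item extends uvm_sequence_item;
  rand bit [31:0] addr;
  rand bit [31:0] data;
  rand bit        write;

  `uvm_object_utils_begin(bus_item)
    `uvm_field_int(addr,  UVM_ALL_ON)
    `uvm_field_int(data,  UVM_ALL_ON)
    `uvm_field_int(write, UVM_ALL_ON)
  `uvm_object_utils_end

  function new(string name = "bus_item");
    super.new(name);
  endfunction
endclass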

Well, this is a basic overview of the base classes that ultimately build UVM. The details of the methods and their applications can be found in the UVM source code. From a user perspective this post should connect some of the dots. This is just the beginning; more discussions to follow...

(source : UVM source code)

Monday, April 7, 2014

A Ruby routine to generate Kaprekar numbers

What are Kaprekar numbers ?
Kaprekar numbers are special numbers that were discovered by the Indian mathematician Dattaraya Ramchandra Kaprekar. A more detailed explanation of Kaprekar and his numbers can be found here:
http://en.wikipedia.org/wiki/Kaprekar_number
http://en.wikipedia.org/wiki/D._R._Kaprekar
This is a humble effort from my side to pay tribute to this great mathematician. Here is a Ruby routine to generate "n" Kaprekar numbers...

The output might look something like this...

Thursday, March 6, 2014

OpenSource Simulators / Tools for Verification / VLSI Design

It has been a long-cherished desire to see the development of an opensource community in the VLSI industry. Though companies prefer paid tools because of their apparent reliability, many small start-ups never take off because of the humongous cost of tools. The tools are costly because there is a huge amount of engineering R&D effort behind the convenient user interface. Nevertheless, a parallel opensource movement in the semiconductor community would definitely accelerate technological development in the ASIC area.

The 2013 reports show that current semiconductor industry revenues total up to $315 billion. Of that, the EDA industry had revenues of approximately $1.72 billion, which is about 0.5% of the entire semiconductor industry revenues. This effectively explains how the value addition takes place further along the product lifecycle, from raw RTL code to finished products. We as verification engineers live in the domain of RTL code, and hence our job is to do quality verification with the tools available. However, tools are not available that easily; they are expensive, and smaller companies can afford only limited licenses. Having said that, the EDA industry acts as a key enabler in creating bug-free ASICs.

Within the given framework, ASIC development has to progress for new technologies to be proven fast. Hence a parallel opensource EDA development is not a bad idea. Actually it is a great idea.

What are the current free tools available :-
I think I will stop here. I believe this gives us an insight into the opensource tools that are out there, and enthusiasts all over the world are working on them. Now the next thing is to create an opensource SystemVerilog simulator. The parser would be complex, but if everyone across the world works on it, it's not difficult!! Please leave a comment if you are interested in being a part of the opensource SystemVerilog simulator development, and we can start a group....

Monday, March 3, 2014

Constants in SystemVerilog

Here is a brief note on SV constants. Constants are data objects that never change. SV provides the following types of constants.

Elaboration time constants:-

1) parameter - A parameter has two attributes, type and range.
    e.g. parameter logic[7:0] BASE_ADDR = 8;

By default, a parameter takes the type and range of the final value assigned to it, and it is unsigned. Hierarchical references are not allowed in parameter declarations, the reason being that these are elaboration-time constants. Package references are, however, allowed in parameter declarations. A parameter can be overridden by a defparam statement; however, there is an exception in the case of type parameters.
Type parameters:- A parameter constant which specifies a data type, e.g.,

module m1 #(parameter type p1 = shortint);
    p1 i = 0;  // i here is a shortint

    initial  begin
      int j = 0;
      j = i + 2;
      .
      .
    end
endmodule

A type parameter cannot be overridden by a defparam statement.

2) localparam - Local parameters (localparam) are identical to parameters except that they cannot be overridden by defparam or instance parameter value assignments. Inside a compilation unit, generate block, package or class body, parameter and localparam are synonymous.

3) specparam - this is a parameter type intended only for providing timing values or delays. These can be declared within specify blocks or the module body.

Note: specparam and parameter are not interchangeable.

Run-time constant:-

1) const - A const can be set during elaboration or at simulation/run time, whereas a localparam can be set only at elaboration time.

The following code-snippet might give some insights into the usage details of the different constant types.
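A minimal sketch along those lines (all names here are illustrative placeholders):

module const_demo #(parameter int WIDTH = 8);  // overridable per instance
  localparam int DEPTH = 2 ** WIDTH;           // fixed at elaboration
  specparam t_delay = 1.5;                     // timing-only constant

  initial begin
    const automatic int seed = $urandom;       // set once, at run time
    $display("WIDTH=%0d DEPTH=%0d t_delay=%f seed=%0d",
             WIDTH, DEPTH, t_delay, seed);
  end
endmodule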



Parameter as $ : A parameter can be assigned the value $ for integer types only.
The LRM gives an example in terms of assertions.

It would be interesting to run the following code-snippet and see how different simulators respond to it.
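A minimal sketch of such a snippet (the names are placeholders; $isunbounded reports whether a parameter was assigned $):

module dollar_demo #(parameter int unsigned LIMIT = $);
  bit clk, req, ack;

  // LIMIT acts as an unbounded upper range in the property below
  property p_req_ack;
    @(posedge clk) req |-> ##[1:LIMIT] ack;
  endproperty
  a_req_ack: assert property (p_req_ack);

  initial $display("LIMIT unbounded? %0d", $isunbounded(LIMIT));
endmodule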


A note on defparam : Parameter values can be changed in any module, interface or program via cross-module references using the defparam statement.

However, the SV-2012 LRM does indicate that this feature may be deprecated in future versions, hence it is a good idea to avoid using it where re-usability is of prime importance (which is always!!).

Tuesday, December 17, 2013

+: operator in SystemVerilog

An interesting code snippet on usage of +: operator in SystemVerilog.

module test;
bit [7:0] a;
bit [7:0] r;
integer i;
integer j;

initial begin
  //a = 8'hAB;
  //a = 8'h1C;
  a = 8'h19;           // 8'b0001_1001
  i = 4;
  j = a[i[7:0]];       // variable bit-select: a[4] = 1
  $display("a = %h", a);
  $display("j = %0d", j);
  j = a[0+:3];         // 3 bits starting at bit 0: a[2:0] = 3'b001
  $display("j = %0d", j);
  j = a[2+:2];         // 2 bits starting at bit 2: a[3:2] = 2'b10
  $display("j = %0d", j);
  r = {8{1'b0}};       // replication: all zeros
  $display("r = %h", r);
  #3 $finish;
 end
endmodule

Fibonacci series using SystemVerilog

This is code for generating the Fibonacci series using SystemVerilog. Recursion is used here in an inefficient manner. An interesting exercise would be to optimize the following code :-
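A minimal sketch of the idea (naive recursion recomputes the same subproblems over and over, which is exactly what makes it inefficient):

module fib;
  // Recursive functions must be declared automatic
  function automatic int fibonacci(int n);
    if (n < 2) return n;
    return fibonacci(n - 1) + fibonacci(n - 2);
  endfunction

  initial begin
    for (int i = 0; i < 10; i++)
      $write("%0d ", fibonacci(i));
    $display("");
  end
endmodule

Memoizing the results, or iterating with two running values, would bring the cost down from exponential to linear.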


How to parse command-line arguments in Ruby using OptionParser

I do my scripting in Ruby. I did a lot of scripting in Perl in the past. In Perl, people usually prefer Getopt::Long to elegantly process command-line arguments.

As with everything in Ruby, there is more than one way to do a particular thing. Ruby has OptionParser, which provides a beautiful framework for processing command-line arguments.

Here is a simple demonstration of its usage :-

#!/usr/bin/ruby

require 'optparse'
 
options = {} # Hash where all the options are stored
optparse = OptionParser.new do |opts|
  opts.banner = "Usage: ./script -f <input_file>"
  opts.on("-f INPUT_FILE", String,"regression list") do |a|
    options[:input_file] = a.chomp
  end
end

if ARGV.empty?  #In case no arguments are provided, help message is displayed
  puts optparse      #./script -h or --help also displays the usage information
  exit
else
  optparse.parse!
end

regression_list = options[:input_file]

< Rest of the code for dealing with the regression list and simulations etc.>

Note - if you use parse! then ARGV is taken as the input argument by default.
If you choose to use parse, then you have to supply the argument list explicitly, e.g., optparse.parse(ARGV)

On the mathematics of verification

Many of you would wonder why I am writing after more than 3 years. My last post was way back in 2010. Well, I am still in the business of ASIC verification. These years have been immensely profound and educating. When you get a glimpse of infinity, all our lifetime's learning appears to be a drop of water in the ocean.

But the quest for knowledge still goes on. I may not be able to ever understand the workings of this universe, but I still want to pursue the quest, even if it is for the mundane.

So in continuation of my last post, "On the mathematics of assertions", I have got some really concrete insight into the problem from a much broader perspective. The problem of verification is far more complex than we perceive through dynamic simulations. There is a profound mathematical basis for the behavior of complex boolean systems. I have got a hold on the mathematics, but there is a lot of work to be done before I can come to any publishable conclusion.

Friday, April 9, 2010

On the mathematics of assertions

Assertions are boolean functions and are sampled at regular time intervals. This currently is the known definition. Boolean algebra doesn't have any notion of continuity; there are only two logical states, true and false. Time is not a discrete quantity; it is a continuously increasing quantity. Digital circuits can be modeled as boolean functions, but the state of these functions cannot be determined with reference to a continuously varying quantity, because in that case there is a violation of the algebraic properties of boolean functions. I don't have sufficient mathematics in place to prove this. The only way that seems reasonable is either to model time in terms of a boolean quantity or to convert the resulting boolean function into a decimal function. The domain of comparison must be the same. It's the same way that we cannot compare an integer with a complex number: the imaginary part adds an additional attribute to it. For the time being there is a tendency to rely heavily on the accuracy of assertions for the functional correctness of a digital circuit. But aren't we missing something?

Saturday, April 3, 2010

Patents and intellectual property

Many of the technology companies in the American Silicon Valley have branches in the Indian silicon valley, Bangalore. Most of these companies keep filing patents on a regular basis and keep enriching their knowledge treasury. Most of these patents are technology related. What then are the areas where verification engineers can file patents? We can take our time and think about it. But a few things first. Many of us work in the services/outsourcing model. We are bound by our agreement with our employers that whatever work we do is the property of the employer. On one hand you work onsite running regressions and debugging failures; on the other hand, if you by dint of your determination do something worthy of a patent, it goes to the employer. That's how it works. So why not think about independent consultancy? It offers more money, more freedom and satisfaction. The model is yet to establish itself here. Only a reputed few enjoy that option, but gradually it would work its way out.

I was curious about the concept of patents, so I did some googling. What I found was that a patent is valid only in the country where it is filed and granted. So if you file a patent in India it would not be valid in the U.S., and vice-versa. Filing a patent in India is affordable, while if an Indian engineer (who is working in India) wants to file a patent in the U.S., the fees are not affordable. I am not going into the specific details, but this is my general finding. Also, I found that if your patent work is not related to your employer's primary business area, you might get to keep it for yourself, of course after discussions with the relevant people in your company.

So we can start by filing patents in India at least, thereby enriching our country's intellectual property base. Business repercussions are a possibility, but that is something that would come as an effect, and maybe in a positive way. The room for innovation is very big. All we need to do is start thinking. I am sure there are ample areas where our employers would not object to us owning the patents.
The U.S. is the land of opportunity and enterprise; nevertheless, many of us have decided to stay back in India and enjoy the roadside tea, the informal socialising, the sambar, rasam, the idly and masala dosa. By the way, is there a patent for masala dosa..?

Wednesday, March 31, 2010

ASIC verification in the next few years - probable trends

There was VHDL, and then came Verilog. Soon SystemC followed. Synopsys launched Vera and Cadence acquired Verisity. Methodology for designs remained the same, but verification tools and techniques have changed by leaps and bounds. Though it is not very blatantly visible, since the whole thing is packaged so nicely as a part of the marketing, the fact remains that we are borrowing technologies that are already standard practices in software engineering. OOP is an example. We are just giving it a different flavour, but that's part of the customisation process. Under the hood everything is C or C++. Barring the performance criteria, we could have used Perl or Python to build simulators. I am not getting into the relative merits vis-a-vis demerits; there is always a business reason. So the bottom line is that we are adapting software engineering methodologies and calling it a layered architecture. My view is that if we are borrowing from software, then we might as well borrow well. Software engineering is rich in terms of methodologies, techniques and development models. I don't see any reason why we cannot bring Agile or Scrum into our verification development processes. As a forecast, I can bet that these things will come with a different name and in a more restrictive and proprietary form in future, because it is hardware.

Google has already started developing the Go language, which in future might replace C or C++. It would be an ideal language for developing simulators, especially in the case of ASIC verification, where concurrency plays such an important role. Then there could be a possibility of distributed systems that can probably address the problems of ever-increasing design and testbench complexities. After all, one needs to run the simulation and finish in time. Formal verification has already started influencing verification cycles and productivity. It's not unlikely that an algorithmic approach might take precedence over simulation.

With developments in programmable logic technology and better tools for probing signals, people may not even think of simulations in future. The scope is huge. The intensity of a renaissance in verification technology can be great. But for the time being, what we can do is start adopting software engineering techniques in verification in a proactive way. The wheel is already invented; all we have to do is use it, and use it well.

Tuesday, March 30, 2010

The state of the semiconductor industry in India from a career perspective

The semiconductor industry started in India when Texas Instruments opened their development centre in Bangalore more than a decade ago. I remember, ten years ago not many people were aware of the VLSI market in India. We had read about the term in textbooks. The concept of ASIC verification was not widely known. Then gradually some educational institutes started incorporating topics in their curriculum, and some more institutes started running short-term certificate courses.

In the following five years things started changing. There was a huge demand in the market for skilled VLSI professionals in India because outsourcing had started big time. An engineer with some basic knowledge of digital electronics and HDL was considered a great asset. The pay was good in those days. Everything looked very green and promising. Once you entered the VLSI industry, you were considered one of the esteemed class. Then recession came; it was bad for software as well as for VLSI. But the VLSI resource base was still limited, so the impact of recession was not felt to a very high extent, whereas software engineers were losing jobs in a big way.

After the recession, 2003 started to look up. Things were improving and the market was good. VLSI engineers were again in high demand. The general engineering pool was full of a lot of attitude; people were very choosy and picky in terms of work and salary, and services companies were obliging them because they could easily afford a big "buffer" strength. But this was not to last for more than five years, and 2008 started to look down. This time the recession had a full-grown global nature; it affected every possible sector. Semiconductor, for the first time in India, saw what it had never seen before. Major semiconductor companies ramped down rapidly. Consequently, service providers had to reduce their additional resource pool. "Buffer" was considered a dangerous word, and "bench strength", as it is often referred to, was reduced nearly to zero. There was a huge set of unemployed people who had nowhere to go, because they had skill sets which could not be used anywhere else. A Java programmer can switch domains from finance to retail and still do Java programming. But what would ASIC verification engineers do with their Verilog or VHDL knowledge? Their C/C++ skills were just good enough for processor programming, and their Perl/UNIX skills were just enough to build the environment. They could not write code to build applications; they could only do ASIC verification. They could not enhance their skills because they needed simulators, which were highly expensive. Opensource simulators were limited in functionality and could not simulate HVLs like SystemVerilog. There were very few engineers who had knowledge of domains like wireless, graphics etc., because most of them were only running regressions and debugging failures. Even the analogy of an industrial worker falls short, because an industrial worker with a specific skill set has more versatility in career choice than an ASIC verification engineer.

2010 started to look a little better and a few companies started recruiting. For 4 positions, companies would get 200 applications. And every company wanted experts in SystemVerilog, or hands-on project experience in SystemVerilog. But until 2009 very few companies were actually active in SystemVerilog; now they needed people with strong SystemVerilog skills. This was ironic. On top of that there was the recession. Then there were the EDA companies, who kept pushing for SystemVerilog to keep their business going; they also needed to survive. For an average ASIC verification professional, life was really bad. People who had engrossed themselves completely in their work suddenly woke up to find that they were jobless, because they had kept themselves away from the management politics; so when the time came, there was nobody to back them up. In India there is no system of government aid for the unemployed. People who had financial liabilities and a family to take care of saw the worst nightmare of their lives.

Joblessness in itself is a very difficult situation, but generic skill sets give a professional more versatility in terms of career options. ASIC verification skills are very specific. At the same time, in Indian semiconductor companies there is very limited exposure to domain knowledge. Most of the semiconductor industry in India is comprised of subsidiary branches of American, European or Japanese semiconductor companies and Indian service providers. There is hardly any Indian company which is willing to take the risk of developing a product. The work that is done in the Indian branches of the MNCs already has very limited developmental content, and on top of that the more repetitive and laborious kinds are outsourced to service providers. Engineers from reputed institutes work in the coveted MNCs' Indian branches. In a country with a population of 1 billion and only a handful of IITs or similar colleges, the majority of the engineering talent belongs to the average group. However, as far as ASIC verification is concerned, any average engineer can learn the technologies and tools and perform verification. It is only a matter of opportunity.

Humanity is not a characteristic of economics. In business the only thing that is fair is profit. The current economic recession has taught us many lessons. These lessons can help us if they are remembered and practiced. We may not be able to change the business dynamics of the semiconductor industry as individuals, but together we can definitely make some significant changes in our respective careers. I can think of some of them as follows:-

1) Those of us who are not doing too well in terms of skills or performance in ASIC verification can either think of changing skill sets early in our careers or revamp existing skills to meet future challenges. The domain of knowledge is huge and there can be other fields that generate passion and money

2) We must realise that the Indian semiconductor industry is small. Though there are a lot of companies, the positions are few

3) Most of the recruitment happens through internal referrals. Companies should open up their positions to everyone and conduct recruitment in a fair manner. Most of us are aware of the plausible areas of corruption in the recruitment process

4) License costs of simulators and tools are humongous, so Indian semiconductor startups will never take off until their primary overhead, the license cost of tools, can be compensated for. This can happen only in one situation: when we start developing our own tools. A good solution to the problem is to develop opensource simulators and tools. The whole world has witnessed the opensource revolution and the great products and tools that are developed through collaboration. Opensource tools would be a big leap not only for the semiconductor market in India but also for the whole world. The wherewithal needs to be affordable for the product to emerge out of the foundry

Refer to the following article in eetimes:-

http://www.eetimes.com/news/design/columns/tool_talk/showArticle.jhtml?articleID=17404385

5) Lastly, we all must collaborate with fellow engineers in a spirit of camaraderie, irrespective of affiliations

Though businesses can apparently grow at the cost of ethics and morality, such businesses are doomed to fail. The current recession has shown this to the world. Economics, though harsh, is a great leveler and is driven by fundamentals. Fundamentals, after all, are driven by values. Hence businesses without values cannot survive, and so is the case with technology.