PL/SQL Developer
Experience: 4 to 6 years
Skills: MS SQL Server, Oracle, and AWS or Azure
• Experience setting up managed database services (e.g., RDS) on cloud platforms such as AWS or Azure
• Strong proficiency with SQL and its variations across popular databases
• Well-versed in writing stored procedures, functions, and packages, and in using collections (see the sketch after this list)
• Skilled at optimizing large, complicated SQL statements.
• Should have worked in migration projects.
• Should have worked on creating reports.
• Should be able to distinguish between normalized and de-normalized data modelling designs and use cases.
• Knowledge of best practices when dealing with relational databases
• Capable of troubleshooting common database issues
• Familiarity with tools for profiling and optimizing server resource usage
• Proficient understanding of code versioning tools such as Git and SVN
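For context, a minimal sketch of calling PL/SQL stored procedures and functions from Python with the python-oracledb driver; the package, procedure, and connection details below are hypothetical placeholders, not a prescribed implementation:

```python
# Minimal sketch: invoking a hypothetical PL/SQL package from Python
# via the python-oracledb driver. All names and credentials are placeholders.
import oracledb

conn = oracledb.connect(user="app", password="secret", dsn="dbhost/ORCLPDB1")
cur = conn.cursor()

# Call a packaged procedure: reporting_pkg.refresh_summary(p_year)
cur.callproc("reporting_pkg.refresh_summary", [2024])

# Call a packaged function that returns a NUMBER:
# reporting_pkg.row_count(p_table_name) RETURN NUMBER
total = cur.callfunc("reporting_pkg.row_count", oracledb.NUMBER, ["SALES_FACT"])
print(f"rows loaded: {total}")

conn.commit()
conn.close()
```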
Our client is the world’s largest media investment company and a part of WPP. In fact, they are responsible for one in every three ads you see globally. We are currently looking for a Senior Software Engineer to join us. In this role, you will be responsible for coding and implementing the custom marketing applications that Tech COE builds for its customers, and for managing a small team of developers.
What your day job looks like:
- Serve as a Subject Matter Expert on data usage – extraction, manipulation, and inputs for analytics
- Develop data extraction and manipulation code based on business rules
- Develop automated and manual test cases for the code written
- Design and construct data store and procedures for their maintenance
- Perform data extract, transform, and load activities from several data sources.
- Develop and maintain strong relationships with stakeholders
- Write high quality code as per prescribed standards.
- Participate in internal projects as required
Minimum qualifications:
- B. Tech./MCA or equivalent preferred
- At least 3 years of hands-on experience in Big Data, ETL development, and data processing.
What you’ll bring:
- Strong experience working with Snowflake, SQL, and PHP/Python (see the sketch below).
- Strong experience writing complex SQL queries
- Good communication skills
- Good experience working with a BI tool such as Tableau or Power BI.
- Sqoop, Spark, EMR, and Hadoop/Hive are good to have.
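As a rough illustration of the Snowflake + SQL + Python combination listed above, a minimal extraction sketch using the snowflake-connector-python package; the account, credentials, and table are hypothetical:

```python
# Sketch: running an extraction query against Snowflake from Python.
# Account, credentials, and table names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount", user="etl_user", password="secret",
    warehouse="ANALYTICS_WH", database="MARKETING", schema="PUBLIC",
)
cur = conn.cursor()

# A "business rule" style extraction: campaign spend by channel
cur.execute("""
    SELECT channel, SUM(spend) AS total_spend
    FROM ad_spend
    WHERE spend_date >= DATEADD(day, -30, CURRENT_DATE)
    GROUP BY channel
    ORDER BY total_spend DESC
""")
for channel, total_spend in cur.fetchall():
    print(channel, total_spend)

cur.close()
conn.close()
```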
ketteQ is a supply chain planning and automation platform. We are looking for an extremely strong and experienced Technical Consultant to help with system design, data engineering, and software configuration and testing during the implementation of supply chain planning solutions. This job comes with a very attractive compensation package and a work-from-home benefit. If you are a high-energy, motivated, initiative-taking individual, then this could be a fantastic opportunity for you.
Responsible for technical design and implementation of supply chain planning solutions.
Responsibilities
- Design and document system architecture
- Design data mappings
- Develop integrations
- Test and validate data
- Develop customizations
- Deploy solution
- Support demo development activities
Requirements
- Minimum 5 years' experience in the technical implementation of enterprise software, preferably supply chain planning software
- Proficiency in ANSI SQL/PostgreSQL
- Proficiency in ETL tools such as Pentaho, Talend, Informatica, and MuleSoft
- Experience with web services and REST APIs (see the sketch after this list)
- Knowledge of AWS
- Salesforce and Tableau experience a plus
- Excellent analytical skills
- Must possess excellent verbal and written communication skills and be able to communicate effectively with international clients
- Must be a self-starter and highly motivated individual who is looking to make a career in supply chain management
- Quick thinker with proven decision-making and organizational skills
- Must be flexible to work non-standard hours to accommodate globally dispersed teams and clients
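As a small illustration of the integration work listed above (REST APIs feeding PostgreSQL), a hedged Python sketch; the endpoint, token, table, and credentials are hypothetical:

```python
# Sketch: a tiny REST-to-PostgreSQL integration. The API endpoint,
# table, and credentials are placeholders, not a real system.
import requests
import psycopg2

# Pull forecast rows from a (hypothetical) planning web service
resp = requests.get("https://api.example.com/v1/forecasts",
                    headers={"Authorization": "Bearer <token>"},
                    timeout=30)
resp.raise_for_status()
rows = resp.json()  # expected: list of {"sku": ..., "week": ..., "qty": ...}

conn = psycopg2.connect("dbname=planning user=etl password=secret host=localhost")
with conn, conn.cursor() as cur:
    for r in rows:
        # Upsert so reruns are idempotent; assumes a UNIQUE (sku, week) constraint
        cur.execute(
            """INSERT INTO forecast (sku, week, qty)
               VALUES (%s, %s, %s)
               ON CONFLICT (sku, week) DO UPDATE SET qty = EXCLUDED.qty""",
            (r["sku"], r["week"], r["qty"]),
        )
conn.close()
```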
Education
- Bachelor's in Engineering from a top-ranked university with above-average grades
Responsibilities
- Design, plan, and control the implementation of business solution requests/demands.
- Execute best practices in design and coding, and guide the rest of the team in accordance with them.
- Gather requirements and specifications to understand client needs in detail and translate them into system requirements
- Drive complex technical projects from planning through execution
- Perform code review and manage technical debt
- Handling release deployments and production issues
- Coordinate stress tests, stability evaluations, and support for the concurrent processing of specific solutions
- Participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, perform code reviews and unit test plan reviews
Skills
- Degree in Informatics Engineering, Computer Science, or in similar areas
- Minimum of 5 years' work experience in similar roles
- Expert knowledge in developing cloud-based applications with Java, Spring Boot, Spring REST, Spring JPA, and Spring Cloud
- Strong understanding of Azure Data Services
- Strong working knowledge of SQL Server, SQL Azure Database, NoSQL, data modeling, Azure AD, ADFS, and Identity & Access Management.
- Hands-on experience in ThingWorx platform (Application development, Mashups creation, Installation of ThingWorx and ThingWorx components)
- Strong knowledge of IoT platforms
- Development experience with microservices-architecture best practices, Docker, and Kubernetes
- Experience designing/maintaining/tuning high-performance code to ensure optimal performance
- Strong knowledge of web security practices
- Experience working in Agile Development
- Knowledge of Google Cloud Platform and Kubernetes
- Good understanding of Git, source control procedures, and feature branching
- Fluent in English - written and spoken (mandatory)
- Analyze and organize raw data
- Build data systems and pipelines
- Evaluate business needs and objectives
- Interpret trends and patterns
- Conduct complex data analysis and report on results
- Build algorithms and prototypes
- Combine raw information from different sources
- Explore ways to enhance data quality and reliability
- Identify opportunities for data acquisition
- Should be a senior developer experienced with Python and Django microservices, with a Financial Services/Investment Banking background.
- Develop analytical tools and programs
- Collaborate with data scientists and architects on several projects
- Should have 5+ years of experience as a data engineer or in a similar role
- Technical expertise with data models, data mining, and segmentation techniques
- Should have experience with programming languages such as Python
- Hands-on experience with SQL database design
- Great numerical and analytical skills
- Degree in Computer Science, IT, or similar field; a Master’s is a plus
- Data engineering certification (e.g. IBM Certified Data Engineer) is a plus
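To make the "combine raw information from different sources" responsibility above concrete, a minimal pandas sketch; the file names, columns, and connection string are illustrative only:

```python
# Sketch: combining raw data from two sources and loading the result
# into a SQL database. Names and the connection string are placeholders.
import pandas as pd
from sqlalchemy import create_engine

# Source 1: CSV export; Source 2: JSON feed
trades = pd.read_csv("trades.csv", parse_dates=["trade_date"])
accounts = pd.read_json("accounts.json")

# Combine, derive a value, and flag a simple data-quality issue
df = trades.merge(accounts, on="account_id", how="left")
df["notional"] = df["quantity"] * df["price"]
df["missing_account"] = df["account_name"].isna()

# Load into a relational store for downstream analytics
engine = create_engine("postgresql+psycopg2://etl:secret@localhost/markets")
df.to_sql("enriched_trades", engine, if_exists="replace", index=False)
```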
We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.
We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.
Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!
Technology stack at Velocity comprises a wide variety of cutting-edge technologies like NodeJS, Ruby on Rails, Reactive Programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc.
Key Responsibilities
- Build data and analytical engineering pipelines with standard ELT patterns, implement data compaction pipelines and data modelling, and oversee overall data quality
- Work with the Office of the CTO as an active member of our architecture guild
- Write pipelines to consume data from multiple sources
- Write a data transformation layer using DBT to transform millions of rows of data for the data warehouse (see the sketch after this list)
- Implement data warehouse entities with common, re-usable data model designs with automation and data quality capabilities
- Identify downstream implications of data loads/migration (e.g., data quality, regulatory)
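A minimal sketch of driving a DBT transformation layer programmatically, as referenced in the list above; this assumes dbt-core 1.5+ and an existing dbt project, and the "staging+" selector is a placeholder:

```python
# Sketch: invoking dbt from Python (dbt-core >= 1.5 exposes dbtRunner).
# Assumes an existing dbt project; the selector is a placeholder.
from dbt.cli.main import dbtRunner

dbt = dbtRunner()

# Run the staging models and everything downstream of them
result = dbt.invoke(["run", "--select", "staging+"])
if not result.success:
    raise SystemExit("dbt run failed")

# Then enforce data quality with the project's tests
result = dbt.invoke(["test", "--select", "staging+"])
print("tests passed:", result.success)
```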
What To Bring
- 5+ years of software development experience; startup experience is a plus.
- Past experience working with Airflow and DBT is preferred (a minimal Airflow sketch follows this list)
- 5+ years of experience working in any backend programming language.
- Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server, or MySQL
- Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test-Driven Development)
- Experienced in formulating ideas, building proofs-of-concept (POCs), and converting them into production-ready projects
- Experience building and deploying applications on-premise and on AWS or Google Cloud infrastructure
- A basic understanding of Kubernetes and Docker is a must.
- Experience in data processing (ETL, ELT) and/or cloud-based platforms
- Working proficiency and communication skills in verbal and written English.
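For the Airflow + DBT combination above, a minimal DAG sketch assuming Airflow 2.4+; the DAG id, schedule, and task logic are placeholders:

```python
# Sketch: a minimal Airflow DAG wiring an extract step into a dbt run.
# Assumes Airflow 2.4+; task logic and schedule are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.operators.bash import BashOperator

def extract():
    # Pull data from source systems here (placeholder)
    print("extracting...")

with DAG(
    dag_id="elt_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = BashOperator(task_id="dbt_run",
                                  bash_command="dbt run --select staging+")
    extract_task >> transform_task
```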
JOB DESCRIPTION
- 2 to 6 years of experience in imparting technical training/ mentoring
- Must have very strong concepts of Data Analytics
- Must have hands-on and training experience with Python, Advanced Python, R programming, SAS, and machine learning
- Must have good knowledge of SQL and Advanced SQL
- Should have basic knowledge of Statistics
- Should be good with operating systems (GNU/Linux) and network fundamentals
- Must have knowledge of MS Office (Excel/Word/PowerPoint)
- Self-Motivated and passionate about technology
- Excellent analytical and logical skills and team player
- Must have exceptional communication and presentation skills
- Good aptitude skills are preferred
Responsibilities:
- Ability to quickly learn any new technology and impart the same to other employees
- Ability to resolve all technical queries of students
- Conduct training sessions and drive placement-driven quality in the training
- Must be able to work independently without the supervision of a senior person
- Participate in reviews/ meetings
Qualification:
- UG: Any Graduate in IT/Computer Science, B.Tech/B.E. – IT/ Computers
- PG: MCA/MS/MSC – Computer Science
- Any Graduate/ Post graduate, provided they are certified in similar courses
ABOUT EDUBRIDGE
EduBridge is an Equal Opportunity employer and we believe in building a meritorious culture where everyone is recognized for their skills and contribution.
Launched in 2009, EduBridge Learning is a workforce development and skilling organization with 50+ training academies in 18 states pan India. The organization has been providing skilled manpower to corporates for over 10 years and is a leader in its space. We have trained over a lakh semi-urban and economically underprivileged youth on relevant life skills and industry-specific skills and provided placements in over 500 companies. Our latest product, E-ON, is committed to complementing our training delivery with an online training platform, enabling students to learn anywhere and anytime.
To know more about EduBridge please visit: http://www.edubridgeindia.com/
You can also visit us on Facebook and LinkedIn for our latest initiatives and products.
Work shift: Daytime
• Strong problem-solving skills with an emphasis on product development.
• Experience manipulating data and drawing insights from large data sets.
• Experience building ML pipelines with Apache Spark and Python
• Proficiency in implementing the end-to-end data science life cycle
• Experience in model fine-tuning and advanced grid search techniques
• Experience working with and creating data architectures.
• Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.
• Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications.
• Excellent written and verbal communication skills for coordinating across teams.
• A drive to learn and master new technologies and techniques.
• Assess the effectiveness and accuracy of new data sources and data gathering techniques.
• Develop custom data models and algorithms to apply to data sets.
• Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting, and other business outcomes.
• Develop the company's A/B testing framework and test model quality (see the sketch after this list).
• Coordinate with different functional teams to implement models and monitor outcomes.
• Develop processes and tools to monitor and analyze model performance and data accuracy.
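A minimal sketch of the statistical core of the A/B testing responsibility above, comparing conversion rates with a two-proportion z-test via statsmodels; the counts are made-up example data:

```python
# Sketch: significance check for an A/B test on conversion rates.
# The counts below are toy example data, not real results.
from statsmodels.stats.proportion import proportions_ztest

conversions = [412, 468]   # control, variant
visitors = [10000, 10000]

stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("difference is statistically significant at the 5% level")
```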
Key skills:
● Strong knowledge of data science pipelines with Python
● Object-oriented programming
● A/B testing frameworks and model fine-tuning (illustrated below)
● Proficiency in using the scikit-learn, NumPy, and pandas packages in Python
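A short scikit-learn sketch of the grid-search fine-tuning skill above, using a bundled toy dataset rather than real project data:

```python
# Sketch: model fine-tuning via grid search with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=5000)),
])

# Search the regularization strength over a small grid
grid = GridSearchCV(
    pipe,
    param_grid={"clf__C": [0.01, 0.1, 1, 10]},
    cv=5,
    scoring="roc_auc",
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```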
Nice to have:
● Ability to work with containerized solutions: Docker/Compose/Swarm/Kubernetes
● Unit testing, Test-driven development practice
● DevOps, Continuous integration/ continuous deployment experience
● Agile development environment experience, familiarity with SCRUM
● Deep learning knowledge
- 4-7 years of industry experience in IT or consulting organizations
- 3+ years of experience defining and delivering Informatica Cloud Data Integration & Application Integration enterprise applications in a lead developer role
- Must have working knowledge of integrating with Salesforce, Oracle DB, and JIRA Cloud (see the sketch after this list)
- Must have working scripting knowledge (Windows or Node.js)
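As a small illustration of the JIRA Cloud integration above, a hedged sketch against JIRA Cloud's REST API, shown in Python rather than Windows/Node.js scripting; the site, JQL, and credentials are placeholders:

```python
# Sketch: pulling issues from JIRA Cloud's REST API (v3).
# Site, JQL, and credentials are placeholders.
import requests

resp = requests.get(
    "https://your-domain.atlassian.net/rest/api/3/search",
    params={"jql": "project = OPS AND status = 'In Progress'",
            "maxResults": 50},
    auth=("user@example.com", "<api-token>"),
    timeout=30,
)
resp.raise_for_status()
for issue in resp.json()["issues"]:
    print(issue["key"], issue["fields"]["summary"])
```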
Soft Skills
- Superb interpersonal skills, both written and verbal, in order to effectively develop materials that are appropriate for variety of audience in business & technical teams
- Strong presentation skills, successfully present and defend point of view to Business & IT audiences
- Excellent analysis skills and ability to rapidly learn and take advantage of new concepts, business models, and technologies
The candidate must have expertise in ADF (Azure Data Factory) and be well versed in Python.
Performance optimization of scripts (code) and productionizing of code (SQL, Pandas, Python or PySpark, etc.)
Required skills:
Bachelor's in Computer Science, Data Science, Computer Engineering, IT, or equivalent
Fluency in Python (Pandas), PySpark, SQL, or similar
Azure Data Factory experience (minimum 12 months)
Able to write efficient code using traditional and OO concepts and modular programming, following the SDLC process.
Experience in production optimization and end-to-end performance tracing (technical root cause analysis)
Ability to work independently, with demonstrated experience in project or program management
Azure experience; ability to translate data scientists' Python code and make it efficient (production-ready) for cloud deployment (see the sketch below)
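A hedged sketch of the pandas-to-PySpark translation described above: replacing a row-wise pandas apply with a Spark-native, vectorized expression; the paths and column names are placeholders:

```python
# Sketch: "productionizing" a pandas prototype in PySpark.
# Paths and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("prod_etl").getOrCreate()

df = spark.read.parquet("abfss://data@account.dfs.core.windows.net/sales/")

# pandas prototype: df["margin"] = df.apply(lambda r: r.revenue - r.cost, axis=1)
# Spark-native, vectorized equivalent -- no Python row loop:
out = (
    df.withColumn("margin", F.col("revenue") - F.col("cost"))
      .filter(F.col("margin") > 0)
      .groupBy("region")
      .agg(F.sum("margin").alias("total_margin"))
)
out.write.mode("overwrite").parquet("abfss://data@account.dfs.core.windows.net/margin/")
```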
We’re looking to hire someone to help scale Machine Learning and NLP efforts at Episource. You’ll work with the team that develops the models powering Episource’s product focused on NLP-driven medical coding. Some of the problems include improving our ICD code recommendations, clinical named entity recognition, and information extraction from clinical notes.
This is a role for highly technical machine learning & data engineers who combine outstanding oral and written communication skills with the ability to code up prototypes and productionize them using a large range of tools, algorithms, and languages. Most importantly, they need to have the ability to autonomously plan and organize their work assignments based on high-level team goals.
You will be responsible for setting an agenda to develop and ship machine learning models that positively impact the business, working with partners across the company including operations and engineering. You will use research results to shape strategy for the company, and help build a foundation of tools and practices used by quantitative staff across the company.
What you will achieve:
- Define the research vision for data science, and oversee planning, staffing, and prioritization to make sure the team is advancing that roadmap
- Invest in your team’s skills, tools, and processes to improve their velocity, including working with engineering counterparts to shape the roadmap for machine learning needs
- Hire, retain, and develop talented and diverse staff through ownership of our data science hiring processes, brand, and functional leadership of data scientists
- Evangelise machine learning and AI internally and externally, including attending conferences and being a thought leader in the space
- Partner with the executive team and other business leaders to deliver cross-functional research work and models
Required Skills:
- A strong background in classical machine learning and machine learning deployments is a must, preferably with 4-8 years of experience
- Knowledge of deep learning & NLP
- Hands-on experience with TensorFlow/PyTorch, Scikit-Learn, Python, Apache Spark, and Big Data platforms to manipulate large-scale structured and unstructured datasets.
- Experience with GPU computing is a plus.
- Professional experience as a data science leader, setting the vision for how to most effectively use data in your organization. This could be through technical leadership with ownership over a research agenda, or developing a team as a personnel manager in a new area at a larger company.
- Expert-level experience with a wide range of quantitative methods that can be applied to business problems.
- Evidence you’ve successfully been able to scope, deliver, and sell your own research in a way that shifts the agenda of a large organization.
- Excellent written and verbal communication skills on quantitative topics for a variety of audiences: product managers, designers, engineers, and business leaders.
- Fluent in data fundamentals: SQL, data manipulation using a procedural language, statistics, experimentation, and modeling
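For context on the ICD code recommendation problem described above, a deliberately simple baseline sketch treating it as text classification with scikit-learn; the notes and codes are toy placeholders, not Episource's actual models or data:

```python
# Sketch: a toy baseline for ICD code recommendation as multi-class
# text classification. Notes and labels are made-up placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "patient presents with type 2 diabetes, poorly controlled",
    "essential hypertension, continue lisinopril",
    "acute upper respiratory infection with cough",
    "type 2 diabetes mellitus without complications",
]
icd_codes = ["E11.9", "I10", "J06.9", "E11.9"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(notes, icd_codes)
# On this toy data the overlap on "hypertension" should favor I10
print(model.predict(["follow-up for hypertension"]))
```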
Qualifications
- Professional experience as a data science leader, setting the vision for how to most effectively use data in your organization
- Expert-level experience with machine learning that can be applied to business problems
- Evidence you’ve successfully been able to scope, deliver, and sell your own work in a way that shifts the agenda of a large organization
- Fluent in data fundamentals: SQL, data manipulation using a procedural language, statistics, experimentation, and modeling
- Degree in a field that has very applicable use of data science/statistics techniques (e.g., statistics, applied math, computer science, or a science field with direct statistics application)
- 5+ years of industry experience in data science and machine learning, preferably at a software product company
- 3+ years of experience managing data science teams, incl. managing/grooming managers beneath you
- 3+ years of experience partnering with executive staff on data topics